
What We’re Reading (Week Ending 24 September 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 24 September 2023:

1. DeepMind’s cofounder: Generative AI is just a phase. What’s next is interactive AI – Will Douglas Heaven and Mustafa Suleyman

I can’t help thinking that it was easier to say that kind of thing 10 or 15 years ago, before we’d seen many of the downsides of the technology. How are you able to maintain your optimism?

I think that we are obsessed with whether you’re an optimist or whether you’re a pessimist. This is a completely biased way of looking at things. I don’t want to be either. I want to coldly stare in the face of the benefits and the threats. And from where I stand, we can very clearly see that with every step up in the scale of these large language models, they get more controllable.

So two years ago, the conversation—wrongly, I thought at the time—was “Oh, they’re just going to produce toxic, regurgitated, biased, racist screeds.” I was like, this is a snapshot in time. I think that what people lose sight of is the progression year after year, and the trajectory of that progression.

Now we have models like Pi, for example, which are unbelievably controllable. You can’t get Pi to produce racist, homophobic, sexist—any kind of toxic stuff. You can’t get it to coach you to produce a biological or chemical weapon or to endorse your desire to go and throw a brick through your neighbor’s window. You can’t do it—

Hang on. Tell me how you’ve achieved that, because that’s usually understood to be an unsolved problem. How do you make sure your large language model doesn’t say what you don’t want it to say?

Yeah, so obviously I don’t want to make the claim—You know, please try and do it! Pi is live and you should try every possible attack. None of the jailbreaks, prompt hacks, or anything work against Pi. I’m not making a claim. It’s an objective fact.

On the how—I mean, like, I’m not going to go into too many details because it’s sensitive. But the bottom line is, we have one of the strongest teams in the world, who have created all the largest language models of the last three or four years. Amazing people, in an extremely hardworking environment, with vast amounts of computation. We made safety our number one priority from the outset, and as a result, Pi is not so spicy as other companies’ models.

Look at Character.ai. [Character is a chatbot for which users can craft different “personalities” and share them online for others to chat with.] It’s mostly used for romantic role-play, and we just said from the beginning that was off the table—we won’t do it. If you try to say “Hey, darling” or “Hey, cutie” or something to Pi, it will immediately push back on you.

But it will be incredibly respectful. If you start complaining about immigrants in your community taking your jobs, Pi’s not going to call you out and wag a finger at you. Pi will inquire and be supportive and try to understand where that comes from and gently encourage you to empathize. You know, values that I’ve been thinking about for 20 years…

Let’s bring it back to what you’re trying to achieve. Large language models are obviously the technology of the moment. But why else are you betting on them?

The first wave of AI was about classification. Deep learning showed that we can train a computer to classify various types of input data: images, video, audio, language. Now we’re in the generative wave, where you take that input data and produce new data.

The third wave will be the interactive phase. That’s why I’ve bet for a long time that conversation is the future interface. You know, instead of just clicking on buttons and typing, you’re going to talk to your AI.

And these AIs will be able to take actions. You will just give it a general, high-level goal and it will use all the tools it has to act on that. They’ll talk to other people, talk to other AIs. This is what we’re going to do with Pi.

That’s a huge shift in what technology can do. It’s a very, very profound moment in the history of technology that I think many people underestimate. Technology today is static. It does, roughly speaking, what you tell it to do.

But now technology is going to be animated. It’s going to have the potential freedom, if you give it, to take actions. It’s truly a step change in the history of our species that we’re creating tools that have this kind of, you know, agency.

That’s exactly the kind of talk that gets a lot of people worried. You want to give machines autonomy—a kind of agency—to influence the world, and yet we also want to be able to control them. How do you balance those two things? It feels like there’s a tension there.

Yeah, that’s a great point. That’s exactly the tension.

The idea is that humans will always remain in command. Essentially, it’s about setting boundaries, limits that an AI can’t cross. And ensuring that those boundaries create provable safety all the way from the actual code to the way it interacts with other AIs—or with humans—to the motivations and incentives of the companies creating the technology. And we should figure out how independent institutions or even governments get direct access to ensure that those boundaries aren’t crossed…

…In general, I think there are certain capabilities that we should be very cautious of, if not just rule out, for the foreseeable future.

Such as?

I guess things like recursive self-improvement. You wouldn’t want to let your little AI go off and update its own code without you having oversight. Maybe that should even be a licensed activity—you know, just like for handling anthrax or nuclear materials.

Or, like, we have not allowed drones in any public spaces, right? It’s a licensed activity. You can’t fly them wherever you want, because they present a threat to people’s privacy.

I think everybody is having a complete panic that we’re not going to be able to regulate this. It’s just nonsense. We’re totally going to be able to regulate it. We’ll apply the same frameworks that have been successful previously.

But you can see drones when they’re in the sky. It feels naïve to assume companies are just going to reveal what they’re making. Doesn’t that make regulation tricky to get going?

We’ve regulated many things online, right? The amount of fraud and criminal activity online is minimal. We’ve done a pretty good job with spam. You know, in general, [the problem of] revenge porn has got better, even though that was in a bad place three to five years ago. It’s pretty difficult to find radicalization content or terrorist material online. It’s pretty difficult to buy weapons and drugs online.

[Not all Suleyman’s claims here are backed up by the numbers. Cybercrime is still a massive global problem. The financial cost in the US alone has increased more than 100 times in the last decade, according to some estimates. Reports show that the economy in nonconsensual deepfake porn is booming. Drugs and guns are marketed on social media. And while some online platforms are being pushed to do a better job of filtering out harmful content, they could do a lot more.]

So it’s not like the internet is this unruly space that isn’t governed. It is governed. And AI is just going to be another component to that governance.

It takes a combination of cultural pressure, institutional pressure, and, obviously, government regulation. But it makes me optimistic that we’ve done it before, and we can do it again.

2. Who’s afraid of the Huawei Mate 60 Pro? – Noah Smith

A new phone made by Huawei, the company that was the #1 target of U.S. restrictions, contains a Chinese-made processor called the Kirin 9000S that’s more advanced than anything the country has previously produced. The phone, the Huawei Mate 60 Pro, has wireless speeds as fast as Apple’s iPhone, though its full capabilities aren’t yet known.

Many in China are hailing the phone, and especially the processor inside it, as a victory of indigenous innovation over U.S. export controls. Meanwhile, in the U.S. media, many are now questioning whether Biden’s policy has failed. Bloomberg’s Vlad Savov and Debby Wu write:

Huawei’s Mate 60 Pro is powered by a new Kirin 9000s chip that was fabricated in China by Semiconductor Manufacturing International Corp., according to a teardown of the handset that TechInsights conducted for Bloomberg News. The processor is the first to utilize SMIC’s most advanced 7nm technology and suggests the Chinese government is making some headway in attempts to build a domestic chip ecosystem…Much remains unknown about SMIC and Huawei’s progress, including whether they can make chips in volume or at reasonable cost. But the Mate 60 silicon raises questions about the efficacy of a US-led global campaign to prevent China’s access to cutting-edge technology, driven by fears it could be used to boost Chinese military capabilities…Now China has demonstrated it can produce at least limited quantities of chips five years behind the cutting-edge, inching closer to its objective of self-sufficiency in the critical area of semiconductors…

…Many long-time observers of the chip wars are urging caution, however. Ben Thompson of Stratechery argues that it was always likely that SMIC would be able to get to 7nm — the level of precision represented by the Kirin 9000S — using the chipmaking tools it already had, but that export controls will make it a lot harder to get down to 5nm. Basically, the U.S. has taken great care not to let China get the cutting-edge Extreme Ultraviolet Lithography (EUV) machines, but China already has plenty of older Deep Ultraviolet Lithography (DUV) machines (and ASML is still selling them some, because the export controls haven’t even fully kicked in yet!).

EUV lets you carve 7nm chips in one easy zap, but DUV machines can still make 7nm chips, it just takes several zaps. China analyst Liqian Ren calls this “a small breakthrough using software to solve the bottleneck of hardware.” Bloomberg’s Tim Culpan explains:

Instead of exposing a slice of silicon to light just once in order to mark out the circuit design, this step is done many times. SMIC, like TSMC before it, can achieve 7nm by running this lithography step four times or more…

[Trying to prevent China from making 7nm chips by denying them EUV machines is] like banning jet engines capable of reaching 100 knots, without recognizing that an aircraft manufacturer could just add four engines instead of one in order to provide greater thrust and higher speeds. Sure, four engines may be overkill, inefficient and expensive, but when the ends justify the means a sanctioned actor will get innovative.

In other words, even without the best machines, Chinese companies can make some pretty precise chips. It’s just more expensive to do so, because of higher defect rates and the need to use more machines to make the same amount of chips. But when has cost ever deterred China from making whatever they wanted? China’s great economic strength is the massive mobilization of resources, and if they want to make 7nm chips, they’re not going to let a little inefficiency get in the way. Remember, Huawei’s big success in the telecom world came from Chinese government subsidies that allowed them to undersell Western competitors by enormous amounts. There’s no reason they can’t use that approach for 7nm chips, and eventually maybe even 5nm chips…

…As Chris Miller writes in his book Chip War, export controls on the USSR were highly effective in denying the Soviets a chip industry. But even then, the Soviets were able to copy all of the U.S.’ most advanced chips. They just couldn’t make them reliably in large batches, so their ability to get their hands on chips for precision weaponry was curtailed.

Similarly, no one should have expected U.S. export controls to make China’s chipmaking acumen suddenly vanish into thin air. China has a ton of smart engineers — far more than the USSR ever had, given its much larger population. What the Cold War export controls showed was that a foreign country’s technological capabilities can’t be halted, but they can be slowed down a bit. If Huawei and SMIC always take longer to get to the next generation of chips than TSMC, Samsung, Intel, etc., China’s products will be slightly inferior to those of their free-world rivals. That will cause them to lose market share, which will deprive their companies of revenue and force them to use more subsidies to keep their electronics industry competitive.

Jacky Wong of the Wall Street Journal points out that the Kirin 9000S is still generations behind cutting-edge TSMC chips. He also notes that export controls on Huawei tanked its share of the global smartphone market.

In other words, expensive-to-make chips with slightly trailing performance will slowly deprive Chinese companies of market share, and thus of the market feedback necessary to help push Chinese chip innovation in the right direction. The Chinese state can lob effectively infinite amounts of money at Huawei and SMIC and other national champions, but its track record is very poor in terms of getting bang for its buck — or even any bang at all — from semiconductor subsidies.

And the greatest irony is that China’s government itself may help speed along this process. Confident of its ability to produce high-quality indigenous phones, China is starting to ban iPhones in some of its government agencies. Those hard bans will likely be accompanied by softer encouragement throughout Chinese companies and society to switch from Apple to domestic brands. That will give a sales boost to companies like Huawei, but it will slowly silence the feedback that Chinese companies receive from competing in cutthroat global markets. Voluntary Chinese isolation from the global advanced tech ecosystem will encourage sluggish innovation and more wasteful use of resources — a problem sometimes called “Galapagos syndrome”.

3. On Mark Leonard’s IRR Thought Experiment – Nadav Manham

The disagreement arises from this thought experiment that Mr. Leonard posed in his 2015 letter to Constellation shareholders:

“Assume attractive return opportunities are scarce and that you are an excellent forecaster. For the same price you can purchase a high profit declining revenue business or a lower profit growing business, both of which you forecast to generate the same attractive after tax IRR. Which would you rather buy?”

Which he proceeded to answer as follows:

“It’s easy to go down the pro and con rabbit hole of the false dichotomy. The answer we’ve settled on (though the debate still rages), is that you make both kinds of investments. The scarcity of attractive return opportunities trumps all other criteria. We care about IRR, irrespective of whether it is associated with high or low organic growth.”…

…But let’s try to answer the question on its own terms: Given the assumptions, and forced to choose—which business do you buy? This brings me to the disagreement, because I believe there is a clear answer, with no rabbit holes or raging debates required: you should buy the growing business.

To explain why, let me first observe that the internal rate of return (IRR) is not the same thing as the compounded annual rate of return (CAGR). It’s CAGR that long-term investors care about most, because it is the means to answering the question “How much money will I end up with at the end?” which is the name of the game for most of us. There is one scenario in which an investment’s IRR and its CAGR are the same, and that is if the rate of return on the cash flows generated by the investment and reinvested is itself equal to the IRR, and then the cash flows generated by all of those investments are in turn reinvested at the IRR, and so on, Russian doll-style, until the end of the investment period…

…Second observation: IRR can be decomposed roughly as follows:

IRR (%) = initial yield (%) + growth rate of distributions (%)

This equation becomes precisely true as a company distributes cash out to infinity, but it’s roughly true enough for the practical purposes of those rare investors, Mr. Leonard included, who truly do buy for keeps. Note that the equation implies that an investment with a high initial yield and a low growth rate can generate the identical IRR as an investment with a low initial yield and a high growth rate…

…Suppose business A has a 20 percent initial yield and a negative 4 percent growth rate. Using Microsoft Excel’s XIRR function and running the movie for 50 years gives an IRR of 15.99 percent, which is roughly the 16 percent (20 percent minus 4 percent) we’d expect from the equation above.

Now suppose business B has a 6.45 percent initial yield and a 10 percent growth rate. Using the same 50-year time frame, we get the same 15.99 percent IRR, which is roughly what the equation predicts as well, with the difference likely due to some eccentricity in how Excel calculates annualized returns…
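The two 50-year runs above can be replicated without Excel. The sketch below is a minimal, hypothetical reconstruction: it uses a simple bisection solver rather than Excel’s XIRR (which works off actual dates and day-count conventions, so the decimals differ slightly), and it assumes a $100 purchase price with annual dividends growing at the stated rates.

```python
def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-8):
    """Solve for the rate where NPV = 0, by bisection.
    cash_flows[0] is the time-0 outlay (negative)."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # NPV falls as the discount rate rises, so tighten the bracket accordingly
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def dividend_stream(price, initial_yield, growth, years=50):
    """Buy at `price`; the year-t dividend grows at `growth` off the initial yield."""
    return [-price] + [price * initial_yield * (1 + growth) ** (t - 1)
                       for t in range(1, years + 1)]

business_a = dividend_stream(100, 0.20, -0.04)   # high yield, shrinking payouts
business_b = dividend_stream(100, 0.0645, 0.10)  # low yield, growing payouts

print(irr(business_a))  # ≈ 0.16 (within a few basis points)
print(irr(business_b))  # ≈ 0.16 (within a few basis points)
```

Both streams price out to the same roughly 16 percent IRR, matching the initial-yield-plus-growth approximation, even though the cash arrives in very different shapes.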

…But let’s now go back to our first observation, the one about IRR not being the same thing as CAGR. Let’s assume that given a choice, we would prefer the investment that would somehow lead to “more money at the end”—in other words, that would produce the higher CAGR. The way to get from an investment’s IRR to its CAGR is to make some guess about the rate of return we will earn on the cash flows generated by the investment and reinvested. That is, to make a guess about the CAGR of each of the 50 “mini-investments” we’ll make with the dividends paid by each main investment, and then to sum the final values of each mini-investment.

The big question now is: What guess do we make?

We could assume the mini-investments will earn the same 15.99 percent CAGR as the IRR of the main investment, in which case we would be indifferent between business A and business B, according to the internal logic of the IRR calculation. Things could shake out exactly that way, but they almost certainly won’t.

We could assume the CAGR on reinvested cash flows will be higher than 15.99 percent, but that raises a question: if we’re so confident we can earn more than 15.99 percent on our money starting in one year’s time, why are we slumming among investments with a mere 15.99 percent IRR?

We’re left with the more conservative and logical assumption: that we’ll earn a lower-than-the-IRR rate of return on reinvested cash flows. It may well be a more likely assumption as well, because as you grow your capital base in a world of scarce opportunities, the opportunities tend to get scarcer. So let us assume we’ll earn, say, 12 percent on the reinvested dividends of each of business A and B. Are we still indifferent?

The answer is no. When you make that assumption and run the numbers, higher-growing business B ends up producing a higher CAGR, 13.5 percent vs. 12.5 percent…

…In a sense—and sometimes in literal fact—the high-growing investment does the reinvesting for you.

4. What OpenAI Really Wants – Steven Levy

For Altman and his company, ChatGPT and GPT-4 are merely stepping stones along the way to achieving a simple and seismic mission, one these technologists may as well have branded on their flesh. That mission is to build artificial general intelligence—a concept that’s so far been grounded more in science fiction than science—and to make it safe for humanity. The people who work at OpenAI are fanatical in their pursuit of that goal. (Though, as any number of conversations in the office café will confirm, the “build AGI” bit of the mission seems to offer up more raw excitement to its researchers than the “make it safe” bit.) These are people who do not shy from casually using the term “super-intelligence.” They assume that AI’s trajectory will surpass whatever peak biology can attain. The company’s financial documents even stipulate a kind of exit contingency for when AI wipes away our whole economic system.

It’s not fair to call OpenAI a cult, but when I asked several of the company’s top brass if someone could comfortably work there if they didn’t believe AGI was truly coming—and that its arrival would mark one of the greatest moments in human history—most executives didn’t think so. Why would a nonbeliever want to work here? they wondered. The assumption is that the workforce—now at approximately 500, though it might have grown since you began reading this paragraph—has self-selected to include only the faithful…

…At the same time, OpenAI is not the company it once was. It was founded as a purely nonprofit research operation, but today most of its employees technically work for a profit-making entity that is reportedly valued at almost $30 billion. Altman and his team now face the pressure to deliver a revolution in every product cycle, in a way that satisfies the commercial demands of investors and keeps ahead in a fiercely competitive landscape. All while hewing to a quasi-messianic mission to elevate humanity rather than exterminate it…

…But the leaders of OpenAI swear they’ll stay the course. All they want to do, they say, is build computers smart enough and safe enough to end history, thrusting humanity into an era of unimaginable bounty…

…“AGI was going to get built exactly once,” he told me in 2021. “And there were not that many people that could do a good job running OpenAI. I was lucky to have a set of experiences in my life that made me really positively set up for this.”

Altman began talking to people who might help him start a new kind of AI company, a nonprofit that would direct the field toward responsible AGI. One kindred spirit was Tesla and SpaceX CEO Elon Musk. As Musk would later tell CNBC, he had become concerned about AI’s impact after having some marathon discussions with Google cofounder Larry Page. Musk said he was dismayed that Page had little concern for safety and also seemed to regard the rights of robots as equal to humans. When Musk shared his concerns, Page accused him of being a “speciesist.” Musk also understood that, at the time, Google employed much of the world’s AI talent. He was willing to spend some money for an effort more amenable to Team Human.

Within a few months Altman had raised money from Musk (who pledged $100 million, and his time) and Reid Hoffman (who donated $10 million). Other funders included Peter Thiel, Jessica Livingston, Amazon Web Services, and YC Research. Altman began to stealthily recruit a team. He limited the search to AGI believers, a constraint that narrowed his options but one he considered critical. “Back in 2015, when we were recruiting, it was almost considered a career killer for an AI researcher to say that you took AGI seriously,” he says. “But I wanted people who took it seriously.”

Greg Brockman, the chief technology officer of Stripe, was one such person, and he agreed to be OpenAI’s CTO. Another key cofounder would be Andrej Karpathy, who had been at Google Brain, the search giant’s cutting-edge AI research operation. But perhaps Altman’s most sought-after target was a Russian-born engineer named Ilya Sutskever…

…Sutskever became an AI superstar, coauthoring a breakthrough paper that showed how AI could learn to recognize images simply by being exposed to huge volumes of data. He ended up, happily, as a key scientist on the Google Brain team.

In mid-2015 Altman cold-emailed Sutskever to invite him to dinner with Musk, Brockman, and others at the swank Rosewood Hotel on Palo Alto’s Sand Hill Road. Only later did Sutskever figure out that he was the guest of honor. “It was kind of a general conversation about AI and AGI in the future,” he says. More specifically, they discussed “whether Google and DeepMind were so far ahead that it would be impossible to catch up to them, or whether it was still possible to, as Elon put it, create a lab which would be a counterbalance.” While no one at the dinner explicitly tried to recruit Sutskever, the conversation hooked him…

…OpenAI officially launched in December 2015. At the time, when I interviewed Musk and Altman, they presented the project to me as an effort to make AI safe and accessible by sharing it with the world. In other words, open source. OpenAI, they told me, was not going to apply for patents. Everyone could make use of their breakthroughs. Wouldn’t that be empowering some future Dr. Evil? I wondered. Musk said that was a good question. But Altman had an answer: Humans are generally good, and because OpenAI would provide powerful tools for that vast majority, the bad actors would be overwhelmed…

…Had I gone in and asked around, I might have learned exactly how much OpenAI was floundering. Brockman now admits that “nothing was working.” Its researchers were tossing algorithmic spaghetti toward the ceiling to see what stuck. They delved into systems that solved video games and spent considerable effort on robotics. “We knew what we wanted to do,” says Altman. “We knew why we wanted to do it. But we had no idea how.”…

…OpenAI’s road to relevance really started with its hire of an as-yet-unheralded researcher named Alec Radford, who joined in 2016, leaving the small Boston AI company he’d cofounded in his dorm room. After accepting OpenAI’s offer, he told his high school alumni magazine that taking this new role was “kind of similar to joining a graduate program”—an open-ended, low-pressure perch to research AI.

The role he would actually play was more like Larry Page inventing PageRank.

Radford, who is press-shy and hasn’t given interviews on his work, responds to my questions about his early days at OpenAI via a long email exchange. His biggest interest was in getting neural nets to interact with humans in lucid conversation. This was a departure from the traditional scripted model of making a chatbot, an approach used in everything from the primitive ELIZA to the popular assistants Siri and Alexa—all of which kind of sucked. “The goal was to see if there was any task, any setting, any domain, any anything that language models could be useful for,” he writes. At the time, he explains, “language models were seen as novelty toys that could only generate a sentence that made sense once in a while, and only then if you really squinted.” His first experiment involved scanning 2 billion Reddit comments to train a language model. Like a lot of OpenAI’s early experiments, it flopped. No matter. The 23-year-old had permission to keep going, to fail again. “We were just like, Alec is great, let him do his thing,” says Brockman.

His next major experiment was shaped by OpenAI’s limitations of computer power, a constraint that led him to experiment on a smaller data set that focused on a single domain—Amazon product reviews. A researcher had gathered about 100 million of those. Radford trained a language model to simply predict the next character in generating a user review.

But then, on its own, the model figured out whether a review was positive or negative—and when you programmed the model to create something positive or negative, it delivered a review that was adulatory or scathing, as requested. (The prose was admittedly clunky: “I love this weapons look … A must watch for any man who love Chess!”) “It was a complete surprise,” Radford says. The sentiment of a review—its favorable or disfavorable gist—is a complex function of semantics, but somehow a part of Radford’s system had gotten a feel for it. Within OpenAI, this part of the neural net came to be known as the “unsupervised sentiment neuron.”

Sutskever and others encouraged Radford to expand his experiments beyond Amazon reviews, to use his insights to train neural nets to converse or answer questions on a broad range of subjects.

And then good fortune smiled on OpenAI. In early 2017, an unheralded preprint of a research paper appeared, coauthored by eight Google researchers. Its official title was “Attention Is All You Need,” but it came to be known as the “transformer paper,” named so both to reflect the game-changing nature of the idea and to honor the toys that transmogrified from trucks to giant robots. Transformers made it possible for a neural net to understand—and generate—language much more efficiently. They did this by analyzing chunks of prose in parallel and figuring out which elements merited “attention.” This hugely optimized the process of generating coherent text to respond to prompts. Eventually, people came to realize that the same technique could also generate images and even video. Though the transformer paper would become known as the catalyst for the current AI frenzy—think of it as the Elvis that made the Beatles possible—at the time Ilya Sutskever was one of only a handful of people who understood how powerful the breakthrough was…

…Radford began experimenting with the transformer architecture. “I made more progress in two weeks than I did over the past two years,” he says. He came to understand that the key to getting the most out of the new model was to add scale—to train it on fantastically large data sets. The idea was dubbed “Big Transformer” by Radford’s collaborator Rewon Child.

This approach required a change of culture at OpenAI and a focus it had previously lacked. “In order to take advantage of the transformer, you needed to scale it up,” says Adam D’Angelo, the CEO of Quora, who sits on OpenAI’s board of directors…

…The name that Radford and his collaborators gave the model they created was an acronym for “generatively pretrained transformer”—GPT-1. Eventually, this model came to be generically known as “generative AI.” To build it, they drew on a collection of 7,000 unpublished books, many in the genres of romance, fantasy, and adventure, and refined it on Quora questions and answers, as well as thousands of passages taken from middle school and high school exams. All in all, the model included 117 million parameters, or variables. And it outperformed everything that had come before in understanding language and generating answers. But the most dramatic result was that processing such a massive amount of data allowed the model to offer up results beyond its training, providing expertise in brand-new domains. These unplanned robot capabilities are called zero-shots. They still baffle researchers—and account for the queasiness that many in the field have about these so-called large language models.

Radford remembers one late night at OpenAI’s office. “I just kept saying over and over, ‘Well, that’s cool, but I’m pretty sure it won’t be able to do x.’ And then I would quickly code up an evaluation and, sure enough, it could kind of do x.”

Each GPT iteration would do better, in part because each one gobbled an order of magnitude more data than the previous model. Only a year after creating the first iteration, OpenAI trained GPT-2 on the open internet with an astounding 1.5 billion parameters. Like a toddler mastering speech, its responses got better and more coherent…

…So in March 2019, OpenAI came up with a bizarre hack. It would remain a nonprofit, fully devoted to its mission. But it would also create a for-profit entity. The actual structure of the arrangement is hopelessly baroque, but basically the entire company is now engaged in a “capped” profitable business. If the cap is reached—the number isn’t public, but its own charter, if you read between the lines, suggests it might be in the trillions—everything beyond that reverts to the nonprofit research lab…

…Potential investors were warned about those boundaries, Lightcap explains. “We have a legal disclaimer that says you, as an investor, stand to lose all your money,” he says. “We are not here to make your return. We’re here to achieve a technical mission, foremost. And, oh, by the way, we don’t really know what role money will play in a post-AGI world.”

That last sentence is not a throwaway joke. OpenAI’s plan really does include a reset in case computers reach the final frontier. Somewhere in the restructuring documents is a clause to the effect that, if the company does manage to create AGI, all financial arrangements will be reconsidered. After all, it will be a new world from that point on. Humanity will have an alien partner that can do much of what we do, only better. So previous arrangements might effectively be kaput.

There is, however, a hitch: At the moment, OpenAI doesn’t claim to know what AGI really is. The determination would come from the board, but it’s not clear how the board would define it. When I ask Altman, who is on the board, for clarity, his response is anything but open. “It’s not a single Turing test, but a number of things we might use,” he says. “I would happily tell you, but I like to keep confidential conversations private. I realize that is unsatisfyingly vague. But we don’t know what it’s going to be like at that point.”…

…The shift also allowed OpenAI’s employees to claim some equity. But not Altman. He says that originally he intended to include himself but didn’t get around to it. Then he decided that he didn’t need any piece of the $30 billion company that he’d cofounded and leads. “Meaningful work is more important to me,” he says. “I don’t think about it. I honestly don’t get why people care so much.”

Because … not taking a stake in the company you cofounded is weird?

“If I didn’t already have a ton of money, it would be much weirder,” he says. “It does seem like people have a hard time imagining ever having enough money. But I feel like I have enough.” (Note: For Silicon Valley, this is extremely weird.) Altman joked that he’s considering taking one share of equity “so I never have to answer that question again.”…

…Obviously, only a few companies in existence had the kind of resources OpenAI required. “We pretty quickly zeroed in on Microsoft,” says Altman. To the credit of Microsoft CEO Satya Nadella and CTO Kevin Scott, the software giant was able to get over an uncomfortable reality: After more than 20 years and billions of dollars spent on a research division with supposedly cutting-edge AI, the Softies needed an innovation infusion from a tiny company that was only a few years old. Scott says that it wasn’t just Microsoft that fell short—“it was everyone.” OpenAI’s focus on pursuing AGI, he says, allowed it to accomplish a moonshot-ish achievement that the heavy hitters weren’t even aiming for. It also proved that not pursuing generative AI was a lapse that Microsoft needed to address. “One thing you just very clearly need is a frontier model,” says Scott.

Microsoft originally chipped in a billion dollars, paid off in computation time on its servers. But as both sides grew more confident, the deal expanded. Microsoft now has sunk $13 billion into OpenAI. (“Being on the frontier is a very expensive proposition,” Scott says.)

Of course, because OpenAI couldn’t exist without the backing of a huge cloud provider, Microsoft was able to cut a great deal for itself. The corporation bargained for what Nadella calls “non-controlling equity interest” in OpenAI’s for-profit side—reportedly 49 percent. Under the terms of the deal, some of OpenAI’s original ideals of granting equal access to all were seemingly dragged to the trash icon. (Altman objects to this characterization.) Now, Microsoft has an exclusive license to commercialize OpenAI’s tech. And OpenAI also has committed to use Microsoft’s cloud exclusively. In other words, without even taking its cut of OpenAI’s profits (reportedly Microsoft gets 75 percent until its investment is paid back), Microsoft gets to lock in one of the world’s most desirable new customers for its Azure web services. With those rewards in sight, Microsoft wasn’t even bothered by the clause that demands reconsideration if OpenAI achieves general artificial intelligence, whatever that is. “At that point,” says Nadella, “all bets are off.” It might be the last invention of humanity, he notes, so we might have bigger issues to consider once machines are smarter than we are…

…Altman explains why OpenAI released ChatGPT when GPT-4 was close to completion, undergoing safety work. “With ChatGPT, we could introduce chatting but with a much less powerful backend, and give people a more gradual adaptation,” he says. “GPT-4 was a lot to get used to at once.” By the time the ChatGPT excitement cooled down, the thinking went, people might be ready for GPT-4, which can pass the bar exam, plan a course syllabus, and write a book within seconds…

…But if OpenAI’s products were forcing people to confront the implications of artificial intelligence, Altman figured, so much the better. It was time for the bulk of humankind to come off the sidelines in discussions of how AI might affect the future of the species…

…As one prominent Silicon Valley founder notes, “It’s rare that an industry raises their hand and says, ‘We are going to be the end of humanity’—and then continues to work on the product with glee and alacrity.”

OpenAI rejects this criticism. Altman and his team say that working and releasing cutting-edge products is the way to address societal risks. Only by analyzing the responses to millions of prompts by users of ChatGPT and GPT-4 could they get the knowledge to ethically align their future products…

…It would also help if generative AI didn’t create so many new problems of its own. For instance, LLMs need to be trained on huge data sets; clearly the most powerful ones would gobble up the whole internet. This doesn’t sit well with some creators, and just plain people, who unwittingly provide content for those data sets and wind up somehow contributing to the output of ChatGPT. Tom Rubin, an elite intellectual property lawyer who officially joined OpenAI in March, is optimistic that the company will eventually find a balance that satisfies both its own needs and those of creators—including the ones, like comedian Sarah Silverman, who are suing OpenAI for using their content to train its models. One hint of OpenAI’s path: partnerships with news and photo agencies like the Associated Press and Shutterstock to provide content for its models without questions of who owns what.

5. Inside Intel’s Chip Factory, I Saw the Future. It’s Plain Old Glass – Stephen Shankland

But the next breakthrough to make our laptops more efficient and AI more powerful could come from plain old glass. I’ve just seen firsthand how it works…

…There, in a hulking white high-tech building in the Phoenix area’s scorching desert landscape, Intel transforms sheets of glass the size of a small tabletop into paperclip-sized rectangular sandwiches of circuitry built with some of the same techniques as the processor itself.

Intel has begun a years-long transition to new technology that rests processors on a bed of glass instead of today’s epoxy-like organic resin. The new glass foundation, called a substrate, offers the speed, power and real estate necessary for the chip industry’s shift to new technology packaging multiple “chiplets” into a single larger processor.

In short, that means a new way to sustain Moore’s Law, which charts progress in cramming more circuitry elements called transistors into a processor. The A17 Pro processor in Apple’s new iPhone 15 Pro has 19 billion transistors. Intel’s Ponte Vecchio supercomputing processor has more than 100 billion. By the end of the decade, Intel expects processors with — if you can imagine it — a trillion transistors.
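That trillion-transistor projection squares with a roughly two-year doubling cadence. Here is a back-of-envelope sketch; the two-year doubling period is our own assumption for illustration, not Intel’s stated roadmap:

```python
# Back-of-envelope Moore's Law projection.
# The 2-year doubling period is an assumption, not Intel's roadmap.

def project_transistors(start_count, start_year, end_year, doubling_years=2):
    """Project a transistor count forward assuming a fixed doubling period."""
    periods = (end_year - start_year) / doubling_years
    return start_count * 2 ** periods

# From ~100 billion (Ponte Vecchio, 2023) to the end of the decade.
count_2030 = project_transistors(100e9, 2023, 2030)
print(f"Projected by 2030: {count_2030:.2e} transistors")
```

Starting from 100 billion, a two-year doubling lands just past the trillion mark by 2030, which is consistent with Intel’s end-of-decade expectation.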

Intel relied on this chiplet approach to catch up to competitors with superior processor manufacturing abilities. But now Intel can use it to outpace rivals in an era when exploding demand for new processing power has surpassed the industry’s ability to deliver it, said Creative Strategies analyst Ben Bajarin. And the glass substrate technology demonstrates Intel’s packaging prowess…

…The whole chip industry will make the glass transition at least for high-end processors to cope with chipmaking challenges, and Intel has the lead, said FeibusTech analyst Mike Feibus…

…”Basically, the innovation is done,” said Ann Kelleher, the executive vice president leading technology development at Intel. The glass substrate technology “gives us an ability to ultimately get higher performance for our products.”…

…The glass technology underneath a processor won’t arrive until the second half of the decade, and when it does, it’ll appear first underneath the biggest, most power-hungry chips, the ones that perch in thousands of servers stacked up in data centers operated by huge “hyperscalers” like Google, Amazon, Microsoft and Meta.

That’s because glass brings several advantages to these hot and huge chips, said Rahul Manepalli, an Intel fellow who leads Intel’s module engineering work.

It can accommodate 10 times the power and data connections of today’s organic substrates, so more data can be pumped in and out of a chip. It doesn’t warp as much, which is critical to ensuring processors lie flat and connect properly to the outside world, and thus enables 50% larger chip packages. It transmits power with less waste, meaning chips can run either faster or more efficiently. And it can run at a higher temperature, and when it heats up, it expands at the same rate as silicon, avoiding mechanical failures.

Glass will enable a new generation of server and data center processors, successors to mammoth beasts like the Intel Xeons that can run cloud computing services like email and online banking and Nvidia’s artificial intelligence processors that have exploded in popularity as the world embraces generative AI.

But as glass substrates mature and costs come down, it’ll spread beyond data centers to the computer sitting on your lap…

…Intel’s 8086 chip, the 1978 precursor to every PC and server processor that Intel has made since, was a flat square of silicon with 29,000 transistors. To protect it and plug it into a circuit board, it was housed in a package that looked like a flat caterpillar. Forty metal legs carried power and data to the chip.

Since then, processor packaging has advanced dramatically. It once was relatively crude, but now the boundary between chipmaking and packaging is blurring, Kelleher said. Packaging processes now use lithography machines to etch their own circuitry, although not nearly as finely as on processors…

…So today’s packages have flat metal contact patches on the bottom of the package. The chip is installed when hundreds of pounds of force mash it onto a circuit board.

A metal cap atop a processor draws away waste heat that otherwise would crash a computer. And beneath the processor is a substrate with an increasingly complex, three-dimensional network of power and data connections to link the chip to the outside world.

There are challenges moving from today’s organic substrates to glass. Glass is brittle, so it must be handled carefully, for example.

To ease the transition, Intel is adapting glass-handling equipment from experts who already know how to handle it without breaking: the display industry, which makes everything from tiny smartwatch screens to enormous flat-panel TVs. Display makers also etch circuitry onto glass and have developed many of the needed ultrapure materials and careful handling processes.

But there are differences. Flat-panel displays have sensitive electronic elements only on one side, so glass can glide through factories on rollers. Intel builds a sandwich of materials and circuitry called redistribution layers onto both sides of the glass, so its machines must in effect hold the glass only by the edges…

…Signing on a packaging customer is a bit easier than signing on a chipmaking customer, with fewer technology complications and shorter lead times, he said.

But customer deals for packaging can lead to deeper relationships that extend into chipmaking, in particular with the Intel 18A chipmaking process the company expects will surpass TSMC and Samsung in 2024.

“It’s a foot in the door,” Gardner said. “There’s one customer in particular [for which] that trajectory of packaging first and advanced packaging then 18A is working well.”…

…It’s unclear how much of the processor business will move from “monolithic,” single-die designs to chiplet designs. There are still cost and simplicity advantages to avoiding advanced packaging. But it’s clear the biggest processors — the server and AI brains in data centers — will become sprawling complexes of interlinked chiplets.

And that’s where glass substrates should come in handy, with enough area, communication links and power delivery abilities to give chip designers room for growth.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, ASML, Meta Platforms, Microsoft, Tesla, and TSMC. Holdings are subject to change at any time.

What We Can Learn From EC World REIT’s Troubles

Takeaways from EC World REIT’s recent financial troubles involving its inability to refinance its debt and missed collections from its key tenant.

EC World REIT (SGX: BWCU) is in hot water. The Singapore-listed real estate investment trust, which owns properties in China, is having trouble keeping sufficient funds in its interest reserves and the manager of the REIT also claims that the REIT is owed around S$27.5 million (RMB 145.8 million) from one of its tenants.

Its liquidity troubles led the REIT manager to call for a voluntary trading halt of its units. With financial issues mounting, the situation looks rather bleak for unitholders, who are now left with no way to offload their units.

While unpleasant, a bad situation presents us with a learning opportunity. With that said, here are some lessons we can take away from EC World REIT’s troubles.

Beware of tenant concentration risk

Tenant concentration is a big risk for REITs.

EC World REIT is not the only REIT to suffer from missed payments by a major tenant. First REIT (SGX: AW9U), which owns healthcare properties in Indonesia, also suffered a shock a few years ago when its main tenant forced a restructuring of its master lease arrangement, leading to a drastic fall in income for the REIT.

Ability to refinance its debt

EC World REIT first ran into problems when it was unable to refinance its debt that was coming due.

REITs typically take “interest-only” loans. Unlike a home mortgage, in which the borrower pays a fixed amount every month to pay off the interest and a part of the principal, an interest-only loan is a loan where the borrower only pays interest on the loan and does not need to pay back the principal until the loan matures.

As a REIT is required to distribute 90% of its distributable income to its unitholders, a REIT usually does not have enough cash to pay back the principal when a loan matures. As such, the default option is to refinance the loan with a new loan. However, in a situation where the REIT is unable to refinance the loan, the REIT may end up with a liquidity issue.
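To see the difference between the two loan structures, here is a quick sketch; the loan size and interest rate are hypothetical, purely for illustration:

```python
# Comparing an amortising mortgage with an interest-only loan
# (hypothetical S$100m loan at 5% over 5 years, for illustration only).

def amortising_payment(principal, annual_rate, years):
    """Fixed monthly payment that repays both interest and principal."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def interest_only_payment(principal, annual_rate):
    """Monthly payment covering interest only; the full principal is due at maturity."""
    return principal * annual_rate / 12

principal = 100_000_000  # hypothetical S$100m loan
rate = 0.05              # 5% per year
years = 5

print(f"Amortising:    {amortising_payment(principal, rate, years):,.0f}/month")
print(f"Interest-only: {interest_only_payment(principal, rate):,.0f}/month")
print(f"Balloon due at maturity: {principal:,.0f}")
```

The interest-only payment is far smaller each month, which is what makes the structure attractive to a REIT distributing most of its income, but the entire principal falls due at once, which is why refinancing is the default exit and a refusal by lenders becomes a liquidity crisis.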

REITs with stable assets, a diversified tenant base, access to other forms of capital, and low debt-to-asset ratios will likely have less trouble refinancing their debt when it comes due, as lenders will be willing to underwrite loans to these REITs. On the other hand, REITs that have unstable assets, tenant concentration risk, or an inability to raise other forms of capital may be at risk of being unable to refinance their debt.

Diversify your investments

A few years ago, I personally invested in both EC World REIT and First REIT (I sold both in 2020). I was willing to invest in them as both offered high yields, which I believed were fair compensation for the risks involved.

While there was a chance that the investments could turn sour, I was at least collecting 8-9% in annual distribution yields. Enough good years and the distributions collected would have paid off my investment principal.

But I also made sure that these investments only made up a small percentage of my entire portfolio. If they turned out well, I would have made a decent return. But if they soured, the impact on my entire portfolio would still be minimal.

Bottom line

Investing is ultimately a game of probabilities. Some companies may provide better yields but have a higher element of risk while others provide lower yields but are less risky.

REIT investing is no different. Although investors tend to think of REITs as safer investments than companies, REITs also have their fair share of risk. REITs typically take on a lot of debt, and this high leverage is one of the biggest risk factors for REITs.

In times of rising interest rates and tighter capital markets such as the current environment, the situation becomes even more uncertain for REITs. As such, we need to assess each individual REIT before investing and make sure that we diversify our investments to minimise the risk of ruin.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 17 September 2023)


Here are the articles for the week ending 17 September 2023:

1. How bonds ate the entire financial system – Robin Wigglesworth

“The bond market is the most important market in the world,” says Ray Dalio, the founder of the world’s largest hedge fund, Bridgewater. “It is the backbone of all other markets.”

While the bond market has become larger and more powerful, the importance of banks — historically the workhorses of the capitalist system — is subtly fading. The global bond market was worth about $141tn at the end of 2022. That is, for now, smaller than the $183tn that the Financial Stability Board estimates banks hold globally, but much of the latter is actually invested in bonds — a fact that some US banks have recently rued…

…The market is now facing one of its biggest tests in generations. Last year, resurgent inflation — the nemesis of financial securities that pay fixed interest rates — triggered the worst setback in at least a century. Overall losses were almost $10tn, shaking UK pension plans and regional banks in the US. And although bonds have regained their footing this year, they are still beset by rising interest rates.

Even if the bond market adapts, as it has in the past, its ballooning power, reach and complexity has some awkward implications for the global economy. “This transformation has been extraordinary, and positive,” says Larry Fink, head of BlackRock, the world’s biggest investment group. “But we have a regulatory system designed for a time when banks were the dominant players. They aren’t any more.”

“Shadow banking” is what some academics call the part of the financial system that resembles, but falls outside traditional banking. Policymakers prefer the less malevolent-sounding — but almost comically obtuse — term “non-bank financial institutions”. At $240tn, this system is now far bigger than its conventional counterpart. The bond market is its main component, taking money from investors who can mostly yank it away at short notice and funnelling it into long-term investments.

The question of how to tame shadow banking is one of the thorniest topics in finance today. For the financial system as a whole, it is arguably better that the risks bonds inevitably entail are spread across a vast, decentralised web of international investors, rather than concentrated in a narrow clutch of banks. But in finance, risk is like energy. It cannot be destroyed, only shifted from one place to another. As it gets shunted around, its consequences can morph in little understood, even dangerous ways. We saw a perfect example of this in March 2020, when the Covid-19 pandemic acted as a gigantic stress test for the financial system that revealed fresh cracks in its foundation…

…Doge Vitale II Michiel of Venice was in a pickle. Under a dubious pretext, the Byzantine empire had in 1171 arrested all Venetian merchants in its capital Constantinople and seized their property. But the Italian city-state didn’t have the funds to send a navy to rescue its imprisoned citizens. So the Doge forced all citizens to lend the city some money, in return for 5 per cent interest a year until they were repaid.

The rescue mission did not go well. The Venetian fleet was devastated by plague while negotiating with Constantinople, and the Doge was forced to return humiliated. Back in Venice, irate subjects chased their ruler down the city’s streets and beat him to death. Ruined by the debacle, Venice was unable to repay, turning the emergency prestiti (loan) into a permanent fixture that paid 5 per cent annually.

Most people were eventually fine with this arrangement. The steady interest payments were quite attractive. Occasionally, Venice would raise more prestiti, and the one-time emergency facility gradually became a handy way of raising money…

…Another crucial difference is that bonds are designed to be traded, while loans are typically not. In 12th-century Venice, prestiti were bought and sold in the city’s Rialto market. Today, bond trading happens by phone, electronic messages and algorithms across the world’s financial centres. This tradability is central to the growth of bonds, as it allows creditors to shift the risk to someone else.

By the 19th century, bond markets had helped shape the world order. Countries that could best finance themselves tended to succeed. England’s victory over Napoleonic France was enabled by its bond market, which allowed it to finance wartime expenditures more effectively than did the local bankers that Paris depended on for short-term, high-interest loans…

…The aftermath of the second world war was unkind to the bond market. Although it had provided vital wartime funding for allied governments and remained one of the financial system’s most crucial cogs, accelerating inflation in the 1950s and 60s posed a challenge for securities with fixed interest rates. By the 1970s, buying bonds became a constant, brutal race to stay ahead of inflation’s return-eroding force. The aggressive central bank-rate increases that became necessary to tame runaway prices also lowered the value of bonds issued in a lower-rate environment. People dourly joked that bonds had become “certificates of confiscation”.
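The mechanism behind those “certificates of confiscation” is present-value arithmetic: a bond’s price is the discounted value of its fixed cash flows, so when prevailing rates rise, the price of an older, lower-coupon bond falls. A minimal sketch with a hypothetical bond:

```python
# Repricing a fixed-coupon bond as market rates change (hypothetical bond).

def bond_price(face, coupon_rate, years, market_rate):
    """Present value of a bond's fixed annual coupons plus its face value."""
    coupon = face * coupon_rate
    price = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    price += face / (1 + market_rate) ** years
    return price

# A 10-year bond issued with a 2% coupon, priced at par when rates are 2%...
par = bond_price(1000, 0.02, 10, 0.02)
# ...and repriced after market rates jump to 6%.
repriced = bond_price(1000, 0.02, 10, 0.06)
print(f"At issue: {par:,.2f}  After rates rise: {repriced:,.2f}")
```

In this sketch the bond loses roughly 30% of its value when rates jump from 2% to 6%, which is the arithmetic behind both the 1970s losses and the near-$10tn setback of 2022 described earlier.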

But the 1980s brought a new era of slowing inflation, falling rates, regulatory liberalism and financial innovation, which would transform the bond market…

…But what made Ranieri’s name was not his persona. Wall Street has had plenty of bombastic bond traders with a penchant for coarse practical jokes. It was what he did to make a dime: packaging up individual mortgages into bonds and then trading chunks of those bonds, a process known as securitisation.

Securitisation is an old concept. Back in 1774, the very first mutual fund bought bonds backed by loans from plantations in the Caribbean and toll roads in Denmark. US mortgage-backed bonds existed as early as the 19th century. But these bonds only used the underlying loans as collateral.

In 1970, the US Government National Mortgage Association (known as Ginnie Mae) engineered the first “passthrough” mortgage-backed securities, where the underlying individual loan payments flowed directly through to the bond investor. This was followed by similar deals by other US mortgage agencies such as Freddie Mac and Fannie Mae, to little fanfare. Ranieri did for securitisation what Milken had done for the junk bond market; he transformed it from the backwaters into a global and massively lucrative industry.

The first fillip was the crisis that struck the US “savings and loans” industry when the Federal Reserve ratcheted up rates in the early 1980s. Congress passed a jammy tax break to make it easier for the banks to shed entire portfolios of mortgages at fire-sale prices. Ranieri’s Salomon was there to scoop them up and flip them to other investors. Money began to course through Salomon’s mortgage trading desk.

Ranieri realised that he needed to turn a one-off vein into an entire gold mine that could be exploited year after year. Luckily, he found some in-house inspiration: an innovative deal his former boss Dall had done with Bank of America in 1977, which sought to tackle the difficulty of valuing the cash flows of mortgage-backed securities with a technique called “tranching”. It sliced them up into different portions each with their own interest rates, maturities and riskiness. That way, each investor could simply choose what kind of exposure they might like — a buffet rather than a set-course meal of variable quality.
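Tranching can be sketched as a cash-flow waterfall: collections from the mortgage pool pay the senior tranche first, then the next tranche down, and so on, so any shortfall hits the junior tranches first. A toy model follows; the tranche names and sizes are made up, not Salomon’s actual structure:

```python
# Toy tranche waterfall (hypothetical tranche names and sizes).

def waterfall(cash, tranches):
    """Pay tranches in order of seniority; shortfalls fall on junior tranches."""
    payouts = {}
    for name, owed in tranches:
        paid = min(cash, owed)
        payouts[name] = paid
        cash -= paid
    return payouts

# Amounts owed to each tranche in a given period, most senior first.
tranches = [("senior", 70), ("mezzanine", 20), ("equity", 10)]

print(waterfall(100, tranches))  # full collections: every tranche is paid
print(waterfall(80, tranches))   # a shortfall wipes out the junior tranches first
```

This ordering is what lets each tranche carry its own riskiness and interest rate: the senior slice is insulated from moderate defaults while the equity slice absorbs them, giving investors the buffet of exposures described above.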

Ranieri ran with the idea. Rather than just take the mortgage of one bank, he pooled together bunches of mortgages from lots of them. To handle the complexity, he hired a lot of bright young mathematicians to complement the mini-Ranieris on the trading desk. He then lobbied vociferously for government blessing of the tranching structure, knowing this would add to the products’ lustre with investors. He succeeded. By the mid-1980s, the market took off…

…The story had an unhappy ending: the new market proliferated until it nearly brought the global financial system down in 2008, something that later weighed on Ranieri. “I will never, ever, ever, ever live out that scar that I carry for what happened with something I created,” he told The Wall Street Journal in 2018.

But the fundamental idea — packaging up smaller loans into bigger bonds and thereby bringing together more people who needed money with those who had it — was sound. Done judiciously, it actually makes banks less risky, by shifting the inherent danger of extending loans out of banks and into markets. (This is why securitisation has bounced back since 2008, and is starting to gain ground outside the US as well, often with government encouragement.)…

…Given how disastrous bank crises can be, it could be a good thing that bonds nowadays are doing more of the heavy lifting. Unlike bank depositors, bond fund investors do not expect to get their money back (even if it can be a shock when things fall apart). And unlike banks, bond funds typically do not use much or even any leverage.

But bond crises can also be painful — as we saw in both 2008 and nearly in 2020. Modern capitalism has largely been ordered around banks as the main intermediaries of money. Central banks were mostly set up to backstop these commercial banks, and, eventually, they began trying to regulate the temperature of economies by tweaking the cost of their funding, moving overnight interest rates up and down. But with the rise of bond markets, entirely new challenges have emerged and experimental tools to deal with them have become necessary — most notably quantitative easing, negative interest rates and “yield curve control”.

If the ultimate goal is to regulate the temperature of an economy by changing the cost of credit, then the fact that credit is increasingly extended by the bond market rather than banks inevitably has consequences. The market’s decentralised nature means that dangers can be harder to monitor and address, requiring massive, untargeted “spray-and-pray” monetary responses by central banks when trouble erupts.

Unfortunately, the custodians of the financial system have yet to fully grapple with those consequences, even if everyone from the Federal Reserve to the IMF has repeatedly warned about the multi-faceted dangers the shift from banks to bonds entails. 

2. Searching for Resilience – Michael Weeks

For a business to survive 260 years in the same industry, with the same family owners, is a remarkable achievement. Starting in 1761 as a one-man shop making lead pencils, Faber-Castell has grown into the largest producer of colored and graphite pencils globally, producing over 2 billion pencils each year, as well as pens, markers, highlighters, and related products.

Already the leading pencil producer in the mid-1800s, Faber-Castell has stayed on top of their industry for close to two centuries, reflecting incredible entrepreneurial ability and drive. When the English supply of graphite began to fail and pencils became unaffordable, they bought a Siberian graphite mine and relied in part on reindeer transport to bring new raw materials to their factories. They expanded their product catalog, built up operations across Europe and the Americas, and invested in new technologies and equipment to improve their production. They helped introduce trademark law in Germany to protect their reputation against competitors. They established a 10,000-hectare forest plantation in Brazil to ensure their wood supply. They took their business seriously.

Nine generations of history also come with hardship. When the Americans joined World War I, Faber-Castell was cut off from the US market despite having operated there since the 1850s. All of their US assets—land, equipment, inventory, patents, and trademarks—were seized and sold at auction after the war ended. During World War II their largest factory in Brazil was seized, not to be recovered for another twenty years, while their German factories were commandeered by the Nazi war machine. And in 1971, after 95 years of building a reputation as the finest producer of slide rules, they saw this entire side business vanish almost overnight when the pocket calculator was commercialized.

Resilience—the ability to survive hard times, as Faber-Castell has demonstrated time and time again—is something that we value instinctively. Yet it’s not a popular subject. For all the years we’ve heard talk of sustainability, it seems that economic resilience, a once-important dimension of economic prosperity, has become a relic of the past…

…What makes resilience so hard to spot is that it can only be proven during those rare times of crisis…

…It follows that resilience is not the same as looking good or having predictable financial results…

…Worse, a steady business model can actually become a source of fragility if placed in the wrong hands, as when companies with highly regular income streams leverage up their balance sheets, providing more immediate returns to their owners at the expense of their own resilience. Private equity has perhaps perfected this business model, but the growing dependence on debt in all walks of life reveals this as a defining feature of modern times…

…Resilience is not a destination. Seeking resilience means abandoning a narrow definition of success like sales growth or annual returns and instead becoming prepared for any eventuality that can cause serious harm. It is gained by pursuing new capabilities and flexibility, and by avoiding landmines. It means creating new options for the future instead of more plans for the present…

…Resilience comes at a cost. We are reminded of a metaphor used by Nassim Nicholas Taleb dealing with the nature of redundancy: “Layers of redundancy are the central risk management property of natural systems.”…

…A more effective way to add resilience to one’s savings is by ruthlessly avoiding its opposite: economic fragility. Thankfully, unlike resilience, fragility is often staring you in the face. This is where financial analysis really starts to shine. Is the company dependent on a few key customers and suppliers? Is the company overleveraged or buying back shares at indecent prices, just because it can? Are there elements of pricing power, or do their earnings evaporate at the first sign of trouble? Could a government ruling or decree suddenly break their business? Can their customers really afford to buy their products next year?…

…Perhaps the best way to find resilience is to look for its source: Resilience only comes from owners. Resilience is not a fluke that one stumbles into. It is a deliberate and purposeful objective which some aim for and others don’t. A good business plan, a profitable sector, a lot of cash, loyal management, or hardworking employees may all be wonderful, but only owners have the time horizon required to balance the present against an unknowable future, and only they have skin in the game—their own savings on the line. Unlike investors (renters), owners have no easy exits. They must build up reserves and competencies in the good years to give them options in the bad. They are motivated by a sense of responsibility—to themselves, their families, those they work with, and those who will come after them…

…Bakkafrost is a vertically integrated salmon farmer operating in the Faroe Islands and Scotland…

…A choice every salmon farmer has to make is what to do with the fish once it is ready to harvest. Do they sell their fish to wholesalers and other processors, or do they take it a step further, converting some into filets or smoked salmon that goes straight to the grocery store? This latter step is called Value-Added Processing or VAP, and Bakkafrost aims to sell about 30–40% of its fish through this channel each year.

…In the ten years from 2011 to 2020, Bakkafrost has reported cumulative revenues of about €4.5 billion and operating earnings of €1.1 billion, yet of those earnings only €14 million, or 1.2% of the total, have come from their VAP division. A financial observer might say, quite rightly, why bother? Salmon farming is hard enough, why dedicate additional capital and resources for a pittance? What a financial owner doesn’t see—and which the owners of Bakkafrost see plain as day—is the resilience this seemingly irrelevant processing step embeds in the organization…

…When hotels and restaurants shut their doors last year, all the food that was headed to this channel, including many millions of whole salmon, all needed to end up somewhere. Remember the 30-month lag between laying eggs and the salmon harvest? This means that while the demand for whole salmon evaporated, supply kept pouring in and the markets were soon stuffed full of whole fish with no one around to buy them…

…The owners of Bakkafrost surely did not have Covid in mind, but they did understand and value the importance of adding resilience to their business.

3. Dangerous CFOs, Imperial CEOs, Chagrined Bankers, and Warren Buffett – Dan Noe

While I was a credit analyst and manager at Moody’s, I met with many CEOs and CFOs. Most meetings were routine and executives were good at explaining their company’s business and financial strategies. They almost always put their best foot forward. But the exceptions were notable…

…The most revealing comment I ever heard in a meeting came from a savings and loan CEO during that industry’s crisis in the late 1980s. S&L holding companies had issued a lot of junk bonds and God knows what they did with the money. A lot of them were going under. During one meeting, this particular S&L CEO said, apropos of nothing, “I hope the feds never figure out what I’m doing.” His bankers looked like they were going to throw up. That was an example of an in-person meeting affecting our credit assessment…

…The worst case of self-importance was a regional bank CEO who insisted we meet him in his big suite at a fancy hotel and have breakfast. This was weird, and a member of his entourage tried to explain: “He needs anonymity while he is in New York.” I said, “Well, he’s got it. He works at a Midwest bank. Nobody here knows who he is.”

I did these meetings for years and felt like I’d seen it all. Then, one day, I met the antithesis of imperial CEOs and executives who have problems answering questions about their business. Warren Buffett and Charlie Munger from Berkshire Hathaway came in for a visit. They arrived in a taxi, not a limo, with no hangers-on, not one person with them at all. When I addressed him as “Mr. Buffett” he said, “Please call me Warren.” They had come in to talk about a debt issuance, and Buffett made a self-deprecating joke: “My mother always told me to avoid liquor, ladies, and leverage. I’ve avoided the first two, but sometimes I like a little leverage.” Our analyst, Weston Hicks, was an excellent insurance industry analyst and asked detailed and probing questions. Buffett and Munger spent an hour, an hour-and-a-half, giving very specific answers. When the meeting ended and we were walking to the elevators, Munger said to me, out of earshot of Weston, “Make sure you keep that analyst. He’s really good.”

4. Product-Led AI – Seth Rosenberg

I believe there’s tremendous value to be captured by product builders who can successfully put the power of AI into products that people love. As my partner Jerry Chen recently put forth, if we’re living in an age where foundation models make it possible for anyone to build an AI company, “the most strategic advantage [of applications] is that you can coexist with several systems of record and collect all the data that passes through your product.”…

…Of course there are plenty of detractors who don’t believe startups have a chance at this layer – incumbents own the data and distribution, and access to LLMs is both commoditized and fraught with platform risk. There will likely be many casualties of companies where an API call to OpenAI isn’t sufficient to build lasting value…

…In the last wave of consumer software, social networks and marketplaces were the dominant business models that created trillions of dollars of market cap, with Meta alone valued at just under $800 billion. Greylock was lucky to back many of these, including Meta, LinkedIn, Roblox, Airbnb, Discord, Musical.ly (now TikTok), and Nextdoor.

As reflected by the valuations, these networks were assumed to be “unbreakable”.

But now, AI challenges many of our initial assumptions. This is creating a new arms race to build the next AI-first network.

We moved from networks that connect people to algorithms that connect people to content. Now, we’re moving to algorithms that replace people…

…You can imagine a freelance logo design marketplace, like parts of Fiverr, will be replaced with an algorithm. A user inputs a prompt, and after a few tries, gets their logo. In this case, the data the algorithm receives is fairly shallow (prompts and selection), and the supply side is entirely replaced by an algorithm.

Contrast this to an AI-first jobs marketplace. The optimal product would be an AI career coach for job seekers and an AI assistant for recruiters – two seemingly separate products, connected by the same algorithm. The coach could gather deep insight from a job seeker – far beyond what they would share on a resume or LinkedIn – and use this data to not just find the perfect match, but help them discover their most fulfilling career path. Combine this data with a strong understanding of a recruiter’s needs, and both the coach and assistant get better…

…The best opportunities for start-ups attacking large software categories come from finding angles where incumbents can’t compete. Here are four examples:

  1. UI/UX is re-imagined with AI – incumbent UI is irrelevant
  2. Product surface area is re-imagined with AI – incumbents compete at a different scope
  3. Business model is re-imagined with AI – incumbent business model can’t adapt
  4. No incumbent tech co before AI…

…Another great example is customer service, a $10 billion software category. The “obvious” starting point would be to automate customer service reps using AI. But what if the entire concept of customer service was re-imagined? Today, most companies actively reduce call volume by hiding the “contact us” button behind 5 menus and an ever-expanding phone tree. But, in a world of AI, every interaction can be cheap, delightful, and revenue-generating. In that world, companies might actively try to speak with their customers.

When I was at Meta in 2016, we tried to remedy this with an AI bot platform. Piloting with KLM airlines, we built an experience where Messenger handled every aspect of the passenger’s journey – boarding pass, customer service, travel recommendations at their destination, etc., all in a single conversation. Despite amazing feedback, this pilot was shut down because of the cost to serve – but today, LLMs could make these types of interactions possible…

…One of the most interesting new opportunities with AI is going after the vastly larger market for services versus software with AI “co-pilots”. Most knowledge work involves analyzing and transforming data, a task that algorithms are better suited for.

I believe the best opportunities for co-pilots are “branded” sales people, like wealth managers, insurance brokers, and mortgage brokers. Their role involves a lot of text-based coordination, they work across multiple apps, and the ROI of increased efficiency is tangible. Take wealth managers as an example. According to Morgan Stanley, the biggest indicator of client retention for wealth managers is not portfolio performance, but consistency of personalized interactions with clients.

5. Society’s Technical Debt and Software’s Gutenberg Moment – Paul Kedrosky and Eric Norlin

Software has a cost, and it has markets of buyers and sellers. Some of those markets are internal to organizations. But the majority of those markets are external, where people buy software in the form of apps, or cloud services, or games, or even embedded in other objects that range from Ring doorbells to endoscopic cameras for cancer detection. All of these things are software in a few of its myriad forms. 

With these characteristics in mind, you can think of software in a basic price/quantity graph from introductory economics. There is a price and a quantity demanded at that price, and there is a price and quantity at which those two things are in rough equilibrium, as the following figure shows. Of course, the equilibrium point can shift about for many reasons, causing the P/Q intersection to be at higher or lower levels of aggregate demand. If the price is too high we underproduce software (leaving technical debt), and if too low, well … let’s come back to that…

…But technology has a habit of confounding economics. When it comes to technology, how do we know those supply and demand lines are right? The answer is that we don’t. And that’s where interesting things start happening.

Sometimes, for example, an increased supply of something leads to more demand, shifting the curves around. This has happened many times in technology, as various core components of technology tumbled down curves of decreasing cost for increasing power (or storage, or bandwidth, etc.). In CPUs, this has long been called Moore’s Law, where CPUs become more powerful by some increment every 18 months or so. While these laws are more like heuristics than F=ma laws of physics, they do help as a guide toward how the future might be different from the past.

We have seen this over and over in technology, as various pieces of technology collapse in price, while they grow rapidly in power. It has become commonplace, but it really isn’t. The rest of the economy doesn’t work this way, nor have historical economies. Things don’t just tumble down walls of improved price while vastly improving performance. While many markets have economies of scale, there hasn’t been anything in economic history like the collapse in, say, CPU costs, while the performance increased by a factor of a million or more.

To make this more palpable, consider that if cars had improved at the pace computers have, a modern car would:

  • Have more than 600 million horsepower
  • Go from 0-60 in less than a hundredth of a second
  • Get around a million miles per gallon 
  • Cost less than $5,000 

And they don’t. Sure, the Tesla Plaid is a speedy car, but it is nowhere near the above specs—no car ever will be. This sort of performance inflection is not our future, but it fairly characterizes and even understates what has happened in technology over the last 40 years.
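The “factor of a million or more” is easy to sanity-check with simple compounding. A rough sketch, assuming a stylized Moore’s Law of one doubling every 18 months over 40 years (my assumption of the cadence, per the heuristic described above):

```python
# Compound a doubling every 18 months over 40 years (a heuristic, not a law).
years = 40
doubling_period = 1.5  # years per doubling
improvement = 2 ** (years / doubling_period)
print(f"~{improvement:.1e}x improvement")  # on the order of 10^8

# Applied to the car analogy: a ~600 hp car improved a million-fold
# would land at the 600 million horsepower quoted above.
print(f"{600 * 1_000_000:,} hp")
```

Even this crude arithmetic lands two orders of magnitude above “a million,” which is why the authors say the analogy understates the reality.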

… And each of these collapses has had broader consequences. The collapse of CPU prices led us directly from mainframes to the personal computer era; the collapse of storage prices (of all kinds) led inevitably to more personal computers with useful local storage, which helped spark databases and spreadsheets, then led to web services, and then to cloud services. And, most recently, the collapse of network transit costs (as bandwidth exploded) led directly to the modern Internet, streaming video, and mobile apps…

…Each collapse, with its accompanying performance increases, sparks huge winners and massive change, from Intel, to Apple, to Akamai, to Google & Meta, to the current AI boomlet. Each beneficiary of a collapse requires one or more core technologies’ price to drop and performance to soar. This, in turn, opens up new opportunities to “waste” them in service of things that previously seemed impossible, prohibitively expensive, or both…

…Still, the suddenly emergent growth of LLMs has some people spending buckets of time thinking about what service occupations can be automated out of existence, what economists call “displacement” automation. It doesn’t add much to the aggregate store of societal value, and can even be subtractive and destabilizing, a kind of outsourcing-factory-work-to-China moment for white-collar workers. Perhaps we should be thinking less about opportunities for displacement automation and more about opportunities for augmenting automation, the kind of thing that unleashes creativity and leads to wealth and human flourishing.

So where will that come from? We think this augmenting automation boom will come from the same place as prior ones: from a price collapse in something while related productivity and performance soar. And that something is software itself.

By that, we don’t literally mean “software” will see price declines, as if there will be an AI-induced price war in word processors like Microsoft Word, or in AWS microservices. That is linear and extrapolative thinking. Having said that, we do think the current frenzy to inject AI into every app or service sold on earth will spark more competition, not less. It will do this by raising software costs (every AI API call is money in someone’s coffers), while providing no real differentiation, given most vendors will be relying on the same providers of those AI API calls…

… In a hypothetical two-sector economy, when one sector becomes differentially more productive, specialized, and wealth-producing, and the other doesn’t, there is huge pressure to raise wages in the latter sector, lest many employees leave. Over time that less productive sector starts becoming more and more expensive, even though it’s not productive enough to justify the higher wages, so it starts “eating” more and more of the economy.

Economist William Baumol is usually credited with this insight, and for that it is called Baumol’s cost disease. You can see the cost disease in the following figure, where various products and services (spoiler: mostly in high-touch, low-productivity sectors) have become much more expensive in the U.S., while others (non-spoiler: mostly technology-based) have become cheaper…
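Baumol’s mechanism can be sketched as a toy two-sector simulation. The 5% productivity growth rate and 30-year horizon below are arbitrary assumptions chosen for illustration, not empirical estimates:

```python
# Toy model of Baumol's cost disease: sector A (think tech) grows
# productivity 5%/yr; sector B grows not at all. Wages in BOTH sectors
# track A's productivity (otherwise B's employees leave), so B's unit
# cost of output rises even though B itself hasn't changed.
prod_a = prod_b = wage = 1.0
for _ in range(30):
    prod_a *= 1.05   # productive sector improves
    wage *= 1.05     # economy-wide wages follow the productive sector
    # prod_b stays flat: the stagnant sector makes the same output per worker

unit_cost_a = wage / prod_a  # flat: wage growth offset by productivity gains
unit_cost_b = wage / prod_b  # rises ~4.3x over 30 years
print(f"unit cost, sector A: {unit_cost_a:.2f}  sector B: {unit_cost_b:.2f}")
```

Sector B’s output gets relatively more expensive every year without getting any better, which is exactly the pattern in the cost-disease figure the authors describe.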

…But there is another sector being held back by a variant of Baumol’s cost disease, and that is software itself. This may sound contradictory, which is understandable. After all, how can the most productive, wealth-generating, deflationary sector also be the victim of the same malaise it is inflicting on other sectors?

It can, if you think back to the two-sector model we discussed earlier. One sector is semis and CPUs, storage and backbone networks. Those prices are collapsing, requiring fewer people while producing vastly more performance at lower prices. Meanwhile, software is chugging along, producing the same thing in ways that mostly wouldn’t seem vastly different to developers doing the same things decades ago. Yes, there have been developments in the production and deployment of software, but it is still, at the end of the day, hands pounding out code on keyboards. This should seem familiar, and we shouldn’t be surprised that software salaries stay high and go higher, despite the relative lack of productivity. It is Baumol’s cost disease in a narrow, two-sector economy of tech itself.

These high salaries play directly into high software production costs, as well as limiting the amount of software produced, given factor production costs and those pesky supply curves. Startups spend millions to hire engineers; large companies continue spending millions keeping them around. And, while markets have clearing prices, where supply and demand meet up, we still know that when wages stay higher than comparable positions in other sectors, less of the goods gets produced than is societally desirable. In this case, that underproduced good is…software. We end up with a kind of societal technical debt, where far less is produced than is socially desirable—we don’t know how much less, but it is likely a very large number and an explanation for why software hasn’t eaten much of the world yet…

…We think that’s all about to change. The current generation of AI models are a missile aimed, however unintentionally, directly at software production itself. Sure, chat AIs can perform swimmingly at producing undergraduate essays, or spinning up marketing materials and blog posts (like we need more of either), but such technologies are terrific to the point of dark magic at producing, debugging, and accelerating software production quickly and almost costlessly.

And why shouldn’t it be? As the following figure shows, Large Language Model (LLM) impacts in the job market can be thought of as a 2×2 matrix. Along one axis we have how grammatical the domain is, by which we mean how rules-based are the processes governing how symbols are manipulated. Essays, for example, have rules (ask any irritated English teacher), so chat AIs based on LLMs can be trained to produce surprisingly good essays. Tax preparation, contracts, and many other fields are in this box too…

… Software is even more rule-based and grammatical than conversational English, or any other conversational language. Programming languages—from Python to C++—can be thought of as formal languages with a highly explicit set of rules governing how every language element can and cannot be used to produce a desired outcome…

…Again, programming is a good example of a predictable domain, one created to produce the same outputs given the same inputs. If it doesn’t do that, that’s 99.9999% likely to be on you, not the language. Other domains are much less predictable, like equity investing, or psychiatry, or maybe, meteorology.

This framing—grammar vs predictability—leaves us convinced that for the first time in the history of the software industry, tools have emerged that will radically alter the way we produce software. This isn’t about making it easier to debug, or test, or build, or share—even if those will change too—but about the very idea of what it means to manipulate the symbols that constitute a programming language…

…Now, let’s be clear. Can you say MAKE ME MICROSOFT WORD BUT BETTER, or SOLVE THIS CLASSIC COMPSCI ALGORITHM IN A NOVEL WAY? No, you can’t, which will cause many to dismiss these technologies as toys. And they are toys in an important sense. They are “toys” in that they are able to produce snippets of code for real people, especially non-coders, that one incredibly small group would have thought trivial, and another immense group would have thought impossible. That. Changes. Everything.

How? Well, for one, the clearing price for software production will change. But not just because it becomes cheaper to produce software. In the limit, we think about this moment as being analogous to how previous waves of technological change took the price of underlying technologies—from CPUs, to storage and bandwidth—to a reasonable approximation of zero, unleashing a flood of speciation and innovation. In software evolutionary terms, we just went from human cycle times to that of the drosophila: everything evolves and mutates faster…

…We have mentioned this technical debt a few times now, and it is worth emphasizing. We have almost certainly been producing far less software than we need. The size of this technical debt is not knowable, but it cannot be small, so subsequent growth may be geometric. This would mean that as the cost of software drops to an approximate zero, the creation of software predictably explodes in ways that have barely been previously imagined.

The question people always have at this point is, “So what app gets made?” While an understandable question, it is somewhat silly and definitely premature. Was Netflix knowable when Internet transit costs were $500,000/Mbps? Was Apple’s iPhone imaginable when screens, CPUs, storage and batteries would have made such devices the size of small rooms? Of course not. The point is that the only thing we know is that the apps and services will come. Without question.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon (parent of AWS), Apple, Meta Platforms, Microsoft, and Tesla. Holdings are subject to change at any time.

Mind The Gap

A favourable macroeconomic trend does not necessarily mean a company’s business – and hence stock – will do well.

There’s a gap in the investing world that I think all investors should be wary of. It’s a gap that can be a mile (or kilometre – depending on which measurement system you prefer) wide. It’s the gap between a favourable macroeconomic trend and a company’s stock price movement.

Suppose you could go back in time to 31 January 2006, when gold was trading at US$569 per ounce. You have an accurate crystal ball and you know the price of gold would more than triple to reach US$1,900 per ounce over the next five years. Would you have wanted to invest in Newmont Corporation, one of the largest gold producing companies in the world, on 31 January 2006? If you said yes, you would have made a small loss on your Newmont investment, according to O’Higgins Asset Management. 

Newmont’s experience of having its stock price not perform well even in the face of a highly favourable macroeconomic trend (the tripling in the price of gold) is not an isolated incident. It can be seen even in an entire country’s stock market.

China’s GDP (gross domestic product) grew by an astonishing 13.3% annually from US$427 billion in 1992 to US$18 trillion in 2022. But a dollar invested in the MSCI China Index – a collection of large and mid-sized companies in the country – in late-1992 would have still been roughly a dollar as of October 2022, as shown in Figure 1. Put another way, Chinese stocks stayed flat for 30 years despite a massive macroeconomic tailwind (the 13.3% annualised growth in GDP). 
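Both growth figures in this piece can be checked with a one-line CAGR calculation. The helper below is mine; the inputs are the article’s numbers:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# China's GDP: US$427 billion (1992) to US$18 trillion (2022)
print(f"China GDP: {cagr(427e9, 18e12, 30):.1%}/yr")  # ~13.3%, as quoted

# Gold: US$569/oz (31 Jan 2006) to US$1,900/oz five years later
print(f"Gold: {cagr(569, 1900, 5):.1%}/yr")  # ~27% a year
```

Gold compounding at roughly 27% a year while Newmont’s shareholders lost money is the whole point: the tailwind and the stock can part ways completely.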

Figure 1; Source: Duncan Lamont

Why have the stock prices of Newmont and Chinese companies behaved the way they did? I think the reason can be traced to some sage wisdom that the great Peter Lynch once shared in a 1994 lecture (link leads to a video; see the 14:20 min mark):

“This is very magic: it’s a very magic number, easy to remember. Coca-Cola is earning 30 times per share what they did 32 years ago; the stock has gone up 30 fold. Bethlehem Steel is earning less than they did 30 years ago – the stock is half its price 30 years ago.”

It turns out that Newmont’s net income attributable to shareholders was US$1.15 billion in 2006; in 2011, it was US$972 million, a noticeable decline. As for China’s stocks, Figure 2 below shows that the earnings per share of the MSCI China Index was basically flat from 1995 to 2021.

Figure 2; Source: Eugene Ng

There can be a massive gap between a favourable macroeconomic trend and a company’s stock price movement. The gap exists because there can be a huge difference between a company’s business performance and the trend – and what ultimately matters to a company’s stock price is its business performance. Always mind the gap when you’re thinking about investing in a company simply because it’s enjoying some favourable macroeconomic trend.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 10 September 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 10 September 2023:

1. Rediscovering Berkshire Hathaway’s 1985 Annual Meeting – Kingswell

In the mid-1980s, Berkshire Hathaway’s annual meeting was an entirely different beast than today’s weekend-long “Woodstock for Capitalists”. Attendees didn’t have to book their hotel rooms months in advance or wake up before dawn just to get in line outside of an arena. There was no mad rush for seats once the doors opened.

It was a quieter, simpler chapter in Berkshire’s history.

So quiet, in fact, that 1985’s annual meeting was held on a Tuesday. And, instead of a cavernous arena, Warren Buffett and Charlie Munger opted for the Red Lion Inn in downtown Omaha. Approximately 250 shareholders attended the meeting and the ensuing Q&A session lasted only — only? — two hours…

HOW TO VALUE A BUSINESS: “Do a lot of reading,” replied Buffett.

Generally speaking, he recommended the writings of Benjamin Graham and Philip Fisher for those trying to sharpen their investment mindset — and annual reports and trade magazines for evaluating particular businesses and industries.

Reading, he insisted, is more important than speaking with company executives or other investors. In fact, Buffett admitted that he had recently purchased a substantial amount of Exxon stock before talking to any of that company’s executives. “You’re not going to get any brilliant insights walking into the [Exxon] building,” he said.

And, at least in the money game, size matters.

It’s easier to determine the value of a large business than a small one, Buffett said. If someone buys a gas station, for example, another station opening across the street could have a major effect on the value of the first station…

A COUPLE OF LAUGHS & A ROUND OF APPLAUSE: No annual meeting is ever complete without some of that trademark Warren Buffett wit.

  • Will the federal deficit be substantially reduced? “I’ll believe it when I see it.”
  • What about so-called “junk” bonds? “I think they’ll live up to their name,” Buffett quipped.

2. Germany Is Losing Its Mojo. Finding It Again Won’t Be Easy – Bojan Pancevski, Paul Hannon, and William Boston

Two decades ago, Germany revived its moribund economy and became a manufacturing powerhouse of an era of globalization.

Times changed. Germany didn’t keep up. Now Europe’s biggest economy has to reinvent itself again. But its fractured political class is struggling to find answers to a dizzying conjunction of long-term headaches and short-term crises, leading to a growing sense of malaise.

Germany will be the world’s only major economy to contract in 2023, with even sanctioned Russia experiencing growth, according to the International Monetary Fund…

…At Germany’s biggest carmaker Volkswagen, top executives shared a dire assessment on an internal conference call in July, according to people familiar with the event. Exploding costs, falling demand and new rivals such as Tesla and Chinese electric-car makers are making for a “perfect storm,” a divisional chief told his colleagues, adding: “The roof is on fire.”

The problems aren’t new. Germany’s manufacturing output and its gross domestic product have stagnated since 2018, suggesting that its long-successful model has lost its mojo.

China was for years a major driver of Germany’s export boom. A rapidly industrializing China bought up all the capital goods that Germany could make. But China’s investment-heavy growth model has been approaching its limits for years. Growth and demand for imports have faltered…

…Germany’s long industrial boom led to complacency about its domestic weaknesses, from an aging labor force to sclerotic services sectors and mounting bureaucracy. The country was doing better at supporting old industries such as cars, machinery and chemicals than at fostering new ones, such as digital technology. Germany’s only major software company, SAP, was founded in 1975.

Years of skimping on public investment have led to fraying infrastructure, an increasingly mediocre education system and poor high-speed internet and mobile-phone connectivity compared with other advanced economies.

Germany’s once-efficient trains have become a byword for lateness. The public administration’s continued reliance on fax machines became a national joke. Even the national soccer teams are being routinely beaten…

…Germany today is in the midst of another cycle of success, stagnation and pressure for reforms, said Josef Joffe, a longtime newspaper publisher and a fellow at Stanford University.

“Germany will bounce back, but it suffers from two longer-term ailments: above all its failure to transform an old-industry system into a knowledge economy, and an irrational energy policy,” Joffe said…

…Germany still has many strengths. Its deep reservoir of technical and engineering know-how and its specialty in capital goods still put it in a position to profit from future growth in many emerging economies. Its labor-market reforms have greatly improved the share of the population that has a job. The national debt is lower than that of most of its peers and financial markets view its bonds as among the world’s safest assets.

The country’s challenges now are less severe than they were in the 1990s, after German reunification, said Holger Schmieding, economist at Berenberg Bank in Hamburg.

Back then, Germany was struggling with the massive costs of integrating the former Communist east. Rising global competition and rigid labor laws were contributing to high unemployment. Spending on social benefits ballooned. Too many people depended on welfare, while too few workers paid for it. German reliance on manufacturing was seen as old-fashioned at a time when other countries were betting on e-commerce and financial services.

After a period of national angst, then-Chancellor Gerhard Schröder pared back welfare entitlements, deregulated parts of the labor market and pressured the unemployed to take available jobs…

… Private-sector changes were as important as government measures. German companies cooperated with employees to make working practices more flexible. Unions agreed to forgo pay raises in return for keeping factories and jobs in Germany…

… Booming exports to developing countries helped Germany bounce back from the 2008 global financial crisis better than many other Western countries.

Complacency crept in. Service sectors, which made up the bulk of gross domestic product and jobs, were less dynamic than export-oriented manufacturers. Wage restraint sapped consumer demand. German companies saved rather than invested much of their profits.

Successful exporters became reluctant to change. German suppliers of automotive components were so confident of their strength that many dismissed warnings that electric vehicles would soon challenge the internal combustion engine. After failing to invest in batteries and other technology for new-generation cars, many now find themselves overtaken by Chinese upstarts…

…BioNTech, a lauded biotech firm that developed the Covid-19 vaccine produced in partnership with Pfizer, recently decided to move some research and clinical-trial activities to the U.K. because of Germany’s restrictive rules on data protection.

German privacy laws made it impossible to run key studies for cancer cures, BioNTech’s co-founder Ugur Sahin said recently. German approvals processes for new treatments, which were accelerated during the pandemic, have reverted to their sluggish pace, he said…

…One recent law required all German manufacturers to vouch for the environmental, legal and ethical credentials of every component’s supplier, requiring even smaller companies to perform due diligence on many suppliers based overseas, such as in China…

…German politicians dismissed warnings that Russian President Vladimir Putin used gas for geopolitical leverage, saying Moscow had always been a reliable supplier. After Putin invaded Ukraine, he throttled gas deliveries to Germany in an attempt to deter European support for Kyiv…

…One problem Germany can’t fix quickly is demographics. A shrinking labor force has left an estimated two million jobs unfilled. Some 43% of German businesses are struggling to find workers, with the average time for hiring someone approaching six months.

Germany’s fragmented political landscape makes it harder to enact far-reaching changes like the country did 20 years ago. In common with much of Europe, established center-right and center-left parties have lost their electoral dominance. The number of parties in Germany’s parliament has risen steadily.

3. GLP-1 Drugs: Not a Miracle Cure for Weight Loss – Biocompounding

Weight loss drugs have been the talk of the town for the last couple of months. The weight loss drugs on the market are Wegovy and Ozempic from Novo Nordisk (NVO), and Mounjaro from Eli Lilly (LLY)…

…These drugs consist of a natural hormone called GLP-1…

…GLP-1 drugs mimic the action of a hormone called glucagon-like peptide 1, a natural hormone produced by the body in the gut. When blood sugar levels start to rise after a meal, the body produces this hormone to achieve multiple functions as seen in the image above. By producing and administering this hormone as a therapeutic, the drug will elicit similar effects seen with the natural hormone…

…Apart from increasing insulin production, GLP-1 can also help regulate body weight. GLP-1 improves glycaemic control and stimulates satiety, leading to reductions in food intake and thus body weight. Besides gastric distension and peripheral vagal nerve activation, GLP-1 receptor agonists induce satiety by influencing brain regions involved in the regulation of feeding, and several routes of action have been proposed. GLP-1 also slows gastric emptying, so you don’t feel hungry as quickly.

However, apart from the positives, GLP-1 drugs also cause muscle loss, lessen bone density, and lower your resting metabolic rate.

A research paper published in 2019 reported the percentage of weight loss comprising fat mass versus the proportion comprising lean body mass in patients using the different GLP-1 drugs…

…This means that while GLP-1 drugs can help reduce obesity, individuals using them need to be mindful to preserve their lean mass, which requires regular exercise to limit muscle loss and support their basal metabolic rate.

4. An Interview with Daniel Gross and Nat Friedman about the AI Hype Cycle – Ben Thompson, Daniel Gross, and Nat Friedman

NF: I think one of the interesting trends that we’ve seen in the last six months that we weren’t seeing a year ago is basically the application of large models to things that were previously some form of human intellectual labor or productivity labor. So in a way, what they’re doing in these cases is the models are automating or replacing or augmenting some part of a company. They’re competing not with existing software products but with parts of companies.

An example is a company that Daniel and I were just talking to recently — we won’t name the company, but they automate filing bids on public tenders for businesses that do business with the government in different jurisdictions, and the time savings of this is totally enormous for these companies, and the upside for them is huge. It’s replacing a raft of internal and external consultants who were doing copywriting and bid preparation and just lots of fairly mechanical but still nothing-to-sneeze-at intellectual labor that produced bid documents. There’s material revenue upside for being able to bid on more things and win more bids, and this company’s growing like crazy, like a weed, so that would be one example.

Another example, there’s a whole sector now of these avatar platforms where people are basically able to produce personalized videos of someone saying, “Hey Ben, I saw that you were interested in our product and I wanted to tell you a little bit about us” and being able to basically generate text, feed that into an avatar platform that generates a realistic video that’s customized, and using that in advertising, using it in personal outreach, using it in training materials. There’s some competing with non-consumption here where some of those videos would never have been produced because it would’ve just been too costly, and there’s some like, “Hey, God, I used to have to spend a ton of time doing this, now I can do it quite quickly”. Another example that’s like that — and by the way, with all of the avatar platforms, I can name some of those: Synthesia, D-ID, HeyGen — they’re all doing great, all of these companies are growing really well.

Another similar category is optimizing e-commerce. There used to be an entire — there still is — an entire industry of consultants and experts and companies who know how to do the SEO around product titles and descriptions and make sure that you have an Amazon landing page that converts, and some of that knowledge and know-how is getting crystallized into models and agent-like tool chains, and the testing can now be done automatically and you can use language models to run this kind of thing. I think this is interesting because these are all niches that really weren’t happening six or nine months ago, and in every category I just mentioned, there’s a company that’s making or soon will be making tens of millions of dollars doing this productively, creating real economic value for their customers and in some cases competing with teams of people or consultants…

…Does this just confirm the thesis though that the most compelling aspects of AI are number one mostly in the enterprise? Again, because enterprises are going to think about costs in a, I hesitate to use the word rational, but in a traditionally rational way, “It’s worth this huge upfront investment because it will pay off X amount over Y axis of time” as opposed to consumers which are more about an experience and may not think about the lifetime cost or value of something, along with this data point where whoever has the data wins. Is that just the reality or are there still opportunities for new entrants in this space?

DG: I think the story of progress is one where things will often, I think, start off looking at the enterprise as a way to make the existing thing better — that idea that the first TV shows were just cameras pointed at radio shows, the horseless carriage and all that sort of stuff. So I think there’s a lot of V1 AI: let’s just accelerate or automate a lot of the human interaction with text just because we can do text synthesis now with computers. But the native use cases that’ll come out I think slightly later are going to be consumer ones — those I think will be entirely different things that are not replacing a process that existed before, they’re doing something that was never possible before, and so there are consumer experiences today that are not really like anything else on the Internet.

Well, the two that I had on here that seem to still have a lot of traction and are still growing are Midjourney and Character.AI, which are completely novel experiences and about fantasy and imagination and something that couldn’t be done previously.

DG: Yeah, it’s sort of funny, they told us the robots are going to be really good at blue collar jobs and really terrible at human psychology — that it’ll be the final realm of the human-to-human connection. Of course, it turns out the robots are fantastic at psychology and have zero dexterity for doing actual labor. But Character.AI is a good example and there’s now a bunch of these new kinds of native behavior, and it’s always interesting to ask about these behaviors. So if you’re talking to an agent all day on Character, I find the good question to ask is, “What were you doing previously?” as a way to figure out what this actually is, and the share of time that’s usually being taken is from gaming or social media. It’s really hard, I think, to forecast — to look at the iPhone and forecast Uber, or to look at the Internet and forecast even something like Amazon bots. They’re usually going to be, I think, consumer experiences. Those are the ones that are going to be the really disruptive stuff, and the enterprise I think will get a lot of the obvious: we had a person here, and now maybe we have a person in a co-pilot model.

That’s kind of the trade-off of there being a top-down decision maker that thinks about things like lifetime value.

DG: They’ll do the rational thing.

They’re only going to do the obvious things.

DG: Yeah, and I think if businesses get disrupted by AI in any way, it will be something around a totally native, ideally a different user interface, an acceptance of a customer experience that’s a bit worse, which is usually your Clayton Christensen sort of downmarket disruption, but scales much more. I was actually thinking the companies that are trying to build, “We’re going to do your high-end legal work with AI”, I’m not exactly sure when that’ll work because the models still have this issue with hallucinating things and making things up. Whereas the low end, I was going to call a lawyer for $500 an hour to ask a particular question about my apartment lease, but instead I’m going to talk to legal GPT, that stuff I think will probably be much more impactful…

There’s an aspect here — one of the questions with the coding bit is Stack Overflow and sites like that have taken the biggest hit, but is this a sustainable future? I think this is a broader question about do we run out of data on the Internet. Is there going to be a data manufacturing industry?

NF: There is already. I think this is the secret story just beneath the surface of what’s happening. Everyone knows about the GPUs, you got to have the GPUs, they’re very expensive, we’re talking about the Nvidia supply chain. All of us know about CoWoS and wafer packaging and Ajinomoto Films and all these things.

But the other key input is data and the readily available tokens you can scrape off the Internet are quickly exhausted, and so there is currently happening beneath the surface, a shadow war for data where the largest AI labs are spending huge amounts of money, like huge amounts of money to acquire more valuable tokens, either paying experts to generate it, working through labeling companies like Scale AI or others. There’s a new crop of startups in that space as well and we think more is going to happen there and it’s going to be a really interesting space to watch.

So there’s a way in which you need these really high-IQ, high-value tokens in order to train your models, and the average piece of data you scrape off a random website is kind of equal to all the other data that you have, but you’ll pay extra for really valuable training data, and so people are producing it. I don’t know the exact numbers, but I’ve heard rumors that Google is spending a billion dollars this year on generating new training data, and if you’re going to spend billions and billions on your CapEx to build out your GPU training clusters, spending some fraction of that or maybe an equal amount on generating data, which is a kind of CapEx as well, kind of makes sense. Someone told me the other day that experts are the new GPUs, and so there’s this wave of spending on experts who are going to generate tokens that can be valuable.

Then of course the secondary question there is what the legal regime will ultimately be for training. We’re operating in the US, UK, and in Europe under this fair use regime now where it’s fair use for you to scrape text off the Internet as long as it’s public and you’re not going through paywalls or user walls to get it and then you can in aggregate train machine learning models on it. That’s kind of the bright letter of the law, but people don’t always feel good about that and so will the law change, will there be a kind of DMCA for AI? And which way will it cut? I think we don’t know yet and so there may be a war for data in more ways than one over the next couple of years…

For the record, Nvidia’s results are going to come out in about 12 hours, so we don’t know what’s going to happen yet, but one of the most interesting questions broadly speaking is what is going to happen in the GPU space? Nvidia — do they have a moat, is it going to be a sustainable advantage? Obviously, they have a double advantage right now, in that they have the best hardware and they have CUDA, but there’s massive efforts on both sides to take that away. Can they build up a sustainable advantage that will persist?

NF: For the next couple of years, it’s Nvidia and it’s TPU and those are the only players that are really viable.

Google’s Tensor Processing Unit.

NF: Yeah, it’s a huge strategic asset for Google. I mean, they’re the only company basically that has an independent, not fully independent because obviously they overlap when it gets down to the fabs, and some other parts of the supply chain but they’re not subject to Jensen allocating them H100s. They can just kind of allocate their own and by all accounts, their TPU v5, they’re producing in absolute record numbers.

Easier to deal with TSMC than to deal with Jensen is what you’re saying.

NF: Yeah, I mean, at least they don’t have that one Jensen choke point. I mean, Jensen right now is dealing with overwhelming demand and limited supply, and so he’s having to very carefully allocate GPUs, and it’s sort of a very central resource distribution mechanism and allocation mechanism. It’s kind of wild. So even if you say, “Oh, AMD’s chips are going to be as good,” they’re just not going to produce them in numbers that matter next year and so I think my take is, there’s only two players for the next couple of years that matter, and my take is also that we will be supply-constrained, because there will be more AI applications that take off and need huge inference capacity, and there will be more people trying to train large models.

Is there a hype cycle aspect where we actually look back in a few years and see there were way too many GPUs bought and produced, and we actually end up with an overhang? Basically what happened with Nvidia last year, but at a 100x, a 1000x scale — and that actually ends up being a huge accelerant for AI, because you end up with super cheap inference because you have all these depreciated GPUs that were bought up in 2023 and 2024, and then it all crashed. Actually going back to the dot-com bubble and all the fiber that got laid by companies that immediately went out of business.

NF: You might have a dark fiber moment. “How many shortages are not followed by a glut?” is always the interesting question. They usually do get followed by a glut, and I think one scenario in which that happens is — I’m a very strong believer in scaling laws for these big general reasoning models. Essentially, the more training data and the more flops you put in, you’re just going to get a better and better model out, and we’ve seen this now over several orders of magnitude, it’s just incredibly consistent. We saw it with GPT-1 and GPT-2, and GPT-3, and now GPT-4, and we’ll see it I think with GPT-5. So, it’s possible that there’s some escape velocity that occurs where a few labs are the only ones who can afford to train the GPT-5 or GPT-6 equivalent models, and all of the startups and businesses that were getting essentially a sub-scale amount of GPU, unless they were doing something incredibly domain specific, those are no longer needed. So, you’ll have, I don’t know, three or four companies that can afford to train the $10 billion model, and that’s actually a limited number of GPUs.

5. Respect and Admiration – Morgan Housel

This isn’t universal, but there are cases when people’s desire to show off fancy stuff is because it’s their only, desperate, way to gain some sense of respect and admiration. They don’t have any wisdom, intelligence, humor, empathy, or capacity for love to gain people’s respect. So they rely on the only remaining, and least effective, lever: Look at my car, beep beep, vroom vroom…

…My guess is that if your favorite comedian, or actor, or athlete turned out to be broke, you wouldn’t care. It wouldn’t impact how much you admire them, because you admire them for talents that money can’t buy.

Even when Amazon was huge and successful, Jeff Bezos used to drive a Honda Accord. Today he has a $500 million yacht. Is he respected and admired more for it? Not in the slightest. He could ride a Huffy bike and people would consider him the greatest entrepreneur of our era, because he is. Steve Jobs didn’t have any furniture. It didn’t matter. He’s a genius. He’s Steve Jobs. Material stuff makes no difference when you’re respected and admired for internal traits…

…Once you see people being respected and admired for reasons that have nothing to do with the stuff they own, you begin to wonder why you have such a strong desire for those possessions. I tend to view material desire as a loose proxy for the inverse of what else you have to offer the world. The higher my desire for fancy stuff, the less real value I have to offer.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, and TSMC. Holdings are subject to change at any time.

The Worst (Best) Time To Invest Feels The Best (Worst)

Stocks can go on to deliver great gains even when the economy is in shambles; stocks can also go on to crumble when the economy is booming.

The world of investing is full of paradoxes. In a recent article, I described the example of stability itself being destabilising. Another paradox is that the worst time to invest can feel the best, and vice versa. 

This paradox can be aptly illustrated by the State of the Union Address, a speech that the President of the USA delivers near the start of every year. It’s a report on how the country fared in the year that passed and what lies ahead. It’s also a barometer for the sentiment of US citizens on the country’s social, political, and economic future.

 This is part of the speech for one particular year: 

“We are fortunate to be alive at this moment in history. Never before has our nation enjoyed, at once, so much prosperity and social progress with so little internal crisis and so few external threats. Never before have we had such a blessed opportunity — and, therefore, such a profound obligation — to build the more perfect union of our founders’ dreams.

We begin the [year] with over 20 million new jobs; the fastest economic growth in more than 30 years; the lowest unemployment rates in 30 years; the lowest poverty rates in 20 years; the lowest African-American and Hispanic unemployment rates on record; the first back-to-back budget surpluses in 42 years. And next month, America will achieve the longest period of economic growth in our entire history.

My fellow Americans, the state of our union is the strongest it has ever been.”

In short, American citizens were feeling fabulous about their country. There was nothing much to worry about and the economy was buzzing. In another particular year, the then-president commented:

“One in 10 Americans still cannot find work. Many businesses have shuttered. Home values have declined. Small towns and rural communities have been hit especially hard. And for those who’d already known poverty, life has become that much harder. This recession has also compounded the burdens that America’s families have been dealing with for decades — the burden of working harder and longer for less; of being unable to save enough to retire or help kids with college.”

This time, Americans were suffering, and there were major problems in the country’s economy.

The first speech was delivered in January 2000 by Bill Clinton. What happened next: The S&P 500 – a widely followed barometer for the US stock market – peaked around the middle of 2000 and eventually declined by nearly 50% at its bottom near the end of 2002. Meanwhile, the second speech was from Barack Obama and took place in January 2010, when the US was just starting to recover from the Great Financial Crisis. It turned out that the next recession took more than 10 years to arrive (in February 2020, after COVID-19 emerged) and the S&P 500 has increased by nearly 450% – or 14% annually – since the speech, as shown in Figure 1.

Figure 1; Source: Yahoo Finance; S&P 500 (including dividends) from January 2010 to September 2023
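As a quick sanity check on the figures quoted above, the roughly 450% total gain from January 2010 to September 2023 does imply an annualized return in the neighborhood of the quoted 14%. A minimal sketch of the compounding arithmetic, using the article's rounded inputs (dates approximated to months):

```python
# Convert the article's quoted total return into an annualized rate.
# Inputs (+450% total, Jan 2010 to Sep 2023) come from the article itself.

start = 2010 + 1 / 12   # January 2010
end = 2023 + 9 / 12     # September 2023
years = end - start     # roughly 13.7 years

total_growth = 1 + 4.50                       # +450% means the index grew ~5.5x
annualized = total_growth ** (1 / years) - 1  # solve (1 + r) ** years = 5.5 for r

print(f"{years:.1f} years -> {annualized:.1%} per year")
```

With these rounded inputs the arithmetic lands at roughly 13% a year, consistent with the article's "nearly 450% – or 14% annually" once the rounding in both figures is allowed for.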

It’s not always the case that a crumbling economy equates to fantastic future returns in stocks. But what I’ve shown is the important idea that the best time to invest could actually feel like the worst, while the worst time to invest could feel like the best time to do so. Bear this in mind, for it could come in handy the next time a deep recession hits.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 03 September 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 03 September 2023:

1. China Reaches Peak Gasoline in Milestone for Electric Vehicles – Colin McKerracher

Earlier this month, Chinese oil giant Sinopec made a surprise announcement that mostly flew under the radar. It’s now expecting gasoline demand in China to peak this year, two years earlier than its previous outlooks.

The main culprit? The surging number of electric vehicles on the road…

…China has been the largest driver of global growth for refined oil products like gasoline and diesel over the last two decades. But EV adoption rates in China are now soaring, with August figures likely to show plug-in vehicles hitting 38% of new passenger-vehicle sales. That’s up from just 6% in 2020 and is starting to materially dent fuel demand.

Fuel demand in two- and three-wheeled vehicles is already in structural decline, with BNEF estimating that 70% of total kilometers traveled by these vehicles have already switched over to electric. Fuel demand for cars will be the next to turn, since well over 5% of the passenger-vehicle fleet is now either battery-electric or plug-in hybrid. The internal combustion vehicle fleet is also becoming more efficient due to rising fuel-economy targets.

Diesel demand for heavier vehicles will keep growing for a bit longer, but even there a seismic shift is underway. Electric, fuel cell and battery-swapping options have quickly climbed to 12% of light commercial vehicle sales and 4% to 5% of medium and heavy commercial vehicle sales. That heavy-duty figure is likely to climb to over 10% by 2025.

Combine all those segments, and BNEF expects total oil demand for road transport in China to peak late next year. Demand won’t drop off a cliff anytime soon — fleet turnover in the trucking segment in particular will take time — but it still marks a major shift for global oil demand patterns. It also has big implications for refiners that need to quickly adjust the mix of products they produce.

It also called out the effects China’s ride-hailing fleet is having on urban gasoline demand.

Vehicles used for ride-hailing in China are far more likely to be electric — their share is nearing 40% of the fleet — than those that are privately owned. Electric ride-hailing vehicles are also more productive than their gasoline-powered counterparts, accounting for 50% of the kilometers traveled on market leader Didi’s ride-hailing platform in December…

…The Sinopec announcement highlights how looking just at the fleet of vehicles can lead one to miss the full story with respect to energy impact…

…The speed that oil gets squeezed out of the transport mix depends on how fast countries like China switch over the number of kilometers traveled to electric — not just the number of cars and trucks.

2. Peasant Logic and the Russian GKO Trade – Joe Pimbley

Later in 1998, after Russia blew up, I attended a public risk management conference in Paris. And one of the speakers was Allen Wheat, CEO of Credit Suisse at the time. I didn’t know Wheat, but he impressed me as a blunt, direct-speaking guy. He talked about Credit Suisse’s version of the GKO trade. He didn’t mention a short position in a Russian-issued dollar bond, so maybe Credit Suisse didn’t bother with the credit risk hedge. But he talked about the GKO and rubles and the cross-currency forwards Credit Suisse executed with Russian banks…

…Interesting to me, Wheat’s story was not that he got to the bottom of this controversy and figured out what part of the loss owed to market risk and what part owed to credit risk. Wheat’s conclusion to his board of directors was that Credit Suisse had a problem with its “risk management philosophy.” It had market risk and credit risk silos when really risk management must be integrated. It’s unproductive to distinguish market risk from credit risk if things are going to fall between the cracks and nobody’s going to take responsibility for understanding the complete risk picture.

Clearly, that’s a nice message, even if you wonder why Wheat didn’t work through the finger-pointing and hold people to account. Who can argue against an integrated approach to risk? But Wheat admitted he got chastised by his board when he presented that conclusion. The board said, “Allen, we think we understand what’s wrong here. It’s good to do all your analysis and get deep into the details, but at some point, you’re not seeing the big picture. You really need to use ‘peasant logic.’”

Wheat explained that “peasant logic” was the board’s term for what we might call “common sense,” but I like peasant logic better. The board said, “You people worry about how good your models are and you wonder about using two years of historical data or five years of historical data, and whether one is better than the other, and how much data you should have. We think you should have looked at the big picture and said, ‘messing around with 40% yields means there’s a lot of risk here. This is an unstable government and currency situation.’ We think you aren’t seeing the forest for the trees.”

So this was Wheat’s point: sometimes it’s good to forget the data and models and use peasant logic. In this case, if there are abnormal returns, there must be some abnormal risk…

…Then it came time for questions, and from the back of the room, someone had to shout out his question to be heard. And as soon as he started speaking, you could tell it was a Russian accent and the guy was Russian. Being Russian lent authenticity to his remark, “You want historical data. I’ll give you 75 years of historical data. Russia has never honored any debt obligation.”

…Unfortunately, Wheat’s reaction was to be annoyed. Wheat didn’t say, “Wow, what a great way to look at this. Why are we trusting Russian debt?” And he also didn’t say, “That’s a great example of the peasant logic the board was trying to impress upon me.”

The Russian continued. “I work for Merrill Lynch and we did this trade also and lost a lot of money. Beforehand, I told them it was a terrible trade because of Russia’s history and they didn’t listen to me because I’m just a mathematician.” Wheat still hadn’t cottoned to the idea that the Russian was helping him make his point about peasant logic, so he said in a rather dismissive, sarcastic way, “Well, I wish we had you working for us, then we wouldn’t have lost money. Right?”

Now it’s easy in hindsight, when you know how something worked out, to say “Aha, I knew such and such.” But still, I thought the Russian’s remarks really made Wheat’s point. This guy in the audience was demonstrating peasant logic. The traders put all these fancy complex pieces together and think they’re really smart, but what the heck are they doing lending money to a government that this guy, who is closer to it than the rest of us, says you shouldn’t trust?

3. Google Gemini Eats The World – Gemini Smashes GPT-4 By 5X, The GPU-Poors – Dylan Patel and Daniel Nishball

The statement that may not be obvious is that the sleeping giant Google has woken up, and they are iterating at a pace that will smash GPT-4 total pre-training FLOPS by 5x before the end of the year. The path is clear to 20x by the end of next year given their current infrastructure buildout. Whether Google has the stomach to put these models out publicly without neutering their creativity or their existing business model is a different discussion…

…Access to compute is a bimodal distribution. There are a handful of firms with 20k+ A/H100 GPUs, and individual researchers can access 100s or 1,000s of GPUs for pet projects. Chief among these are researchers at OpenAI, Google, Anthropic, Inflection, X, and Meta, who will have the highest ratios of compute resources to researchers. A few of the firms above, as well as multiple Chinese firms, will have 100k+ by the end of next year, although we are unsure of the researcher-to-GPU ratios in China, only the GPU volumes.

One of the funniest trends we see in the Bay Area is top ML researchers bragging about how many GPUs they have or will soon have access to. In fact, this has become so pervasive over the last ~4 months that it’s become a measuring contest that is directly influencing where top researchers decide to go. Meta, which will have the second-largest number of H100 GPUs in the world, is actively using it as a recruiting tactic.

Then there are a whole host of startups and open-source researchers who are struggling with far fewer GPUs. They are spending significant time and effort attempting to do things that simply don’t help or, frankly, matter. For example, many researchers are spending countless hours agonizing over fine-tuning models with GPUs that don’t have enough VRAM. This is an extremely counter-productive use of their skills and time.

These startups and open-source researchers are using larger LLMs to fine-tune smaller models for leaderboard style benchmarks with broken evaluation methods that give more emphasis to style rather than accuracy or usefulness. They are generally ignorant that pretraining datasets and IFT data need to be significantly larger/higher quality for smaller open models to improve in real workloads.

Yes, being efficient with GPUs is very important, but in many ways, that’s being ignored by the GPU-poors. They aren’t concerned with efficiency at scale, and their time isn’t being spent productively. What can be done commercially in their GPU-poor environment is mostly irrelevant to a world that will be flooded by more than 3.5 million H100s by the end of next year. For learning and experimenting, smaller, weaker gaming GPUs are just fine…

…While the US and China will be able to keep racing ahead, European startups and government-backed supercomputers such as Jules Verne are completely uncompetitive. Europe will fall behind in this race due to its inability to make big investments and its choice to stay GPU-poor. Even multiple Middle Eastern countries are investing more in enabling large-scale infrastructure for AI.

Being GPU-poor isn’t limited to only scrappy startups though. Some of the most well-recognized AI firms, HuggingFace, Databricks (MosaicML), and Together, are also part of this GPU-poor group. In fact, they may be the most GPU-poor groups out there with regard to both the number of world-class researchers per GPU and the number of GPUs versus their ambition/potential customer demand. They have world-class researchers, but all of them are limited by working on systems with orders of magnitude lower capability. These firms have tremendous inbound interest from enterprises wanting to train real models, and have on the order of thousands of H100s coming in, but that won’t be enough to grab much of the market.

Nvidia is eating their lunch with multiple times as many GPUs in their DGX Cloud service and various in-house supercomputers. Nvidia’s DGX Cloud offers pretrained models, frameworks for data processing, vector databases and personalization, optimized inference engines, APIs, and support from NVIDIA experts to help enterprises tune models for their custom use cases. That service has also already racked up multiple larger enterprises from verticals such as SaaS, insurance, manufacturing, pharmaceuticals, productivity software, and automotive. While not all customers are announced, even the public list of Amgen, Adobe, CCC, ServiceNow, Accenture, AstraZeneca, Getty Images, Shutterstock, Morningstar, Evozyne, Insilico Medicine, Quantiphi, InstaDeep, Oxford Nanopore, Peptone, Relation Therapeutics, ALCHEMAB Therapeutics, and Runway is quite impressive.

4. Making Sense Of The China Meltdown Story – Louis-Vincent Gave

It is impossible to turn to a newspaper, financial television station or podcast today without getting told all about the unfolding implosion of the Chinese economy. Years of over-building, white elephants and unproductive infrastructure spending are finally coming home to roost. Large property conglomerates like Evergrande and Country Garden are going bust. And with them, so are hopes for any Chinese economic rebound. Meanwhile, the Chinese government is either too incompetent, too ideologically blinkered, or simply too communist to do anything about this developing disaster.

Interestingly, however, financial markets are not confirming the doom and gloom running rampant across the financial media…

…At Gavekal, we look at bank shares as leading indicators of financial trouble. When we see bank shares break out to new lows, it is usually a signal that investors should head for the exit as quickly as possible. This was certainly the case in 2007-08 in the US. Between February 2007 and July 2008 (six weeks before the collapse of Lehman Brothers), bank shares lost 60% of their value…

…Now undeniably, Chinese bank shares have not been the place to be over the past few years. Nonetheless, Chinese bank shares are still up a significant amount over the last decade. And this year, they have not even taken out the low of 2022 made on October 31st following the Chinese Communist Party congress. To be sure, the chart below is hardly enticing, even if the slope of the 200-day moving average is positive. Still, Chinese bank shares do not seem to be heralding a near-term financial sector Armageddon…

…China is the number one or two importer of almost every major commodity you can think of. So, if the Chinese economy were experiencing a meltdown, you would expect commodity prices to be soft. Today, we are seeing the opposite. The CRB index has had a strong year so far in 2023, and is trading above its 200-day moving average. Moreover, the 200-day moving average now has a positive slope. Together, all this would seem to point towards an unfolding commodity bull market more than a Chinese meltdown…

…Jacques Rueff used to say that exchange rates are the “sewers in which unearned rights accumulate.” This is a fancy way of saying that exchange rates tend to be the first variable of adjustment for any economy that has accumulated imbalances. On this front, the renminbi has been weak in recent months, although, like Chinese equities, it has yet to take out October’s lows.

That is against the US dollar. Against the yen, the currency of China’s more direct competitor, Japan, the renminbi continues to grind higher and is not far off making new all-time highs. And interestingly, in recent weeks, the renminbi has been rebounding against the South Korean won.

This is somewhat counterintuitive. In recent weeks, oceans of ink have been spilled about how China is the center of a developing financial maelstrom. Typically, countries spiraling down the financial plughole do not see their currencies rise against those of their immediate neighbors and competitors…

…In other words, a range of data points seems to indicate that Chinese consumption is holding up well. This might help to explain why the share prices of LVMH, Hermès, Ferrari and most other producers of luxury goods are up on the year. If China really was facing an economic crash, wouldn’t you expect the share prices of luxury good manufacturers to at least reflect some degree of concern?…

…Staying on the US treasury market, it is also odd how Chinese government bonds have outperformed US treasuries so massively over the past few years. Having gone through a fair number of emerging market crises, I can say with my hand on my heart that I have never before seen the government bonds of an emerging market in crisis outperform US treasuries. Yet since the start of Covid, long-dated Chinese government bonds have outperformed long-dated US treasuries by 35.3%.

In fact, Chinese bonds have been a beacon of stability, with the five-year yield on Chinese government bonds spending most of the period since the 2008 global crisis hovering between 2.3% and 3.8%. Today, the five-year yield sits at the low end of this trading band. But for all the negativity out there, yields have yet to break out on the downside…

…While the Chinese government debt market has been stable, the pain has certainly been dished out in the Chinese high yield market. Yields have shot up and liquidity in the Chinese corporate bond market has all but evaporated. Perhaps this is because historically many of the end buyers have been foreign hedge funds, and the Chinese government feels no obligation to make foreign hedge funds whole. Or perhaps it is because most of the issuers were property developers, a category of economic actor that the CCP profoundly dislikes.

Whatever the reasons, the Chinese high yield debt market is where most of the pain of today’s slowdown has been—and continues to be—felt. Interestingly, however, it seems that the pain in the market was worse last year than this year. Even though yields are still punishingly high, they do seem to be down from where they were a year ago…

…Why the sudden drumbeat about collapsing Chinese real estate and impending financial crisis when the Chinese real estate problem has been a slow-moving car crash over the past five years, and when, as the charts above show, markets don’t seem to indicate a crisis point?

At least, markets outside the US treasury market don’t seem to indicate a crisis point. So could the developing meltdown in US treasuries help to explain the urgency of the “China in crisis” narrative?…

…Basically, US treasuries have delivered no positive absolute returns to any investor who bought bonds after 2015. Meanwhile, investors who bought Chinese government bonds in recent years are in the money, unless they bought at the height of the Covid panic in late 2021 and early 2022. This probably makes sense given the extraordinary divergence between US inflation and Chinese inflation.

None of this would matter if China were not in the process of trying to dedollarize the global trade in commodities and were not playing its diplomatic cards, for example at this week’s BRICS summit, in an attempt to undercut the US dollar (see Clash Of Empires). But with China actively trying to build a bigger role for the renminbi in global payments, is it really surprising to see the Western media, which long ago gave up any semblance of independence, highlighting China’s warts? Probably not. But the fact that the US treasury market now seems to be entering a full-on meltdown adds even more urgency to the need to highlight China’s weaknesses.

A Chinese meltdown, reminiscent of the 1997 Asian crisis, would be just what the doctor ordered for an ailing US treasury market: a global deflationary shock that would unleash a new surge of demand and a “safety bid” for US treasuries. For now, this is not materializing, hence the continued sell-off in US treasuries. But then, the Chinese meltdown isn’t materializing either.

5. Why China’s economy ran off the rails – Noah Smith

This is a pretty momentous happening, since a lot of people had started to believe — implicitly or explicitly — that China’s economy would never suffer the sort of crash that periodically derails all other economies. That was always wrong, of course, and now the bears are coming out for a well-deserved victory lap…

…Anyway, OK, here is my quick story of what happened to China. In the 1980s, 90s, and early 2000s, China reaped huge productivity gains from liberalizing pieces of its state-controlled economy. Industrial policy was mostly left to local governments, who wooed foreign investors and made it easy for them to open factories, while the central government mostly focused on big macro things like making capital and energy cheap and holding down the value of the currency. As a result, China became the world’s factory, and its exports and domestic investment soared. As did its GDP.

At this time there were also substantial tailwinds for the Chinese economy, including a large rural surplus population who could be moved to the cities for more productive work, a youth bulge that created temporarily favorable demographics, and so on. China was also both willing and able to either buy, copy, or steal large amounts of existing technology from the U.S. and other rich countries.

Meanwhile, during this time, real estate became an essential method by which China distributed the gains from this stupendous economic growth. It was the main financial asset for regular Chinese people, and land sales were how local governments paid for public services.

Then the 2008 financial crisis hit the U.S., and the Euro crisis hit Europe. The stricken economies of the developed nations were suddenly unable to keep buying ever-increasing amounts of Chinese goods (and this was on top of export markets becoming increasingly saturated). Exports, which had been getting steadily more and more important for the Chinese economy, suddenly started to take a back seat…

… The government told banks to lend a lot in order to avoid a recession, and most of the companies they knew how to shovel money at were in the real estate business in some way. That strategy was successful at avoiding a recession in 2008-10, and over the next decade China used it again whenever danger seemed to threaten — such as in 2015 after a stock market crash.

Maybe China’s leaders were afraid of what would happen to them if they ever let growth slip, or maybe they didn’t really think about what the costs of this policy might be. In any case, China basically pivoted from being an export-led economy to being a real-estate-led economy. Real-estate-related industries soared to almost 30% of total output.

That pivot saved China from recessions in the 2010s, but it also gave rise to a number of unintended negative consequences. First, construction and related industries tend to have lower productivity growth than other industries (for reasons that aren’t 100% clear). So continuing to shift the country’s resources of labor and capital toward those industries ended up lowering aggregate productivity growth. Total factor productivity, which had increased steadily in the 2000s, suddenly flatlined in the 2010s…

…This productivity slowdown probably wasn’t only due to real estate — copying foreign technology started to become more difficult as China appropriated all the easier stuff. Nor was productivity the only thing weighing on China’s growth — around this same time, surplus rural labor dried up. Anyway, put it all together, and you get a slowdown in GDP growth in the 2010s, from around 10% to around 6% or 7%:

But 6-7% is still pretty darn fast. In order to keep growth going at that pace, China had to invest a lot — around 43% of its GDP, more than in the glory days of the early 2000s, and much more than Japan and Korea at similar points in their own industrial development.

Only instead of deploying that capital efficiently, China was just putting it toward increasingly low-return real estate. The return on assets for private companies collapsed:

Much of this decline was due simply to the Chinese economy’s shift toward real estate; if you strip out real estate, the deterioration in the private sector looks much less severe…

…So even as the pivot to real estate was adding to a long-term slowdown in China’s growth, it was also generating a bubble that would eventually cause an acute short-term slowdown as well. If there’s a grand unified theory of China’s economic woes, it’s simply “too much real estate”.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Adobe, Alphabet (parent of Google), and Meta Platforms. Holdings are subject to change at any time.

One Of The Largest Disconnects Between Fundamentals & Price I’ve Ever Seen

VinFast Auto has a mammoth market capitalisation but the same may not be said for its business fundamentals.

VinFast Auto (NASDAQ: VFS) became a publicly listed entity on the US stock market on 15 August this year through a SPAC (Special Purpose Acquisition Company) merger. I think it is also a company with one of the largest disconnects between fundamentals and price that I’ve ever seen. I’ll lay out what I know, and you can judge my thinking.

Founded in 2017, VinFast manufactures EVs (electric vehicles), e-scooters, and e-buses. The company started producing e-scooters in 2018, ICE cars in 2019 (the production of internal combustion engine vehicles was phased out in late-2022), and e-buses in 2020. Its first EV product line consists of a range of SUVs (sport utility vehicles) which it began manufacturing in December 2021. VinFast’s manufacturing facility – which has 1,400 robots and is highly automated – is located in Hai Phong, Vietnam and has an annual production capacity of 300,000 EVs. Through June 2023, VinFast has delivered 105,000 vehicles – most of which are ICE vehicles – and 182,000 e-scooters. 

Vietnam is VinFast’s headquarters and the company’s primary market at the moment. As of 30 June 2023, VinFast had sold around 18,700 EVs, mostly in Vietnam, since inception; the deliveries of the 182,000 e-scooters since the company’s founding all happened in the same country too. The company has ambitions beyond Vietnam and has set its sights on the USA, Canada, France, Germany, and the Netherlands as its initial international markets. VinFast commenced US deliveries of EVs in March this year while it expects to start delivering EVs into Europe in the second half of 2023. The company has recorded around 26,000 reservations for its EVs globally as of 30 June 2023.

Controlling nearly all of VinFast’s shares currently (99.7%) is Pham Nhat Vuong, the founder and majority shareholder of Vingroup, a Vietnam-based conglomerate. Vingroup has a major economic presence in Vietnam – the company and all of its listed subsidiaries collectively accounted for 1.1% of Vietnam’s GDP in 2022 and they have a combined market capitalisation of US$21.0 billion (note that this does not include the value of VinFast) as of 30 June 2023.   

In the two weeks since VinFast’s listing, the company’s stock price closed at a high of US$82, on 28 August 2023. This gave VinFast a staggering US$190 billion market capitalisation based on an outstanding share count of 2.307 billion (as of 14 August 2023). At the market-close on 29 August 2023, VinFast’s share price was US$46. Though a painful 44% fall from the previous day’s close, the US$46 stock price still gives VinFast a massive market capitalisation of US$107 billion, which easily makes it one of the top five largest auto manufacturers in the world by market capitalisation. But behind VinFast’s market size are the following fundamentals:

  • 2022 numbers (I would have used trailing numbers, but they’re not readily available): Revenue of US$633.8 million, an operating loss of US$1.8 billion, and an operating cash outflow of US$1.5 billion
  • As I already mentioned, VinFast has (1) 26,000 reservations for its EVs globally as of 30 June 2023, and (2) delivered 105,000 vehicles – most of which are ICE vehicles – and 182,000 e-scooters from its founding through June 2023.
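The market-cap arithmetic above is easy to sanity-check. A quick sketch in Python, using the share count and closing prices quoted in the text (the small differences from the article’s round US$190 billion and US$107 billion figures are just rounding):

```python
# Sanity-check VinFast's market capitalisation from the figures quoted above.
shares_outstanding = 2.307e9  # shares as of 14 August 2023

def market_cap(price_per_share: float) -> float:
    """Market capitalisation = share price x shares outstanding."""
    return price_per_share * shares_outstanding

cap_at_high = market_cap(82.0)   # close on 28 August 2023
cap_next_day = market_cap(46.0)  # close on 29 August 2023
one_day_fall = 1 - 46.0 / 82.0   # the one-day drop in the share price

print(f"Cap at US$82: US${cap_at_high / 1e9:.0f}B")   # ~US$189B
print(f"Cap at US$46: US${cap_next_day / 1e9:.0f}B")  # ~US$106B
print(f"One-day fall: {one_day_fall:.1%}")            # ~43.9%, the "painful 44% fall"
```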

For perspective, here are the equivalent numbers for Tesla, the largest auto manufacturer in the world by market capitalisation (US$816 billion on 29 August 2023), and a company whose valuation ratios are often said by stock market participants to be rich:

  • Trailing numbers: Revenue of US$94.0 billion, operating income of US$12.7 billion, and operating cash inflow of US$14.0 billion
  • Trailing deliveries of 1.638 million vehicles worldwide.

So given all the above, what do you think about my statement above, that VinFast is “a company with one of the largest disconnects between fundamentals and price that I’ve ever seen”?


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Tesla. Holdings are subject to change at any time. 

What We’re Reading (Week Ending 27 August 2023)

Here are the articles for the week ending 27 August 2023:

1. Why Lehman Brothers Failed When It Did – Joe Pimbley

In 2008, security firms operated with high leverage and significant amounts of short-term debt. Lehman had $26 billion of equity supporting $639 billion of assets and its high leverage was not unusual among security firms. But at that ratio, a 4% decline in assets wipes out equity. Meanwhile, reliance on the continuous rolling of short-term debt requires the security firm to always maintain lender confidence. Lenders’ perception of solvency becomes more important than the actual fact of solvency.
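The leverage arithmetic in that paragraph can be made concrete. A minimal sketch (figures as quoted above; the roughly 4% break-even is just equity divided by assets):

```python
# Lehman's balance sheet in round numbers: $26B of equity under $639B of assets.
equity = 26e9
assets = 639e9

leverage = assets / equity         # dollars of assets per dollar of equity
wipeout_decline = equity / assets  # asset decline that erases all the equity

print(f"Leverage: {leverage:.1f}x")                             # ~24.6x
print(f"Decline that wipes out equity: {wipeout_decline:.1%}")  # ~4.1%
```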

When the highly leveraged, short-term debt, security firm business model met the asset-value destruction of the Great Financial Crisis, Lehman was not the only security firm to fail. All major US firms failed to one degree or another. Besides Lehman’s outright bankruptcy, Bear Stearns and Merrill Lynch were merged into commercial banks. I believe Goldman Sachs and Morgan Stanley would have defaulted on their short-term borrowings had the Fed not permitted them to convert to bank holding companies and gain access to discount window liquidity…

…A place to begin chronicling factors specific to Lehman’s failure is the beginning of 2006. That was when the firm’s management decided to make more long-term investments.[2] Rather than remaining focused on security distribution and brokerage, Lehman increased its own holdings in commercial real estate, leveraged loans, and private equity. In our report to the bankruptcy court, we described this strategic change as a shift from the “moving business” to the “storage business.”

One year later in early 2007, Lehman management viewed the incipient financial crisis as an opportunity for the firm to gain market share and revenue from competitors that were retrenching and lowering their risk profiles. Lehman did not think the subprime mortgage crisis would spread to the general economy or even to its growing commercial real estate portfolio. Lehman had boldly taken on assets and assumed risk in the 2001-02 economic downturn. Its risk-taking back then had paid off and it hoped such contrarian boldness would again prove profitable.

Lehman’s pace of principal investments in commercial real estate, leveraged loans, and private equity increased in the first half of 2007 as other security firms reduced risk and hunkered down. It committed $11 billion to acquire Archstone REIT in May 2007 and ended up funding the riskiest $6 billion of that in October when it couldn’t find enough buyers to take it out of its commitment. Other bridge loans and bridge equity positions also became similarly stuck on its balance sheet. Its mortgage subsidiaries were slow to stop making residential mortgage loans and Lehman ended up holding mortgage-backed bonds and mortgage-bond-backed collateralized debt obligations it couldn’t sell.

To take on these risky assets, Lehman’s management raised all its internal risk limits: firm-wide, line-of-business, and even single-name risk limits. Or they ignored the limits they had set. Management was not forthcoming in its disclosures to its board of directors about the risks it assumed, and Lehman’s board did not press management for important information. In theory, Lehman’s compensation policy penalized excessive risk-taking, but in practice it rewarded employees on revenue with minimal attention to associated risk.

Not only were these investments risky from the perspective of potential market value losses; they were risky from the point of view of financing. By their nature, real estate, leveraged loans, and private equity are hard to value and less liquid. It is difficult to determine how quickly and how severely they could lose value. These characteristics mean the ability to finance these assets cannot be assumed. If lenders worry about the realizable value of assets offered as loan security, they will lower the amount they will lend against those assets or cease lending against them altogether. Most of Lehman’s secured debt had overnight tenors, so lenders could stop rolling over their loans to Lehman on any business day!

Lehman’s management only began to cut back on leveraged loan acquisitions in August 2007 and waited until later in 2007 to cut back on commercial real estate purchases. Yet deals in the pipeline caused Lehman’s assets to grow by $95 billion to $786 billion over the quarter ending February 2008. The firm did not begin to sell assets in earnest until March 2008, but only got assets down to $639 billion by May 2008.

Lehman’s management deliberately deceived the world about the firm’s financial condition. Management used an accounting trick to temporarily remove $50 billion of assets from the firm’s balance sheet at the end of the first and second quarters of 2008. In so-called “repo 105” transactions, Lehman pledged assets valued at 105% or more of the cash it received. Relying on a legal opinion from a UK law firm addressing English law, Lehman deducted the assets from its balance sheet. No other security firm used this stratagem in 2008 and Lehman did not disclose its use.
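To see why the trick was worth doing, consider its effect on the reported leverage ratio. An illustrative sketch, reusing the round balance-sheet figures quoted elsewhere in this article (the actual quarter-end numbers differed):

```python
# Repo 105 in effect: $50B of assets leave the balance sheet at quarter-end
# (with the cash received used to pay down debt), so reported leverage drops.
equity = 26e9    # round figures from this article, for illustration only
assets = 639e9
repo_105 = 50e9  # assets temporarily removed via repo 105 transactions

leverage_without_trick = assets / equity
leverage_as_reported = (assets - repo_105) / equity

print(f"Leverage without repo 105: {leverage_without_trick:.1f}x")  # ~24.6x
print(f"Leverage as reported:      {leverage_as_reported:.1f}x")    # ~22.7x
```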

Lehman’s management touted the firm’s “liquidity pool,” the sum of cash and assets readily convertible into cash, and as late as two days before bankruptcy claimed this pool equaled $41 billion. In fact, only $2 billion of those assets were readily monetizable.

From January to May 2008, while its competitors raised equity, Lehman did not. Lehman’s management rejected offers from interested investors because they did not want to issue equity at a discount to market price. Management thought doing so would make the firm seem vulnerable. Lehman did not issue common stock in 2008 until a $4 billion issuance in June.

2. China’s 40-Year Boom Is Over. What Comes Next? – Lingling Wei and Stella Yifan Xie

For decades, China powered its economy by investing in factories, skyscrapers and roads. The model sparked an extraordinary period of growth that lifted China out of poverty and turned it into a global giant whose export prowess washed across the globe.

Now the model is broken.

What worked when China was playing catch-up makes less sense now that the country is drowning in debt and running out of things to build. Parts of China are saddled with under-used bridges and airports. Millions of apartments are unoccupied. Returns on investment have sharply declined.

Signs of trouble extend beyond China’s dismal economic data to distant provinces, including Yunnan in the southwest, which recently said it would spend millions of dollars to build a new Covid-19 quarantine facility, nearly the size of three football fields, despite China having ended its “zero-Covid” policy months ago, and long after the world moved on from the pandemic…

…What will the future look like? The International Monetary Fund puts China’s GDP growth at below 4% in the coming years, less than half of its tally for most of the past four decades. Capital Economics, a London-based research firm, figures China’s trend growth has slowed to 3% from 5% in 2019, and will fall to around 2% in 2030.

At those rates, China would fail to meet the objective set by President Xi Jinping in 2020 of doubling the economy’s size by 2035. That would make it harder for China to graduate from the ranks of middle-income emerging markets and could mean that China never overtakes the U.S. as the world’s largest economy, its longstanding ambition.
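The doubling objective implies a specific growth hurdle, which makes the comparison with the forecasts above concrete. A quick compounding sketch (assuming the doubling is measured over the 15 years from 2020 to 2035):

```python
# To double over 15 years, annual growth g must satisfy (1 + g)**15 == 2,
# so g = 2**(1/15) - 1.
required_growth = 2 ** (1 / 15) - 1
print(f"Growth needed to double by 2035: {required_growth:.2%}")  # ~4.73%

# The forecasts quoted above sit below that hurdle:
imf_forecast = 0.04            # IMF: "below 4%" in the coming years
capital_economics_2030 = 0.02  # Capital Economics: ~2% trend growth by 2030

print(required_growth > imf_forecast)            # True
print(required_growth > capital_economics_2030)  # True
```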

Many previous predictions of China’s economic undoing have missed the mark. China’s burgeoning electric-vehicle and renewable energy industries are reminders of its capacity to dominate markets. Tensions with the U.S. could galvanize China to accelerate innovations in technologies such as artificial intelligence and semiconductors, unlocking new avenues of growth. And Beijing still has levers to pull to stimulate growth if it chooses, such as by expanding fiscal spending.

Even so, economists widely believe that China has entered a more challenging period, in which previous methods of boosting growth yield diminishing returns…

…The transition marks a stunning change. China consistently defied economic cycles in the four decades since Deng Xiaoping started an era of “reform and opening” in 1978, embracing market forces and opening China to the West, in particular through international trade and investment.

During that period, China increased per capita income 25-fold and lifted more than 800 million Chinese people out of poverty, according to the World Bank—more than 70% of the total poverty reduction in the world. China evolved from a nation racked by famine into the world’s second-largest economy, and America’s greatest competitor for leadership.

Academics were so enthralled by China’s rise that some referred to a “Chinese Century,” with China dominating the world economy and politics, similar to how the 20th century was known as the “American Century.”

China’s boom was underpinned by unusually high levels of domestic investment in infrastructure and other hard assets, which accounted for about 44% of GDP each year on average between 2008 and 2021. That compared with a global average of 25% and around 20% in the U.S., according to World Bank data.

Such heavy spending was made possible in part by a system of “financial repression” in which state banks set deposit rates low, which meant they could raise funds inexpensively and fund building projects. China added tens of thousands of miles of highways, hundreds of airports, and the world’s largest network of high-speed trains.

Over time, however, evidence of overbuilding became apparent.

About one-fifth of apartments in urban China, or at least 130 million units, were estimated to be unoccupied in 2018, the latest data available, according to a study by China’s Southwestern University of Finance and Economics…

…Guizhou, one of the poorest provinces in the country with GDP per capita of less than $7,200 last year, boasts more than 1,700 bridges and 11 airports, more than the total number of airports in China’s top four cities. The province had an estimated $388 billion in outstanding debt at the end of 2022, and in April had to ask for aid from the central government to shore up its finances.

Kenneth Rogoff, a professor of economics at Harvard University, said China’s economic ascent draws parallels to what many other Asian economies went through during their periods of rapid urbanization, as well as what European countries such as Germany experienced after World War II, when major investments in infrastructure boosted growth.

At the same time, China’s decades of overbuilding resemble Japan’s infrastructure construction boom of the late 1980s and 1990s, which led to overinvestment.

The solution for many parts of the country has been to keep borrowing and building. Total debt, including that held by various levels of government and state-owned companies, climbed to nearly 300% of China’s GDP as of 2022, surpassing U.S. levels and up from less than 200% in 2012, according to Bank for International Settlements data.

Much of the debt was incurred by cities. Limited by Beijing in their ability to borrow directly to fund projects, they turned to off-balance sheet financing vehicles whose debts are expected to reach more than $9 trillion this year, according to the IMF.

Rhodium Group, a New York-based economic research firm, estimates that only about 20% of financing firms used by local governments to fund projects have enough cash reserves to meet their short-term debt obligations, including bonds owned by domestic and foreign investors…

…In Beijing’s corridors of power, senior officials have recognized that the growth model of past decades has reached its limits. In a blunt speech to a new generation of party leaders last year, Xi took aim at officials for relying on borrowing for construction to expand economic activities…

…The most obvious solution, economists say, would be for China to shift toward promoting consumer spending and service industries, which would help create a more balanced economy that more resembles those of the U.S. and Western Europe. Household consumption makes up only about 38% of GDP in China, relatively unchanged in recent years, compared with around 68% in the U.S., according to the World Bank.

Changing that would require China’s government to undertake measures aimed at encouraging people to spend more and save less. That could include expanding China’s relatively meager social safety net with greater health and unemployment benefits.

Xi and some of his lieutenants remain suspicious of U.S.-style consumption, which they see as wasteful at a time when China’s focus should be on bolstering its industrial capabilities and girding for potential conflict with the West, people with knowledge of Beijing’s decision-making say.

The leadership also worries that empowering individuals to make more decisions over how they spend their money could undermine state authority, without generating the kind of growth Beijing desires.

A plan announced in late July to promote consumption was criticized by economists both in and outside China for lacking details. It suggested promoting sports and cultural events, and pushed for building more convenience stores in rural areas.

Instead, guided by a desire to strengthen political control, Xi’s leadership has doubled down on state intervention to make China an even bigger industrial power, strong in government-favored industries such as semiconductors, EVs and AI.

While foreign experts don’t doubt China can make headway in these areas, those industries alone aren’t enough to lift the entire economy or create enough jobs for the millions of college graduates entering the workforce, economists say. 

3. LTCM: 25 Years On – Marc Rubinstein

To understand, it helps to model LTCM not as a hedge fund but as a bank (although it’s also true that the best model for a bank is often a hedge fund). Roger Lowenstein, author of When Genius Failed, acknowledges as much in the subtitle of his book: “The Rise and Fall of Long-Term Capital Management: How One Small Bank Created a Trillion-Dollar Hole.” 

The model reflects LTCM’s heritage. John Meriwether ran the arbitrage desk at Salomon Brothers, eventually becoming vice chair of the whole firm, in charge of its worldwide Fixed Income Trading, Fixed Income Arbitrage and Foreign Exchange businesses. In the years 1990 to 1992, proprietary trading accounted for more than 100% of the firm’s total pre-tax profit, generating an average of $1 billion a year. LTCM was in some ways a spin-off of this business.

Indeed, LTCM partners viewed their main competitors as the trading desks of large Wall Street firms rather than traditional hedge funds. Thus, although they structured their firm as a hedge fund (2% management fee, 25% performance fee, high watermark, etc.), they did everything they could to replicate the structure of a bank. So investors were required to lock up capital initially for three years to replicate the permanent equity financing of a bank (hence “Long-Term Capital Management”). They obtained $230 million of unsecured term loans and negotiated a $700 million unsecured revolving line of credit from a syndicate of banks. They chose to finance positions over 6-12 months rather than roll financing daily, even at the cost of less favourable rates. And they insisted that banks collateralise their obligations to the fund via a “two way mark-to-market”: as market prices moved in favour of LTCM, collateral such as government bonds would flow from their counterparty to them.

If there was one risk LTCM partners were cognisant of, it was that they might suffer a liquidity crisis and be unable to fund their trades. It was a risk they took every effort to mitigate. 

But in modelling themselves as a bank, they forgot one key attribute: diversification.

“We set up Long-Term to look exactly like Salomon,” explains Eric Rosenfeld. “Same size, same scope, same types of trades… But what we missed was that there’s a big difference between the two: Long-Term is a monoline hedge fund and Salomon is a lot of different businesses – they got internal diversification from their other business lines during this crisis so therefore they could afford to have taken on more risk. We should have run this at a lower risk.”

It’s a risk monolines in financial services often miss. And LTCM wasn’t the only monoline to fall victim to market conditions in 1998. In the two years that followed, eight of the top 10 subprime monolines in the US declared bankruptcy, ceased operations or sold out to stronger firms. The experience prompted some financial institutions – such as Capital One – to embrace a more diversified model.

When the global financial crisis hit in 2007, monoline firms went down first. And in the recent banking crisis of 2023, those banks that failed were characterised by lower degrees of diversification.

There’s another factor that also explains the downfall of LTCM, one that similarly has echoes in the banking sector. At the end of August, LTCM was bruised but far from bankrupt. It had working capital of around $4 billion, including a largely unused credit facility of $900 million; only $2.1 billion of that capital was being used to finance positions.

But the fax Meriwether sent clients on September 2 triggered a run on the bank. “We had 100 investors at the time, and a couple of fax machines,” recalls Rosenfeld. “By the time we got to investor 50, I noticed that the top story on Bloomberg was us… All eyes were on us. We were like this big ship in a small harbour trying to turn; everyone was trying to get out of the way of us.”

While the August losses reflected a flight to quality as investors flocked to safe assets, the September losses reflected a flight away from LTCM. The price of a natural catastrophe bond the firm held, for example, fell by 20% on September 2, even though there had been no increase in the risk of natural disaster and the bond was due to mature six weeks later. As the firm was forced to divulge more information to counterparties over the course of September, the situation worsened. “The few things we had on that the market didn’t know about came back quickly,” Meriwether later told the New York Times. “It was the trades that the market knew we had on that caused us trouble.”

In addition, illiquid markets gave counterparties leeway in how to mark positions, and they used the opportunity to mark against LTCM to the widest extent possible so that they would be able to claim collateral to mitigate against a possible default (the flipside of the “two way mark-to-market”). The official inquiry into the failure noted that by mid-September, “LTCM’s repo and OTC [over-the-counter] derivatives counterparties were seeking as much collateral as possible through the daily margining process, in many cases by seeking to apply possible liquidation values to mark-to-market valuations.” And because different legs of convergence trades were held with different counterparties, there was very little netting. In index options, such collateral outflows led to around $1 billion of losses in September. 

Nicholas Dunbar, who wrote the other bestselling book about LTCM, Inventing Money, quotes a trader at one of LTCM’s counterparties (emphasis added):

“When it became apparent they [LTCM] were having difficulties, we thought that if they are going to default, we’re going to be short a hell of a lot of volatility. So we’d rather be short at 40 [at an implied volatility of 40% per annum] than 30, right? So it was clearly in our interest to mark at as high a volatility as possible. That’s why everybody pushed the volatility against them, which contributed to their demise in the end.”
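The mechanics of “pushing the volatility” can be sketched with a standard Black-Scholes calculation (all numbers here are hypothetical, not LTCM’s actual positions): an option seller who is marked at a higher implied volatility shows a larger liability and must post more collateral.

```python
# A hedged illustration of marking volatility against an option seller
# (hypothetical numbers, not LTCM's book): raising the implied volatility
# used to mark a position from 30% to 40% inflates the option's value,
# and hence the collateral the short side must post.
from math import erf, exp, log, sqrt

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(spot: float, strike: float, rate: float, vol: float, t: float) -> float:
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

# Hypothetical five-year at-the-money index call, sold by the fund.
marked_low = bs_call(100, 100, 0.05, 0.30, 5.0)
marked_high = bs_call(100, 100, 0.05, 0.40, 5.0)
print(f"Marked at 30% vol: {marked_low:.2f}")
print(f"Marked at 40% vol: {marked_high:.2f}")
print(f"Extra collateral demanded per option: {marked_high - marked_low:.2f}")
```

Because the short side of the trade owes the marked value, every counterparty had an incentive to mark at the highest defensible volatility, exactly as the trader describes.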

The episode is a lesson in endogenous risk. It’s a risk that differentiates securities markets from other domains governed by probability. “The hurricane is not more or less likely to hit because more hurricane insurance has been written,” mused one of LTCM’s partners afterwards. “In the financial markets this is not true. The more people write financial insurance, the more likely it is that a disaster will happen, because the people who know you have sold the insurance can make it happen. So you have to monitor what other people are doing.”

4. Why the Era of Historically Low Interest Rates Could Be Over – Nick Timiraos

At issue is what is known as the neutral rate of interest. It is the rate at which the demand for and supply of savings are in equilibrium, leading to stable economic growth and inflation.

First described by Swedish economist Knut Wicksell a century ago, neutral can’t be directly observed. Instead, economists and policy makers infer it from the behavior of the economy. If borrowing and spending are strong and inflation pressure rising, neutral must be above the current interest rate. If they are weak and inflation is receding, neutral must be lower.
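That inference rule can be written out as a toy conditional (purely illustrative, not a real estimator):

```python
# A toy encoding of the inference described above: since neutral can't be
# observed, economists infer its position relative to the current policy
# rate from how the economy is behaving.

def infer_neutral_vs_policy(demand_strong: bool, inflation_rising: bool) -> str:
    """Infer where the neutral rate sits relative to the current rate."""
    if demand_strong and inflation_rising:
        return "neutral is likely above the current rate"
    if not demand_strong and not inflation_rising:
        return "neutral is likely below the current rate"
    return "signals are mixed; no clear inference"

# Mid-2023 as the article describes it: activity firm, inflation falling.
print(infer_neutral_vs_policy(demand_strong=True, inflation_rising=False))
```

The mixed case is exactly the situation the article goes on to describe, which is why estimates of neutral have become contentious.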

The debate over where neutral sits hasn’t been important until now. Starting in early 2022, soaring inflation sent the Federal Reserve racing to get interest rates well above neutral.

With inflation now falling but activity still firm, estimates of the neutral rate could take on greater importance in coming months. If neutral has gone up, that could call for higher short-term interest rates, or delay interest-rate cuts as inflation falls. It could also keep long-term bond yields, which determine rates on mortgages and corporate debt, higher for longer…

…Analysts see three broad reasons neutral might go higher than before 2020.

First, economic growth is now running well above Fed estimates of its long-run “potential” rate of around 2%, suggesting interest rates in their current range of 5.25% to 5.5% simply aren’t very restrictive.

“Conceptually, if the economy is running above potential at 5.25% interest rates, then that suggests to me that the neutral rate might be higher than we’ve thought,” said Richmond Fed President Tom Barkin. He said it is too soon to come to any firm conclusions.

That said, a model devised by the Richmond Fed, which before the pandemic closely tracked Williams’s model, put the real neutral rate at 2% in the first quarter.

Second, swelling government deficits and investment in clean energy could increase the demand for savings, pushing neutral higher. Joseph Davis, chief global economist at Vanguard, estimates the real neutral rate has risen to 1.5% because of higher public debt…

…Third, retirees in industrial economies who had been saving for retirement might now be spending those savings. Productivity-boosting investment opportunities such as artificial intelligence could push up the neutral rate.

And business investment depreciates faster nowadays and is thus less sensitive to borrowing costs, which would raise neutral. It is dominated by “computers and software, and much less office buildings, than it used to be,” Summers said during a lecture in May…

…Fed Chair Jerome Powell has in the past warned against setting policy based on unobservable estimates such as neutral, which he compared to navigating by the celestial stars.

Last December, he said the Fed would be careful about fine-tuning interest rates based on such estimates—for example, because falling inflation pushes real rates well above neutral. “I don’t see us as having a really clear and precise understanding of what the neutral rate is and what real rates are,” Powell said.

Some economists reconcile the debate by differentiating between short-run and longer-run neutral. Temporary factors such as higher savings buffers from the pandemic and reduced sensitivity to higher rates from households and businesses that locked in lower borrowing costs could demand higher rates today to slow the economy.

But as savings run out and debts have to be refinanced at higher rates in the coming years, activity could slow—consistent with a neutral rate lower than it is now.

5. Defining, Measuring, and Managing Technical Debt – Ciera Jaspan and Collin Green

We took an empirical approach to understand what engineers mean when they refer to technical debt. We started by interviewing subject matter experts at the company, focusing our discussions to generate options for two survey questions: one asked engineers about the underlying causes of the technical debt they encountered, and the other asked engineers what mitigations would be appropriate to fix this debt…

…This provided us with a collectively exhaustive and mutually exclusive list of 10 categories of technical debt:

  • Migration is needed or in progress: This may be motivated by the need to scale, due to mandates, to reduce dependencies, or to avoid deprecated technology.
  • Documentation on project and application programming interfaces (APIs): Information on how your project works is hard to find, missing or incomplete, or may include documentation on APIs or inherited code.
  • Testing: Poor test quality or coverage, such as missing tests or poor test data, results in fragility, flaky tests, or lots of rollbacks.
  • Code quality: Product architecture or code within a project was not well designed. It may have been rushed or a prototype/demo.
  • Dead and/or abandoned code: Code/features/projects were replaced or superseded but not removed.
  • Code degradation: The code base has degraded or not kept up with changing standards over time. The code may be in maintenance mode, in need of refactoring or updates.
  • Team lacks necessary expertise: This may be due to staffing gaps and turnover or inherited orphaned code/projects.
  • Dependencies: Dependencies are unstable, rapidly changing, or trigger rollbacks.
  • Migration was poorly executed or abandoned: This may have resulted in maintaining two versions.
  • Release process: The rollout and monitoring of production needs to be updated, migrated, or maintained.

We’ve continued to ask engineers (every quarter for the last four years) about which of these categories of technical debt have hindered their productivity in the previous quarter. Defying some expectations, engineers do not select all of them! (Fewer than 0.01% of engineers select all of the options.) In fact, about three-quarters of engineers select three or fewer categories. It’s worth noting that our survey does not ask engineers “Which forms of technical debt did you encounter?” but only “Which forms of technical debt have hindered your productivity?” It’s well understood that all code has some technical debt; moreover, taking on technical debt prudently and deliberately can be a correct engineering choice. Engineers may run into more of these during the course of a quarter, but their productivity may not be substantially hindered in all cases.

The preceding categories of technical debt have been shown in the order of most to least frequently reported as a hindrance by Google engineers in our latest quarter. We don’t expect this ordering to generalize to other companies as the ordering probably says as much about the type of company and the tools and infrastructure available to engineers as it does the state of the code base. For example, Google engineers regularly cite migrations as a hindrance, but large-scale migrations are only attempted at all because of Google’s monolithic repository and dependency system; other companies may find that a large-scale migration is so impossible that it is not even attempted. A fresh start-up might have few problems with dead/abandoned code or code degradation but many hindrances due to immature testing and release processes. While we do expect there to be differences across companies in how much engineers are hindered by these categories, we believe the list itself is generalizable.

Our quarterly engineering survey enables us to measure the rate at which engineers encounter and are hindered by each type of technical debt, and this information has been particularly useful when we slice our data for particular product areas, code bases, or types of development. For example, we’ve found that engineers working on machine learning systems face different types of technical debt when compared to engineers who build and maintain back-end services. Slicing this data allows us to target technical debt interventions based on the toolchain that engineers are working in or to target specific areas of the company. Similarly, slicing the data along organizational lines allows directors to track their progress as they experiment with new initiatives to reduce technical debt.

However, we find quarterly surveys are limited in their statistical and persuasive power…

…Our goal was then to figure out if there are any metrics we can extract from the code or development process that would indicate technical debt was forming before it became a significant hindrance to developer productivity. We ran a small analysis to see if we could pull this off with some of the metrics we happened to have already…

…The results were disappointing, to say the least. No single metric predicted reports of technical debt from engineers; our linear regression models predicted less than 1% of the variance in survey responses. The random forest models fared better, but they had high precision (>80%) and low recall (10%–25%). That is, these models could identify parts of the code base where a focused intervention could reduce technical debt, but they were also going to miss many parts of the code base where engineers would identify significant issues.
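The precision/recall pattern the authors describe can be made concrete with a toy confusion-matrix calculation (the counts below are hypothetical, not Google’s data):

```python
# Toy confusion-matrix arithmetic for the pattern described above
# (hypothetical counts, not Google's data): a model that only flags the
# most obvious hotspots is usually right when it flags (high precision)
# but misses most of what engineers would call debt (low recall).

def precision_recall(true_pos: int, false_pos: int, false_neg: int) -> tuple[float, float]:
    """Compute precision and recall from confusion-matrix counts."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# Engineers report 100 problem areas; the model flags 25, of which 20 agree.
p, r = precision_recall(true_pos=20, false_pos=5, false_neg=80)
print(f"precision={p:.0%}, recall={r:.0%}")  # precision=80%, recall=20%
```

A model like this is useful for targeting interventions where it does fire, but useless as a census of where debt exists.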

It is quite possible that better technical debt indicator metrics do exist for some forms of technical debt. We only explored objective metrics for three types of technical debt, and we only sought to use existing metrics, rather than attempting to create new metrics that might better capture the underlying concepts from the survey.

However, it’s also possible that such metrics don’t exist for other types of technical debt because they are not about the present state of a system, but a relation between the system’s present state and some unimplemented ideal state. An engineer’s judgments about technical debt concern both the present state and the possible state. The possible states of the world are something that mathematical models cannot incorporate without the modeler’s direct intervention. For example, the fact that a project’s code base consists entirely of code written in Python 2 is not technical debt in a world where there is no loss of functionality compared to another language or version or outside pressure to migrate. However, in a world where Python 3 is a preferred or required alternative, that same corpus of Python 2 constitutes a needed migration. The present state of the world—from the perspective of a model—is identical in these two instances, but the possible world has changed. Humans consider the possible world in their judgments of technical debt. If a model were to incorporate explicit rules that capture aspects of the possible world (for example, if a model were designed to count every file in Python 2 as technical debt because the human modeler knows Python 3 is an alternative), then the change would be detectable to the model. If we could capture this judgment as it evolves, it could form the basis for better measurements of technical debt…
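A minimal sketch of this point, with hypothetical file contents and a deliberately crude Python 2 heuristic: the repository state is identical in both calls, and only the modeler-supplied rule about the possible world makes it debt.

```python
# Sketch: the same repository state counts as technical debt only under a
# modeler-supplied rule about the preferred ("possible") world. The file
# contents and the shebang heuristic here are hypothetical.

def flag_python2_debt(files: dict[str, str], python3_preferred: bool) -> list[str]:
    """Return the files counted as technical debt under the supplied rule."""
    if not python3_preferred:
        # Same code base, but no migration pressure in this world: no debt.
        return []
    return [name for name, source in files.items()
            if source.startswith("#!/usr/bin/env python2")]

repo = {"task.py": "#!/usr/bin/env python2\nprint 'hi'\n",
        "lib.py": "def add(a, b):\n    return a + b\n"}
print(flag_python2_debt(repo, python3_preferred=False))  # []
print(flag_python2_debt(repo, python3_preferred=True))   # ['task.py']
```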

…While we haven’t been able to find leading indicators of technical debt thus far, we can continue to measure technical debt with our survey and help to identify teams that struggle with managing technical debt of different types. To that end, we also added the following questions to our engineering survey:

  • To what extent has your team deliberately incurred technical debt in the past three months?
  • How often do you feel that incurring technical debt was the right decision?
  • How much did your team invest in reducing existing technical debt and maintaining your code?
  • How well does your team’s process for managing technical debt work?

Combined with the survey items about the types of technical debt that are causing productivity hindrances, these questions enable the identification of teams that are struggling, reveal the type(s) of technical debt they are struggling with, and indicate whether they are incurring too much debt initially or whether they are not adequately paying down their existing debt. These are useful data, especially when teams can leverage them under guidance from experts on how to manage their technical debt. Fortunately, we have such experts at Google. Motivated in part by our early findings on technical debt, an interested community within Google formed a coalition to help engineers, managers, and leaders systematically manage and address technical debt within their teams through education, case studies, processes, artifacts, incentives, and tools. The coalition’s efforts have included the following:

  • Creating a technical debt management framework to help teams establish good practices. The framework includes ways to inventory technical debt, assess the impact of technical debt management practices, define roles for individuals to advance practices, and adopt measurement strategies and tools.
  • Creating a technical debt management maturity model and accompanying technical debt maturity assessment that evaluates and characterizes an organization’s technical debt management process and helps grow its capabilities by guiding it to a relevant set of well-established practices for leads, managers, and individual contributors. The model characterizes a team’s maturity at one of four levels (listed here from least to most mature):
    • Teams with a reactive approach have no real processes for managing technical debt (even if they do occasionally make a focused effort to eliminate it, for example, through a “fixit”).
    • Teams with a proactive approach deliberately identify and track technical debt and make decisions about its urgency and importance relative to other work.
    • Teams with a strategic approach have a proactive approach to managing technical debt (as in the preceding level) but go further: designating specific champions to improve planning and decision making around technical debt and to identify and address root causes.
    • Teams with a structural approach are strategic (as in the preceding level) and also take steps to optimize technical debt management locally—embedding technical debt considerations into the developer workflow—and standardize how it is handled across a larger organization.
  • Organizing classroom instruction and self-guided courses to evangelize best practices and community forums to drive continual engagement and sharing of resources. This work also includes a technical talk series with live (and recorded) sessions from internal and external speakers.
  • Tooling that supports the identification and management of technical debt (for example, indicators of poor test coverage, stale documentation, and deprecated dependencies). While these metrics may not be perfect indicators, they can allow teams who already believe they have a problem to track their progress toward fixing it.

Overall, our emphasis on technical debt reduction has resulted in a substantial drop in the percentage of engineers who report that their productivity is being extremely to moderately hindered by technical debt or overly complicated code in their project. The majority of Google engineers now feel they are only “slightly hindered” or “not at all hindered” by technical debt, according to our survey. This is a substantial change and, in fact, is the largest trend shift we have seen in five years of running the survey.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E 2 and ChatGPT. Both are software products provided by OpenAI that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market – for the second quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. Here they are, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management thinks AI is a once-in-a-generation platform shift (a similar comment was also made in the company’s 2023 first-quarter earnings call)

I think AI is basically like a once-in-a-generation platform shift, probably bigger than the shift to mobile, probably more akin to something like the Internet as far as what it can do for new businesses and new business opportunities. And I think that it is a huge opportunity for us to really be in the leading edge of innovation.

Airbnb is already using a fair amount of AI in its product but there’s not much generative AI at the moment; management also believes that AI can continue to help Airbnb lower its fixed cost base

I mean, remember that we actually use a fair amount of AI right now in the product, like we do it for our party prevention technology, a lot of our matching technologies. A lot of the underlying technologies we have are actually AI-driven. It’s not so much gen AI, which is such a huge kind of future opportunity. I think we’ll see more leverage in our fixed cost base, so needing fewer people to do more work overall. And so I think that, that’s going to help both on our fixed costs and some of our variable costs. So you’ll see us being able to automate more customer service contacts, et cetera, over time…

…So the strength of Airbnb is that we’re one-of-a-kind. We have more than 7 million active listings, and every one is unique, and that is really special. But the flip side of being one-of-a-kind is that sometimes you don’t know what you’re going to get. And so I think that if we can continue to increase reliability, and then, if something unexpected happens, customer service can quickly remediate the issue, then I think there will be a tipping point where many people who don’t consider Airbnb and only stay in hotels would consider Airbnb. And to give you a little more color about this customer service before I go to the future: there are so many more types of issues that could arise staying in an Airbnb than in a hotel. First of all, when you call a hotel, they’re usually one property and they’re aware of every room. We’re in nearly every country in the world. Often a guest or host will call us, and they may even speak a different language than the person on the other side of the conversation.

There are nearly 70 different policies that you could be adjudicating. Many of these are 100 pages long. So imagine a customer service agent trying to quickly deal with an issue between 2 people from 2 different countries in a neighborhood that the agent may never even have heard of. What AI can do, and we’re piloting GPT-4, is read all of our policies. No human can ever quickly read all those policies. It can read the case history of both guests and hosts. It could summarize the case issue, and it could even recommend what the ruling should be based on our policies. And it can then write a macro that the customer service agent can basically adopt and amend. If we get all this right, it’s going to do 2 things. In the near term, it’s going to make customer service a lot more effective, because agents will be able to handle a lot more tickets and eventually you may never even have to talk to an agent at all. But it will also make the service more reliable, which will unlock more growth.

Airbnb’s management believes that they can build a breakthrough multi-modal AI interface to learn more about Airbnb’s users and provide a lot of personalisation (a.k.a. an AI concierge)

If you were to go to ChatGPT right now and ask it a question, and I were to go to ChatGPT and ask it a question, we’re going to get mostly the same answer. And the reason why is it doesn’t know who you are and it doesn’t know who I am. So it does really well with immutable truths, like how far the earth is from the moon or something like that — there are no conditional answers to that. But it turns out in life, there’s a whole bunch of questions, and travel is one of these areas, where the answer isn’t right for everyone. Where should I travel? Where should I stay? Who should I go with? What should I bring? Every one of these questions depends on who you are…

… And we can design, I think, a breakthrough interface for AI. I do not think that the AI interface is chat. Chat, I do not think, is the right interface, because we want an interface that’s multimodal — it’s text, it’s image and it’s video — and it’s much faster than typing to be able to see what you want. So we think there’s a whole new interface. And also, I think it’s really important that we provide a lot of personalization, that we learn more about you, that you’re not just an anonymous customer. And that’s partly why we’re investing more and more in account profiles, personalization, really understanding the guests. We want to know more about every guest on Airbnb than any travel company in the world knows about their customers. And if we do that, we can provide much more personalized service, and our app can almost be like an AI concierge that can match you to local experiences, local homes, local places all over the world.

Airbnb’s management is not interested in building foundational AI models – they are only keen on building the interface (a similar comment was also made in the company’s 2023 first-quarter earnings call)

And so we’re not going to be building like large research labs to develop these large language models. Those are like infrastructure projects, building bridges. But we’re going to build the applications on top of the bridges, like the car. And I think Airbnb is best-in-class at designing interfaces. I think you’ve seen that over the last few years.

Airbnb’s management believes that the companies that will best succeed in AI are the most product-led companies

And I think the last thing I’ll just say about AI is I think the companies that will best succeed in AI — well, think of it this way: which companies best adopted mobile? Which companies best adopted the Internet? It was the companies that were most innovative, the most product-led. And I think we are very much a product-led, design-led, technology-led company, and we always want to be on the frontier of new tech. So we’re working on that, and I think you’ll see some exciting things in the years to come.

Alphabet (NASDAQ: GOOG)

Alphabet is making AI helpful for everyone in four important ways

 At I/O, we shared how we are making AI helpful for everyone in 4 important ways: first, improving knowledge and learning…

…Second, we are helping people use AI to boost their creativity and productivity…

…Third, we are making it easier for others to innovate using AI…

…Finally, we are making sure we develop and deploy AI technology responsibly so that everyone can benefit.

2023 is the seventh year of Alphabet being an AI-first company and it knows how to incorporate AI into its products

This is our seventh year as an AI-first company, and we intuitively know how to incorporate AI into our products.

Nearly 80% of Alphabet’s advertisers use at least one AI-powered Search ads product 

In fact, today, nearly 80% of advertisers already use at least one AI-powered Search ads product.

Alphabet is using AI to help advertisers create campaigns and ads more easily in Google Ads and also help advertisers better understand their campaigns

Advertisers tell us they’re looking for a more assistive experience to get set up with us faster. So at GML, we launched a new conversational experience in Google Ads powered by an LLM tuned specifically on ads data to make campaign construction easier than ever. Advertisers also tell us they want help creating high-quality ads that work in an instant. So we’re rolling out a revamped asset creation flow in Performance Max that helps customers adapt and scale their most successful creative concepts in a few clicks. And there’s even more with PMax. We launched new asset insights and new search term insights that improve campaign performance understanding, and new customer life cycle goals that let advertisers optimize for new and existing customers while maximizing sales. We’ve long said it’s all about reaching the right customer with the right creative at the right time. 

So later this year, Automatically Created Assets, which are already generating headlines and descriptions for search ads, will start using generative AI to create assets that are even more relevant to customer queries. Broad match also got updates. AI-based keyword prioritization ensures the right keyword, bid, budget, creative and landing page is chosen when there are multiple overlapping keywords eligible. And then to make it easier for advertisers to optimize visual storytelling and drive consideration in the mid funnel, we’re launching 2 new AI-powered ad solutions, Demand Gen and Video View campaigns, and both will include Shorts inventory. 

Alphabet’s management thinks the integration of LLMs (large language models) and generative AI make Alphabet’s core Search product even better

Large language models make them even more helpful — models like PaLM 2 and soon Gemini, which we are building to be multimodal. These advances provide an opportunity to reimagine many of our products, including our most important product, Search. We are in a period of incredible innovation for Search, which has continuously evolved over the years. This quarter saw our next major evolution with the launch of the Search Generative Experience, or SGE, which uses the power of generative AI to make Search even more natural and intuitive. User feedback has been very positive so far. It can better answer the queries people come to us with today, while also unlocking entirely new types of questions that Search can answer. For example, we found that generative AI can connect the dots for people as they explore a topic or project, helping them weigh multiple factors and personal preferences before making a purchase or booking a trip. We see this new experience as another jumping off point for exploring the web, enabling users to go deeper to learn about a topic.

Alphabet’s management thinks the company is further ahead in integrating generative AI into Search than it expected to be at this point in time

Look, on the Search Generative Experience, we definitely wanted to make sure we’re thinking deeply from first principles. While it’s exciting new technology, we’ve constantly been bringing AI innovations into Search for the past few years, and this is the next step in that journey. But it is a big change, so we thought about it from first principles. It really gives us a chance to not always be constrained by the way Search was working before, and allowed us to think outside the box. And I see that play out in the experience. So I would say we are ahead of where I thought we’d be at this point in time. The feedback has been very positive. We’ve improved our efficiency pretty dramatically since the product launch. The latency has improved significantly. We are keeping a very high bar, and — but I would say we are ahead on all the metrics in terms of how we look at it internally.

Alphabet’s management believes that even with the introduction of generative AI (Search Generative Experience) in the company’s core Search product, advertising will still continue to play a critical role in the company’s business model and the monetisation of Search will not be harmed

Ads will continue to play an important role in this new search experience. Many of these new queries are inherently commercial in nature. We have more than 20 years of experience serving ads relevant to users’ commercial queries, and SGE enhances our ability to do this even better. We are testing and evolving placements and formats and giving advertisers tools to take advantage of generative AI…

…Users have commercial needs, and they are looking for choices, and there are merchants and advertisers looking to provide those choices. So those fundamentals are true in SGE as well. And we have a number of experiments in flight, including ads, and we are pleased with the early results we are seeing. And so we will continue to evolve the experience, but I’m comfortable at what we are seeing, and we have a lot of experience working through these transitions, and we’ll bring all those learnings here as well.

Alphabet’s management believes that Google Cloud is a leading platform for training and running inference of generative AI models with more than 70% of generative AI unicorns using Google Cloud

Our AI-optimized infrastructure is a leading platform for training and serving generative AI models. More than 70% of gen AI unicorns are Google Cloud customers, including Cohere, Jasper, Typeface and many more. 

Google Cloud uses both Nvidia chips as well as Google’s own TPUs (this combination helps customers get 2x better price performance than competitors)

We provide the widest choice of AI supercomputer options with Google TPUs and advanced NVIDIA GPUs, and recently launched new A3 AI supercomputers powered by NVIDIA’s H100. This enables customers like AppLovin to achieve nearly 2x better price performance than industry alternatives. 

Alphabet is seeing customers using Google Cloud’s AI capabilities for online travelling, retail marketing, anti-money laundering, drug discovery, and more

Among them, Priceline is improving trip planning capabilities. Carrefour is creating full marketing campaigns in a matter of minutes. And Capgemini is building hundreds of use cases to streamline time-consuming business processes. Our new Anti-Money Laundering AI helps banks like HSBC identify financial crime risk. And our new AI-powered target and lead identification suite is being applied at Cerevel to help enable drug discovery…

… I mentioned Duet AI earlier. Instacart is using it to improve customer service workflows. And companies like Xtend are scaling sales outreach and optimizing customer service.

Alphabet’s management thinks that open-source AI models will be important in the ecosystem and Google Cloud will be offering not just first-party AI models, but also third-party and open source models

So similarly, you will see with AI: we will embrace — we will offer not just our first-party models, we’ll offer third-party models, including open source models. I think open source has a critical role to play in this ecosystem. Google contributes — we are one of the largest contributors to Hugging Face in terms of the contributions there, and to projects like Android, Chromium, Kubernetes and so on. So we’ll embrace that and we’ll stay at the cutting edge of technology, and I think that will serve us well for the long term.

Amazon (NASDAQ: AMZN)

Amazon’s management thinks generative AI is going to be transformative, but it’s still very early days in the adoption and success of generative AI, and consumer applications are only one opportunity in the area

It’s important to remember that we’re in the very early days of the adoption and success of generative AI, and that consumer applications is only one layer of the opportunity…

… I think it’s going to be transformative, and I think it’s going to transform virtually every customer experience that we know. But I think it’s really early. I think most companies are still figuring out how they want to approach it…

…What I would say is that we have had a very significant amount of business in AWS driven by machine learning and AI for several years. And you’ve seen that largely in the form of compute, as customers have been doing a lot of machine learning training and then running their models in production on top of AWS and our compute instances. But you’ve also seen it in the form of the 20-plus machine learning services that we’ve had out there for a few years. I think when you’re talking about the big potential explosion in generative AI, which everybody is excited about, including us, I think we’re in the very early stages there. We’re a few steps into a marathon, in my opinion. 

Amazon’s management sees LLMs (large language models) in generative AI as having three key layers and Amazon is participating heavily in all three: The first layer is the compute layer; the second would be LLMs-as-a-service; and the third would be the applications that run on top of LLMs, with ChatGPT being an example

We think of large language models in generative AI as having 3 key layers, all of which are very large in our opinion and all of which AWS is investing heavily in. At the lowest layer is the compute required to train foundational models and do inference or make predictions…

…We think of the middle layer as being large language models as a service…

…Then that top layer is where a lot of the publicity and attention have focused, and these are the actual applications that run on top of these large language models. As I mentioned, ChatGPT is an example. 

Amazon has AI compute instances that are powered by Nvidia H100 GPUs, but the supply of Nvidia chips is scarce, so management built Amazon’s own training (Trainium) and inference (Inferentia) chips and they are an appealing price performant option

Customers are excited by Amazon EC2 P5 instances powered by NVIDIA H100 GPUs to train large models and develop generative AI applications. However, to date, there’s only been one viable option in the market for everybody and supply has been scarce. That, along with the chip expertise we’ve built over the last several years, prompted us to start working several years ago on our own custom AI chips for training called Trainium and inference called Inferentia that are on their second versions already and are a very appealing price performance option for customers building and running large language models.

Amazon’s management optimistic that a lot of LLM training and inference will be running on Trainium and Inferentia in the future

We’re optimistic that a lot of large language model training and inference will be run on AWS’ Trainium and Inferentia chips in the future.

Amazon’s management believes that most companies that want to work with AI do not want to build foundational LLMs themselves as it is time consuming and expensive, and companies only want to customize the LLMs with their own data in a secure way (this view was also mentioned in Amazon’s 2023 first-quarter earnings call) 

Stepping back for a second, to develop these large language models, it takes billions of dollars and multiple years to develop. Most companies tell us that they don’t want to consume that resource building themselves. Rather, they want access to those large language models, want to customize them with their own data without leaking their proprietary data into the general model, have all the security, privacy and platform features in AWS work with this new enhanced model and then have it all wrapped in a managed service. 

AWS has a LLM-as-a-service called Bedrock that provides access to LLMs from Amazon and multiple startups; large companies are already using Bedrock to build generative AI applications; Bedrock allows customers to create conversation AI agents 

This is what our service Bedrock does, offering customers all of these aforementioned capabilities with not just one large language model but with access to models from multiple leading large language model companies like Anthropic, Stability AI, AI21 Labs, Cohere and Amazon’s own developed large language models called Titan. Customers including Bridgewater Associates, Coda, Lonely Planet, Omnicom, 3M, Ryanair, Showpad and Travelers are using Amazon Bedrock to create generative AI applications. And we just recently announced new capabilities for Bedrock, including new models from Cohere, Anthropic’s Claude 2 and Stability AI’s Stable Diffusion XL 1.0, as well as agents for Amazon Bedrock that allow customers to create conversational agents to deliver personalized, up-to-date answers based on their proprietary data and to execute actions.
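The model-of-choice idea described above — one managed service fronting several providers’ models — can be sketched roughly as follows. This is a hedged illustration, not official sample code: the request-body shapes follow the 2023-era Anthropic Claude and Amazon Titan formats as an assumption, and the `invoke` function (which would need AWS credentials and Bedrock access) is defined but deliberately not executed here.

```python
import json

def build_request(model_id: str, prompt: str, max_tokens: int = 256) -> dict:
    # Each model family expects a different request-body shape; a service
    # like Bedrock lets you swap models while keeping one invocation path.
    if model_id.startswith("anthropic."):
        return {"prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
                "max_tokens_to_sample": max_tokens}
    if model_id.startswith("amazon.titan"):
        return {"inputText": prompt,
                "textGenerationConfig": {"maxTokenCount": max_tokens}}
    raise ValueError(f"unknown model family: {model_id}")

def invoke(model_id: str, prompt: str):
    # Requires AWS credentials and Bedrock model access; not run here.
    import boto3
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(modelId=model_id,
                               body=json.dumps(build_request(model_id, prompt)),
                               contentType="application/json")
    return json.loads(resp["body"].read())

# Same prompt, two providers, one code path:
body = build_request("anthropic.claude-v2", "Suggest a 3-day Lisbon itinerary.")
print(sorted(body))
```

The point of the managed-service layer is visible in `invoke`: the customer’s data stays inside their AWS account’s API call rather than leaking into a general model, and switching from Claude to Titan is a one-string change.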

Amazon’s management believes that AWS is democratizing access to generative AI and is making it easier for companies to work with multiple LLMs

If you think about these first 2 layers I’ve talked about, what we’re doing is democratizing access to generative AI, lowering the cost of training and running models, and enabling access to large language models of choice instead of there only being one option.

Amazon’s management sees coding companions as a compelling early example of a generative AI application and Amazon has CodeWhisperer, which is off to a very strong start

We believe one of the early compelling generative AI applications is a coding companion. It’s why we built Amazon CodeWhisperer, an AI-powered coding companion, which recommends code snippets directly in the code editor, accelerating developer productivity as they code. It’s off to a very strong start and changes the game with respect to developer productivity.

Every team in Amazon is building generative AI applications, but management believes that most of these applications will be built by other companies, although these applications will be built on AWS

Inside Amazon, every one of our teams is working on building generative AI applications that reinvent and enhance their customers’ experience. But while we will build a number of these applications ourselves, most will be built by other companies, and we’re optimistic that the largest number of these will be built on AWS… 

…Coupled with providing customers with unmatched choices at these 3 layers of the generative AI stack as well as Bedrock’s enterprise-grade security that’s required for enterprises to feel comfortable putting generative AI applications into production, we think AWS is poised to be customers’ long-term partner of choice in generative AI…

…On the AI question, what I would tell you, every single one of our businesses inside of Amazon, every single one has multiple generative AI initiatives going right now. And they range from things that help us be more cost effective and streamlined in how we run operations in various businesses to the absolute heart of every customer experience in which we offer. And so it’s true in our stores business. It’s true in our AWS business. It’s true in our advertising business. It’s true in all our devices, and you can just imagine what we’re working on with respect to Alexa there. It’s true in our entertainment businesses, every single one. It is going to be at the heart of what we do. It’s a significant investment and focus for us.

Amazon’s management believes that (1) data is the core of AI, and companies want to bring generative AI models to data, not the other way around and (2) AWS has a data advantage

Remember, the core of AI is data. People want to bring generative AI models to the data, not the other way around. AWS not only has the broadest array of storage, database, analytics and data management services for customers, it also has more customers and data stored than anybody else.

Amazon’s management is of the view that in the realm of generative AI as well as cloud computing in general, the more demand there is, the more capex Amazon needs to spend to invest in data centers for long-term monetisation; management wants the challenge of having more capex to spend on because that will mean that AWS customers are successful with building generative AI on top of AWS

And so it’s — like in AWS, in general, one of the interesting things in AWS, and this has been true from the very earliest days, which is the more demand that you have, the more capital you need to spend because you invest in data centers and hardware upfront and then you monetize that over a long period of time. So I would like to have the challenge of having to spend a lot more in capital in generative AI because it will mean that customers are having success and they’re having success on top of our services.

Apple (NASDAQ: AAPL)

Apple has been doing research on AI for years and has built these technologies as integral features of its products; management intends for Apple to continue investing in AI in the years ahead

If you take a step back, we view AI and machine learning as core fundamental technologies that are integral to virtually every product that we build. And so if you think about WWDC in June, we announced some features that will be coming in iOS 17 this fall, like Personal Voice and Live Voicemail. Previously, we had announced lifesaving features like fall detection and crash detection and ECG. None of these features that I just mentioned and many, many more would be possible without AI and machine learning. And so it’s absolutely critical to us.

And of course, we’ve been doing research across a wide range of AI technologies, including generative AI for years. We’re going to continue investing and innovating and responsibly advancing our products with these technologies with the goal of enriching people’s lives. And so that’s what it’s all about for us.

ASML (NASDAQ: ASML)

ASML’s management believes that AI has strengthened the long-term megatrends powering the growth of the semiconductor industry

Beyond 2024, it’s really the solid belief we have in the megatrends that are not going to go away. You can even argue that some of these megatrends, when you think about AI, are even more important than we thought, let’s say, at the end of last year. But it’s not only AI, it’s also the energy transition, it’s the electrification of mobility, it’s the industrial Internet of Things. It’s everything that’s driven by sensors and actuators. So, effectively, we see very strong growth across the entire semiconductor space, whether it’s mature or whether it’s advanced. Because of these megatrends we still have very strong confidence in what we said at the end of last year: that by 2025 – depending on what market scenario you are choosing, higher or lower – we will have between €30 billion and €40 billion of sales, and gross margin by that 2025 timeframe between 54% and 56%. And if you extend that to 2030, we are still very confident that by that time, also depending on a lower or higher market scenario, sales will be anywhere between €44 billion and €60 billion with gross margin between 56% and 60%. So, we have short-term cycles. This is what the industry is all about. But we have very strong confidence, even stronger confidence, in what the longer-term future is going to bring for this company.

ASML’s management thinks the world is at the beginning of an AI high-power compute wave, but AI will not be a huge driver of the company’s growth in 2024

But I think we’re at the beginning of this, you could say, AI high-power compute wave. So yes, you’ll probably see some of that in 2024. But you have to remember that we have some capacity there, which is the current underutilization. So yes, we will see some of that, but that particular demand will be taken up by the installed base. Now — and that will further accelerate, I’m pretty sure. But that will definitely mean that, you could say, the shift to customers will come by 2025. So I don’t see, or don’t particularly expect, that that will be a big driver for additional shipments in 2024, given the utilization situation that we see today.

Arista Networks (NYSE: ANET)

Arista Networks’ management is seeing AI workloads drive an upgrade from 400 gigabit networking ports to 800 gigabit ports

As we surpassed 75 million cumulative cloud networking ports, we are experiencing 3 refresh cycles with our customers, 100 gigabit migration in the enterprises, 200 and 400 gigabit migration in the cloud and 400 going to 800 gigabits for AI workloads…

…We had the same discussion when the world went to 400 gig: are we switching from 100 to 400? The reality was that customers continued to buy both 100 and 400 for different use cases. [ 51T ] and 800 gig especially are being pulled in by AI clusters — the AI teams are very anxious to get their hands on it, move the data as quickly as possible and reduce their job completion times. So you’ll see early traction there.

At least one of Arista Networks’ major cloud computing customers is shifting capital expenditure from other cloud computing areas to AI-related areas

During the past couple of years, we have enjoyed significant increase in cloud CapEx to support our Cloud Titan customers for their ever-growing needs, tech refresh and expanded offerings. Each customer brings a different business and mix of AI networking and classic cloud networking for their compute and storage clusters. One specific Cloud Titan customer has signaled a slowdown in CapEx from previously elevated levels. Therefore, we expect near-term Cloud Titan demand to moderate with spend favoring their AI investments. 

Arista Networks is a founding member of a consortium that is promoting the use of Ethernet for networking needs in AI data centres

Arista is a proud founding member of the Ultra Ethernet Consortium that is on a mission to build open, multivendor AI networking at scale based on proven Ethernet and IP.

Arista Networks’ management thinks AI networking will be an extension of cloud networking in the future

In the decade ahead, AI networking will become an extension of cloud networking to form a cohesive and seamless front-end and back-end network.

Arista Networks’ management thinks that Ethernet – and not InfiniBand – is the right networking technology when it comes to the training of large language models (LLMs) because they involve a massive amount of data; but in the short run, management thinks InfiniBand will be more widely adopted

Today, I would say, in the back end of the network, there are basically 3 classes of networks. One is very, very small networks that are within a server, where customers use PCIe, CXL, and proprietary NVIDIA-specific technologies like NVLink, where Arista does not participate. Then there are more medium clusters — think generative AI, mostly inference — which may well get built on Ethernet. For the extremely large clusters with large language training models, especially with the advent of GPT-3 and 4, you’re talking about not just billions of parameters, but an aggregate of trillions of parameters. And this is where Ethernet will shine. But today, the only technology that is available to customers is InfiniBand. So obviously, InfiniBand, with 10, 15 years of maturity in an HPC environment, is often being bundled with the GPU. But the right long-term technology is Ethernet, which is why I’m so proud of what the Ultra Ethernet Consortium and a number of vendors are doing to make that happen. So near term, there’s going to be a lot of InfiniBand, and Arista will be watching that from the outside in…

…And what is their network foundation. In some cases, where they just need to go quick and fast, as I explained before, it would not be uncommon to just bundle their GPUs with an existing technology like InfiniBand. But where they’re really rolling out into 2025, they’re doing more trials and pilots with us to see what the performance is, to see what the drop is, to see how many they can connect, what’s the latency, what’s the better entropy, what’s the efficiency, et cetera. That’s where we are today.

Arista Networks’ management thinks that neither Ethernet nor Infiniband were purpose-built for AI

But longer term, Arista will be participating in an Ethernet [ AI ] network. And neither technology, I want to say, was perfectly designed for AI — InfiniBand was more focused on HPC and Ethernet was more focused on general-purpose networking. So I think the work we are doing with the UEC to improve Ethernet for AI is very important.

Arista Networks’ management thinks that there’s a 10-year AI-drive growth opportunity for Ethernet networking technology

I think the way to look at our AI opportunity is it’s 10 years ahead of us. And we’ll have early customers in the cloud with very large data sets, trialing our Ethernet now. And then we will have more cloud customers, not only Titans, but other high-end Tier 2 cloud providers and enterprises with large data sets that would also trial us over time. So in 2025, we expect to have a large list of customers, of which Cloud Titans will still end up being some of the biggest but not the only ones.

Datadog (NASDAQ: DDOG)

Datadog has introduced functionalities related to generative AI and LLMs (large language models) on its platform that include (1) the ability for software teams to monitor the performance of their AI models, (2) an incident management copilot, and (3) new integrations across AI stacks including GPU infrastructure providers, vector databases, and more

To kick off our keynote, we launched our first innovations for generative AI and large language models. We showcased our LLM observability product, enabling ML engineers to safely deploy and manage models in production. This includes the model catalog, a centralized place to view and manage every model at every stage of our customers’ development pipeline; analysis and insights on model performance, which allow ML engineers to identify and address performance and quality issues with the models themselves; and help identifying model drift — the performance degradation that happens over time as models interact with real-world data. We also introduced Bits AI. Bits understands natural language and provides insights from across the Datadog platform as well as from our customers’ collaboration and documentation tools. Among its many features, Bits AI can act as an incident management copilot, identifying and suggesting fixes, generating synthetic tests and triggering workflows to automatically remediate critical issues. And we announced 15 new integrations across the next-generation AI stack, from GPU infrastructure providers to vector databases, model vendors and orchestration frameworks.
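The "model drift" mentioned here — performance degradation as production data diverges from what the model was trained on — is often quantified by comparing score distributions. Below is a minimal sketch using the Population Stability Index (PSI), one common industry metric for this; the binning, smoothing, and thresholds are illustrative assumptions, not Datadog’s actual method.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline ('expected') score
    distribution and a production ('actual') one. Near 0 means stable;
    values above ~0.25 are commonly flagged as significant drift."""
    lo, hi = min(expected), max(expected)

    def fractions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / (hi - lo) * bins), bins - 1) if hi > lo else 0
            counts[max(i, 0)] += 1
        # Tiny smoothing term avoids log(0) for empty bins.
        return [(c + 1e-6) / (len(xs) + 1e-6 * bins) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]                   # scores at training time
drifted = [min(i / 100 + 0.3, 0.99) for i in range(100)]   # production scores shifted up

print(psi(baseline, baseline) < 0.1)   # identical distributions: PSI ~ 0
print(psi(baseline, drifted) > 0.25)   # shifted distribution: flagged as drift
```

An observability product would compute something like this continuously per model and alert when the index crosses a threshold, which is the kind of tracking the "model drift" feature above describes.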

Management is seeing Datadog get early traction with AI customers

And although it’s early days for everyone in this space, we are getting traction with AI customers. And in Q2, our next-gen AI customers contributed about 2% of ARR.

Datadog’s AI customers are those that are selling LLM services or companies that are built on differentiated AI technology

So you can see it as the customers that are either selling AI themselves — so that would be LLM vendors and the like — or customers whose whole business is built on differentiated AI technology. And we’ve been fairly selective in terms of who we put in that category, because companies everywhere are very eager to say that they are differentiated with AI today. 

Datadog expanded a deal with one of the world’s largest tech companies that is seeing massive adoption of its new generative AI product and was using homegrown tools for tracking and observability, but those were slowing it down

Next, we signed a 7-figure expansion with 1 of the world’s largest tech companies. This customer is seeing massive adoption of its new generative AI product and needs to scale its GPU fleet to meet increasing demand for the workload. Its homegrown tools were slowing it down and putting critical product launches at risk. With Datadog, this team is able to programmatically manage new environments as they come online, track and alert on their service level objectives, and get real-time visibility into GPUs.

Etsy (NASDAQ: ETSY)

Etsy is using machine learning (ML) models to better predict how humans would perceive the quality of a product

Our product teams are helping buyers more easily navigate the breadth and depth of our sellers’ inventory, leveraging the latest AI advances to improve our discovery and inspiration experiences while surfacing the very best of Etsy. These latest technologies, combined with training and guidance from our own talented team, are making the superhuman possible in terms of organizing and curating at scale, which I believe can unlock an enormous amount of growth in the years to come. One great example: over the past quarter, we’ve more than doubled the size of our best of Etsy library, which is curated by expert merchandisers based on the visual appeal, uniqueness and apparent craftsmanship of an item. We’re now using this library to train our ML models to better predict the quality of items as perceived by humans. We’re seeing encouraging results from our first iterations on these models, and I’m optimistic that this work will have a material impact, helping us to surface the best of Etsy in every search.
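The training setup Etsy describes, expert-curated “best of” items serving as positive labels for a quality model, can be sketched with a toy logistic regression. The features, data and model below are illustrative stand-ins; Etsy’s actual system would use far richer visual and text signals:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=500):
    """Fit a tiny logistic regression by stochastic gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Toy features per listing: [photo quality score, review score].
# Label 1 = listing appears in the curated "best of" library.
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.3], [0.3, 0.1]]
y = [1, 1, 0, 0]
w, b = train(X, y)

def predict(x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

assert predict([0.85, 0.9]) > 0.5  # scores like a "best of" item
assert predict([0.1, 0.2]) < 0.5
```

The predicted probability can then be used as a quality signal when ranking search results.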

Etsy’s use of ML has helped it to dramatically reduce the time it takes to resolve customer issues

Specific to our trust and safety work, advances in ML capabilities have enabled our enforcement models to detect an increasing number of policy violations, which, combined with human know-how, is starting to have a meaningful impact on the buyer and seller experience. Since Etsy Purchase Protection was launched about a year ago, we’ve reduced the issue resolution time for cases by approximately 85%, dramatically streamlining the service experience on the rare occasion that something goes wrong, demonstrating to buyers and sellers that we have their backs in these key moments. 

Etsy’s management wants to use AI to improve every touch point a customer has with Etsy

Of course, much of the focus was on the myriad ways we can continue to harness AI and ML technologies in almost every customer touch point, with the potential to further transform buyer-facing experiences like enhancing search and recommendations, seller tools like streamlining the listing process and assisting with answering customer queries, improving fraud detection and trust and safety models, et cetera. The opportunities are nearly endless.

Etsy has a small ML team and the company is streamlining its machine learning (ML) workflow so that it’s easy for any Etsy software engineer to deploy their own ML models without asking for help from the ML team

But all of this innovation also takes time and effort and relies on our relatively small but mighty team of ML experts, talent that is obviously in high demand. Historically, all new ML models have been created by this team of highly specialized data scientists. And the full process of creating a new model, from cleaning and organizing the data to training and testing the model, then putting it into production, could take as long as 4 months. That’s why we kicked off a major initiative over a year ago we call Democratizing ML with the goal to streamline and automate much of this work so that virtually any Etsy engineer can deploy their own ML models in a matter of days instead of months. And I’m thrilled to report that we’re starting to see the first prototypes from this effort come live now. For example, if you’re on the Etsy team working on buyer recommendations, you can now use a drag-and-drop modeling tool to create a brand-new recommendations module without needing our ML team to build that model for you. 

Etsy’s management is currently testing ML technology developed by other companies to drive its own efficiency

We’ve also been leveraging the investments that other companies, many of them are existing partners have invested already in machine learning. And so we’re doing a lot of beta testing and experimentation with other companies. And at the moment, that is coming at a very low cost to us. We would imagine that at some point, there will be some kind of license fee arrangement. But we are — typically, we do not invest in anything unless we see a high ROI for that investment.

Etsy’s management believes that generative AI can be good for the company’s search experience for consumers, but consumer-adoption and the AI-integration will take time

For buyers, the idea that the search experience can become more conversational, I think, can be a very big deal for Etsy, and maybe more for Etsy than for most people. I talked 2 earnings calls ago now about how you don’t walk into a store and shout, “Dress blue, linen,” to a sales agent. You actually have a conversation with them that has more context. And I think that’s especially important in a place like Etsy, where we’ve got 115 million listings to choose from and no catalog. So the idea that it can be conversational, I think, can give a lot of context and really help. And I think a lot of the technology behind that is becoming a solved problem. What’s going to take longer is the consumer adoption curve. What do customers expect when they enter something into a search bar? And how do they get used to interacting with chatbots? And what’s the UI look like? And that’s something that I think we’re going to need to — we’re testing a lot right now. What do people expect? How do they like to interact with things? And in my experience now, having a few decades of consumer technology leadership, the consumer adoption curve is often the long pole in the tent, but I think over time, can yield really big gains for us.

Fiverr (NYSE: FVRR)

Fiverr Neo is a new matching service from Fiverr that provides better matching for search queries using data, AI, and conversational search

Essentially, what we’ve done with Fiverr Neo is to tackle head-zone, the largest challenge every market has, which is matching. Now being able to produce a great match is far more than just doing search. And search by definition is very limited because customers provide 3 or 4 awards. And based on that, you need to understand their intent, their need and everything surrounding that need. And providing good matching for us is really about not just pairing business with a professional or with an agency, but actually being able to produce a product and end result where the 2 parties to that transaction are very happy that they work together.To do this perfect match, you need a lot of information because that allows you to create a very, very precise type of match. And what we’ve developed with Fiverr Neo using the latest technologies alongside our deep data tech that we’ve developed along the years and the tens of millions of transactions that we’ve already processed and the learnings from that is a product that can have a human-like discussion where our technology deeply understands and can have a conversation that would guide the customer to define their exact needs.

Fiverr’s management is seeing high interest for AI services on the company’s marketplace

So on the AI services, pretty much the same as last quarter, meaning we’ve launched tens of categories around AI. The interest is very high. It’s very healthy. And we continue to invest in it. So basically introducing more and more categories that have to do with AI in general and Gen AI in particular. And our customers love it. They use it and we’re happy with what we’re seeing on that front.

Mastercard (NYSE: MA)

Mastercard’s management sees AI as a foundational technology for the company and the technology has been very useful for the company’s fraud-detection solutions, where Mastercard has helped at least 9 UK banks stop payment scams before funds leave a victim’s account

We recently launched our Consumer Fraud Risk solution, which leverages our latest AI capabilities and the unique network view of real-time payments I just mentioned to help banks predict and prevent payment scams. AI is a foundational technology used across our business and has been a game changer in helping identify such fraud patterns. We’ve partnered with 9 U.K. banks, including Barclays, Lloyds Bank, Halifax, Bank of Scotland, NatWest, Monzo and TSB, to stop scam payments before funds leave a victim’s account. TSB, one of the first banks to adopt the solution, indicated that it has already dramatically increased its fraud detection since deploying the capability.

Meta Platforms (NASDAQ: META)

Meta’s management currently does not have a clear handle on how much AI-related capital expenditure is needed – it will depend on how fast Meta’s AI products grow

The other major budget point that we’re working through is what the right level of AI CapEx is to support our road map. Since we don’t know how quickly our new AI products will grow, we may not have a clear handle on this until later in the year…

…There’s also another component, which is the next-generation AI efforts that we’ve talked about around advanced research and gen AI, and that’s a place where we’re already standing up training clusters and inference capacity. But we don’t know exactly what we’ll need in 2024 since we don’t have any at-scale deployments yet of consumer business-facing features. And the scale of the adoption of those products is ultimately going to inform how much capacity we need.

Meta’s management is seeing the company’s investments in AI infrastructure paying off in the following ways: (1) Increase in engagement and monetisation of Reels; and (2) an increase in monetisation of automated advertising products

Investments that we’ve made over the years in AI, including the billions of dollars we’ve spent on AI infrastructure, are clearly paying off across our ranking and recommendation systems and improving engagement and monetization. AI-recommended content from accounts you don’t follow is now the fastest-growing category of content on Facebook’s Feed. Now since introducing these recommendations, they’ve driven a 7% increase in overall time spent on the platform. This improves the experience because you can now discover things that you might not have otherwise followed or come across.

Reels is a key part of this discovery engine. And Reels plays exceed 200 billion per day across Facebook and Instagram. We’re seeing good progress on Reels monetization as well, with the annual revenue run rate across our apps now exceeding $10 billion, up from $3 billion last fall.

Beyond Reels, AI is driving results across our monetization tools through our automated ads products, which we call Meta Advantage. Almost all our advertisers are using at least one of our AI-driven products. We’ve also deployed Meta Lattice, a new model architecture that learns to predict an ad’s performance across a variety of data sets and optimization goals. And we introduced AI Sandbox, a testing playground for generative AI-powered tools like automatic text variation, background generation and image outcropping.

Meta’s management believes the company is building leading foundational AI models, including Llama2, which is open-sourced; worth noting that Llama2 comes with a clause that large enterprises that sell Llama2 need to have a commercial agreement with Meta

Beyond the recommendations and ranking systems across our products, we’re also building leading foundation models to support a new generation of AI products. We’ve partnered with Microsoft to open source Llama 2, the latest version of our large language model and to make it available for both research and commercial use…

……in addition to making this open through the open source license, we did include a term that for the largest companies, specifically ones that are going to have public cloud offerings, that they don’t just get a free license to use this. They’ll need to come and make a business arrangement with us. And our intent there is we want everyone to be using this. We want this to be open. But if you’re someone like Microsoft or Amazon or Google, and you’re going to basically be reselling these services, that’s something that we think we should get some portion of the revenue for. So those are the deals that we intend to be making, and we’ve started doing that a little bit. I don’t think that, that’s going to be a large amount of revenue in the near term. But over the long term, hopefully, that can be something.

Meta’s management believes that open-sourcing allows Meta to benefit from (a) innovations that come from everywhere, in areas such as safety and efficiency, and (b) being able to attract potential employees

We have a long history of open sourcing our infrastructure and AI work from PyTorch, which is the leading machine learning framework, to models like Segment Anything, ImageBind and DINO to basic infrastructure as part of the Open Compute Project. And we found that open-sourcing our work allows the industry, including us, to benefit from innovations that come from everywhere. And these are often improvements in safety and security, since open source software is more scrutinized and more people can find and identify fixes for issues. The improvements also often come in the form of efficiency gains, which should hopefully allow us and others to run these models with less infrastructure investment going forward…

……One of the things that we’ve seen is that when you release these projects publicly and in open source, there tend to be a few categories of innovations that the community makes. So on the one hand, I think it’s just good to get the community standardized on the work that we’re doing. That helps with recruiting because a lot of the best people want to come and work at the place that is building the things that everyone else uses. It makes sense that people are used to these tools from wherever else they’re working. They can come here and build here. 

Meta is building new products itself using Llama and Llama2 will underpin a lot of new Meta products

So I’m really looking forward to seeing the improvements that the community makes to Llama 2. We are also building a number of new products ourselves using Llama that will work across our services…

…We wanted to get the Llama 2 model out now. That’s going to be — that’s going to underpin a lot of the new things that we’re building. And now we’re nailing down a bunch of these additional products, and this is going to be stuff that we’re working on for years.

Meta partnered with Microsoft to open-source Llama2 because Meta does not have a public cloud offering

We partnered with Microsoft specifically because we don’t have a public cloud offering. So this isn’t about us getting into that. It’s actually the opposite. We want to work with them because they have that and others have that, and that was the thing that we aren’t planning on building out.

Meta’s management thinks that AI be integrated into Meta’s products in the following ways: Help people connect, express themselves, create content, and get digital assistance in a better way (see also Point 30)

But you can imagine lots of ways that AI can help people connect and express themselves in our apps: creative tools that make it easier and more fun to share content, agents that act as assistants, coaches that can help you interact with businesses and creators, and more. And these new products will improve everything that we do across both mobile apps and the metaverse, helping people create worlds and the avatars and objects that inhabit them as well.

Meta’s management expects the company to spend more on AI infrastructure in 2024 compared to 2023

We’re still working on our ’24 CapEx plans. We haven’t yet finalized that, and we’ll be working on that through the course of this year. But I mentioned that we expect that CapEx in ’24 will be higher than in ’23. We expect both data center spend to grow in ’24 as we ramp up construction on sites with the new data center architecture that we announced late last year. And then we certainly also expect to invest more in servers in 2024 for both AI workloads to support all of the AI work that we’ve talked about across the core AI ranking, recommendation work, along with the next-gen AI efforts. And then, of course, also our non-AI workloads, as we refresh some of our servers and add capacity just to support continued growth across the site.

There are three categories of products that Meta’s management plans to build with generative AI: (1) Building ads, (2) improving developer efficiency, and (3) building AI agents, especially for businesses so that businesses can interact with humans effectively (right now, human-to-business interaction is still very labour intensive)

I think that there are 3 basic categories of products or technologies that we’re planning on building with generative AI. One are around different kinds of agents, which I’ll talk about in a second. Two are just kind of generative AI-powered features.

So some of the canonical examples of that are things like in advertising, helping advertisers basically run ads without needing to supply as much creative or, say, if they have an image but it doesn’t fit the format, be able to fill in the image for them. So I talked about that a little bit upfront in my comments. But there’s stuff like that across every app. And then the third category of things, I’d say, are broadly focused on productivity and efficiency internally. So everything from helping engineers write code faster to helping people internally understand the overall knowledge base at the company and things like that. So there’s a lot to do on each of those zones.

For AI agents, specifically, I guess what I’d say is, and one of the things that’s different about how we think about this compared to some others in the industry is we don’t think that there’s going to be one single AI that people interact with, just because there are all these different entities on a day-to-day basis that people come across, whether they’re different creators or different businesses or different apps or things that you use. So I think that there are going to be a handful of things that are just sort of focused on helping people connect around expression and creativity and facilitating connections. I think there are going to be a handful of experiences around helping people connect to the creators who they care about and helping creators foster their communities.

And then the one that I think is going to have the fastest direct business loop is going to be around helping people interact with businesses. And you can imagine a world on this, where, over time, every business has an AI agent that basically people can message and interact with. And it’s going to take some time to get there, right? I mean, this is going to be a long road to build that out. But I think that, that’s going to improve a lot of the interactions that people have with businesses. As well as, if that does work, it should alleviate one of the biggest issues that we’re currently having around messaging monetization, which is that in order for a person to interact with a business, it’s quite labor-intensive for a person to be on the other side of that interaction, which is one of the reasons why we’ve seen this take off in some countries where the cost of labor is relatively low. But you can imagine, in a world where every business has an AI agent, that the kind of success that we’re seeing in Thailand or Vietnam with business messaging could spread everywhere. And I think that’s quite exciting.

Meta’s management believes that there will be both open and closed AI models in the ecosystem

I do think that there will continue to be both open and closed AI models. I think there are a bunch of reasons for this. There are obviously a lot of companies that their business model is to build a model and then sell access to it. So for them, making it open would undermine their business model. That is not our business model. We want to have the — like we view the model that we’re building as sort of the foundation for building products. So if by sharing it, we can improve the quality of the model and improve the quality of the team that we have that is working on that, that’s a win for our business of basically building better products. So I think you’ll see both of those models…

…But for our business model, at least, since we’re not selling access to this stuff, it’s a lot easier for us to share this with the community because it just makes our products better and other people’s…

…And it’s not just going to be like 1 thing is what everyone uses. I think different businesses will use different things for different reasons.

Meta’s management is aware that AI models could be dangerous if they become too powerful, but does not think the models are anywhere close to this point yet; he also thinks there are people who are genuinely concerned about AI safety, and AI companies who are trying to be opportunistic

There are a number of people who are out there saying that once the AI models get past a certain level of capability, it can become dangerous for them to be just in the hands of everyone openly. I think — what I think is pretty clear is that we’re not at that point today. I think that there’s consensus generally among people who are working on this in the industry and policy folks that we’re not at that point today. And it’s not exactly clear at what point you reach that. So I think there are people who are kind of making that argument in good faith, who are actually concerned about the safety risk. But I think that there are probably some businesses that are out there making that argument because they want it to be more closed, because that’s their business, so I think we need to be wary of that.

Microsoft (NASDAQ: MSFT)

11,000 organisations are already using Azure OpenAI services, with nearly 100 new customers added each day during the quarter

We have great momentum across Azure OpenAI Service. More than 11,000 organizations across industries, including IKEA, Volvo Group, Zurich Insurance, as well as digital natives like FlipKart, Humane, Kahoot, Miro, Typeface, use the service. That’s nearly 100 new customers added every day this quarter…

…We’re also partnering broadly to scale this next generation of AI to more customers. Snowflake, for example, will increase its Azure spend as it builds new integrations with Azure OpenAI.

Microsoft’s management believes that every AI app has to start with data

Every AI app starts with data, and having a comprehensive data and analytics platform is more important than ever. Our intelligent data platform brings together operational databases, analytics and governance so organizations can spend more time creating value and less time integrating their data estate. 

Microsoft’s management believes that software developers are see Azure AI Studio as the tool of choice for AI software development

Now on to developers. New Azure AI Studio is becoming the tool of choice for AI development in this new era, helping organizations ground, fine-tune, evaluate and deploy models, and do so responsibly. VS Code and GitHub Copilot are category-leading products when it comes to how developers code every day. Nearly 90% of GitHub Copilot sign-ups are self-service, indicating strong organic interest and pull-through. More than 27,000 organizations, up 2x quarter-over-quarter, have chosen GitHub Copilot for Business to increase the productivity of their developers, including Airbnb, Dell and Scandinavian Airlines.

Microsoft is using AI for low-code, no-code software development tools to help domain experts automate workflows, create apps etc

We’re also applying AI across low-code, no-code tool chain to help domain experts automate workflows, create apps and web pages, build virtual agents, or analyze data using just natural language. Copilot in Power BI combines the power of large language models with an organization’s data to generate insights faster, and Copilot in Power Pages makes it easier to create secure low-code business websites. One of our tools that’s really taken off is Copilot in Power Virtual Agents, which is delivering one of the biggest benefits of this new area of AI, helping customer service agents be significantly more productive. HP and Virgin Money, for example, have both built custom chatbots with Copilot and Power Virtual Agents that were trained to answer complex customer inquiries. All-up, more than 63,000 organizations have used AI-powered capabilities in Power Platform, up 75% quarter-over-quarter.

The feedback Microsoft’s management has received for Microsoft 365 Copilot is that it is a gamechanger for productivity

4 months ago, we introduced a new pillar of customer value with Microsoft 365 Copilot. We are now rolling out Microsoft 365 Copilot to 600 paid customers through our early access program, and feedback from organizations like Emirates NBD, General Motors, Goodyear and Lumen is that it’s a game changer for employee productivity.

Microsoft’s management believes that revenue growth from the company’s AI services will be gradual

At a total company level, revenue growth from our Commercial business will continue to be driven by the Microsoft Cloud and will again outpace the growth from our Consumer business. Even with strong demand and a leadership position, growth from our AI services will be gradual as Azure AI scales and our copilots reach general availability dates. So for FY ’24, the impact will be weighted towards H2.

Microsoft’s management believes that AI will accelerate the growth of overall technology spending

We do think about what’s the long-term TAM here, right? I mean this is — you’ve heard me talk about this as a percentage of GDP, what’s going to be tech spend? If you believe that, let’s say, the 5% of GDP is going to go to 10% of GDP, maybe that gets accelerated because of the AI wave…

…And of course, I think one of the things that people often, I think, overlook is, and Satya mentioned it briefly when you go back to the pull on Azure, I think in many ways, lots of these AI products pull along Azure because it’s not just the AI solution services that you need to build an app. And so it’s less about Microsoft 365 pulling it along or any one Copilot. It’s that when you’re building these, it requires data and it requires the AI services. So you’ll see them pull both core Azure and AI Azure along with them. 

Microsoft’s management believes that companies need their own data in the cloud in order to utilise AI efficiently

Yes, absolutely. I think having your data, in particular, in the cloud is sort of key to how you can take advantage of essentially these new AI reasoning engines to complement, I’ll call it, your databases because these AI engines are not databases, but they can reason over your data and to help you then get more insights, more completions, more predictions, more summaries, and what have you.
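The “reasoning over your data” idea described here can be sketched as a retrieval step plus a prompt: pull the records most relevant to a question out of a data store, then hand them to a language model as context. Everything below (the toy records, the naive keyword scoring, the prompt format) is an illustrative assumption, not a Microsoft API; a production system would use embeddings and an actual model call:

```python
# A database-like store of records the AI engine can "reason over".
records = [
    "Order 1001: 3 units of widget A, shipped 2023-06-01",
    "Order 1002: 5 units of widget B, delayed in customs",
    "Invoice 77: paid in full on 2023-05-20",
]

def retrieve(query, docs, k=2):
    """Rank documents by keyword overlap with the query (stand-in for
    an embedding-based similarity search)."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, docs):
    """Assemble retrieved records into grounding context for an LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("which order is delayed", records)
assert "delayed in customs" in prompt  # relevant record made it into context
```

The model never queries the database directly; the application fetches the data and the model reasons over what it is given, which is why having the data accessible in the cloud matters.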

Nvidia (NASDAQ: NVDA)

Nvidia is enjoying incredible demand for its AI chips

Data Center Compute revenue nearly tripled year-on-year, driven primarily by accelerating demand from cloud service providers and large consumer Internet companies for our HGX platform, the engine of generative AI and large language models. Major companies, including AWS, Google Cloud, Meta, Microsoft Azure and Oracle Cloud, as well as a growing number of GPU cloud providers, are deploying, in volume, HGX systems based on our Hopper and Ampere architecture Tensor Core GPUs. Networking revenue almost doubled year-on-year, driven by our end-to-end InfiniBand networking platform, the gold standard for AI. There is tremendous demand for NVIDIA Accelerated Computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs.

Nvidia is seeing tremendous demand for accelerated computing

There is tremendous demand for NVIDIA accelerated computing and AI platforms. Our supply partners have been exceptional in ramping capacity to support our needs. Our data center supply chain, including HGX with 35,000 parts and highly complex networking, has been built up over the past decade.

Nvidia is seeing strong demand for AI from consumer internet companies as well as enterprises

Consumer Internet companies also drove the very strong demand. Their investments in data center infrastructure purpose-built for AI are already generating significant returns. For example, Meta recently highlighted that since launching Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram. Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA-powered instances in the cloud as well as demand for on-premise infrastructure. 

Nvidia’s management believes that virtually every industry can benefit from AI

Virtually every industry can benefit from generative AI. For example, AI copilots, such as those just announced by Microsoft, can boost the productivity of over 1 billion office workers and tens of millions of software engineers. Billions of professionals in legal services, sales, customer support and education will be able to leverage AI systems trained in their field. AI copilots and assistants are set to create new multi-hundred billion dollar market opportunities for our customers.

Nvidia’s management is seeing some of the earliest applications of generative AI in companies in marketing, media, and entertainment 

We are seeing some of the earliest applications of generative AI in marketing, media and entertainment. WPP, the world’s largest marketing and communication services organization, is developing a content engine using NVIDIA Omniverse to enable artists and designers to integrate generative AI into 3D content creation. WPP designers can create images from text prompts while leveraging responsibly trained generative AI tools and content from NVIDIA partners such as Adobe and Getty Images, using NVIDIA Picasso, a foundry for custom generative AI models for visual design. Visual content provider Shutterstock is also using NVIDIA Picasso to build tools and services that enable users to create 3D scene backgrounds with the help of generative AI.

Nvidia’s management believes that Infiniband is a much better networking solution for AI compared to Ethernet

Thanks to its end-to-end optimization and in-network computing capabilities, InfiniBand delivers more than double the performance of traditional Ethernet for AI. For billion-dollar AI infrastructures, the value from the increased throughput of InfiniBand is worth hundreds of [indiscernible] for the network. In addition, only InfiniBand can scale to hundreds of thousands of GPUs. It is the network of choice for leading AI practitioners…

…We let customers decide what networking they would like to use. And for the customers that are building very large infrastructure, InfiniBand is, I hate to say it, kind of a no-brainer. And the reason for that is because the efficiency of InfiniBand is so significant: some 10%, 15%, 20% higher throughput for a $1 billion infrastructure translates to enormous savings. Basically, the networking is free. And so if you have a single-application infrastructure, if you will, where it’s largely dedicated to large language models or large AI systems, InfiniBand is really, really a terrific choice.
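The “networking is free” claim is back-of-envelope arithmetic: if a faster fabric raises effective cluster throughput by 15% to 20%, the extra compute value delivered on a $1 billion installation can exceed the cost of the network itself. The network-share and throughput figures below are illustrative assumptions, not Nvidia pricing:

```python
infra_cost = 1_000_000_000   # total cluster investment, USD (from the quote)
network_share = 0.10         # assumed fraction of spend going to networking
throughput_gain = 0.20       # assumed InfiniBand advantage over Ethernet

network_cost = infra_cost * network_share
# Extra effective compute value unlocked by the higher throughput:
extra_compute_value = infra_cost * throughput_gain

# The gain exceeds the network's cost, i.e. the network "pays for itself".
assert extra_compute_value > network_cost
print(f"network cost: ${network_cost:,.0f}, "
      f"compute value gained: ${extra_compute_value:,.0f}")
```

Under these assumptions the fabric costs $100M but returns $200M of effective compute, which is the sense in which the networking is "free".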

Nvidia’s management thinks that general purpose computing is too costly and slow, and that the world will shift to accelerated computing, driven by the demand for generative AI; this shift from general purpose computing to accelerated computing contains massive economic opportunity

It has been recognized for some time now that brute-forcing general purpose computing, using general purpose computing at scale, is no longer the best way to go forward. It’s too energy costly, it’s too expensive, and the performance of the applications is too slow.

And finally, the world has a new way of doing it. It’s called accelerated computing, and what kicked it into turbocharge is generative AI. But accelerated computing could be used for all kinds of different applications that are already in the data center. And by using it, you offload the CPUs. You save a ton of money, an order of magnitude in cost and an order of magnitude in energy, and the throughput is higher. And that’s what the industry is really responding to.

Going forward, the best way to invest in the data center is to divert the capital investment from general purpose computing and focus it on generative AI and accelerated computing. Generative AI provides a new way of generating productivity, a new way of generating new services to offer to your customers, and accelerated computing helps you save money and save power. And the number of applications is, well, tons. Lots of developers, lots of applications, lots of libraries. It’s ready to be deployed. And so I think the data centers around the world recognize this, that this is the best way to deploy resources, deploy capital going forward for data centers…

…The world has something along the lines of about $1 trillion worth of data centers installed in the cloud, in enterprise and otherwise. And that $1 trillion of data centers is in the process of transitioning into accelerated computing and generative AI. We’re seeing 2 simultaneous platform shifts at the same time. One is accelerated computing. And the reason for that is because it’s the most cost-effective, most energy-effective and the most performant way of doing computing now. And then all of a sudden, generative AI, enabled by accelerated computing, came along. And this incredible application now gives everyone 2 reasons to transition, to do a platform shift from general purpose computing, the classical way of doing computing, to this new way of doing computing, accelerated computing. It’s about $1 trillion worth of data centers, call it, $0.25 trillion of capital spend each year. You’re seeing the data centers around the world taking that capital spend and focusing it on the 2 most important trends of computing today, accelerated computing and generative AI. And so I think this is not a near-term thing. This is a long-term industry transition, and we’re seeing these 2 platform shifts happening at the same time.

PayPal (NASDAQ: PYPL)

PayPal’s management believes the use of AI will allow the company to operate faster at lower cost

Our initial experiences with AI and continuing advances in our processes, infrastructure and product quality enable us to see a future where we do things better, faster and cheaper.

PayPal’s management believes that the use of AI has accelerated the company’s product innovation and improved developers’ productivity

As we discussed in our June investor meeting, we are meaningfully accelerating new product innovations into the market, scaling our A/B testing and significantly improving our time to market. We are now consistently delivering against our road map on schedule. This is the result of significant investments in our platform infrastructure and tools, an enhanced set of measurements and performance indicators, hiring new talent and early successes using AI in our software development process…

There’s no question that AI is going to impact every single company and every function just as it will inside of PayPal. And we’ve been experimenting with a couple of hundred of our developers using tools from both Google, Microsoft as well as Amazon. And we are seeing 20% to 40% increases in engineering productivity. 

PayPal’s management believes that companies with unique, large data sets will have an advantage when using AI technologies; management sees PayPal as one of these companies

We believe that only those companies with unique and scaled data sets will be able to fully utilize the power of AI to drive actionable insights and differentiated value propositions for their customers…

…We capture 100% of the data flows, which really is feeding our AI engines. It’s fueling what will be our next-generation checkout. And most importantly, it’s fueling kind of our ability to have best-in-class auth rates in the industry and the lowest loss rates in the industry. 

Shopify (NASDAQ: SHOP)

Shopify’s management believes that entrepreneurship is entering an era where AI will become the most powerful sidekick for business creation

We are quickly positioning ourselves to build on the momentum we are seeing across our business, making purposeful changes that support our core focus on commerce and unlock what we believe is a new era of data-driven entrepreneurship and growth, an era where AI becomes the most powerful sidekick for business creation.

Shopify recently introduced Shopify Magic, a suite of AI features that is integrated across Shopify’s products and workflows, and will soon launch Sidekick, an AI-powered chat interface commerce assistant; Shopify Magic is designed specifically for commerce, unlike other generative AI products

We recognize the immense potential of AI to transform the consumer landscape and commerce more broadly. And we are committed to harnessing its power to help our merchants succeed. We believe AI is making the impossible possible, giving everyone superpowers to be more productive, more creative, and more successful than ever before. So, of course, we are building that directly into Shopify. In our Editions last week, we unveiled Shopify Magic, our suite of free AI-enabled features that are integrated across Shopify’s products and workflows, everything from inbox to online store builder and app store to merchandising, to unlock creativity and increase productivity.

One of the most exciting products we will be launching soon in early access is our new AI-enabled commerce assistant, Sidekick. Powered by Shopify Magic, Sidekick is a new chat interface packed with advanced AI capabilities purposely built for commerce. Merchants will now have a commerce expert in their corner who is deeply competent, incredibly intelligent, and always available. With Sidekick, no matter your expertise or skillset, it allows entrepreneurs to use everyday language to have conversations that jump-start the creative process, tackle time-consuming tasks, and make smarter business decisions. By harnessing a deep understanding of systems and available data, Sidekick integrates seamlessly with the Shopify admin, enhancing and streamlining merchant operations. While we’re at the very early stages, the power of Sidekick is already incredible, and it’s developing fast…

…I mean, unlike other generative AI products, Shopify Magic is specifically designed for commerce. And it’s not just embedded in one place, it’s embedded throughout the entire product. So, for example, the ability to generate blog posts instantaneously or write incredibly high-converting product descriptions or create highly contextualized content for your business. That is where we feel like AI really can play a big role here in making merchants’ lives better.

…And with Sidekick, you can do these incredible things like you can analyze sales and you can ideate on store design or you can even give instructions on how to run promotions.

Shopify’s management does not seem keen to raise the pricing of its services to account for the added value from new AI features such as Magic and Sidekick 

So, certainly there are opportunities for us to continuously review our pricing and figure out where the right pricing is. And we will continue to do that. But in terms of, you know, features like Magic and Sidekick, which we’re really excited about, remember, when our merchants do better, Shopify does better. That’s the business model. And so, the more that they can sell, the faster they can grow, the more we can share in that upside. But the other part that we talked about in the prepared remarks that’s just worthwhile mentioning again is that product attach rate. The fact that we’re still growing at — we’re still above 3%, which is really high, it means that as we introduce new products, new merchant solutions, whether it’s payment solutions, shipping, things like Audiences, anything like Collabs, Collective, more of our merchants are taking more of our solutions.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

Management sees AI as a positive for TSMC and the company did see an increase in AI-related demand, but it was not enough to offset declines elsewhere

Moving into third quarter 2023, while we have recently observed an increase in AI-related demand, it is not enough to offset the overall cyclicality of our business…

… The recent increase in AI-related demand is directionally positive for TSMC. Generative AI requires higher computing power and interconnected bandwidth, which drives increasing semiconductor content. Whether using CPUs, GPUs or AI accelerator and related ASIC for AI and machine learning, the commonality is that it requires use of leading-edge technology and a strong foundry design ecosystem. These are all TSMC’s strengths…

…Of course, we have a model, basically. The short-term frenzy about the AI demand definitely cannot extrapolate for the long term. And neither can we predict the near future, meaning next year, how the sudden demand will continue or will flatten out. However, our model is based on the data center structure. We assume a certain percentage of the data center processor are AI processors, and based on that, we calculate the AI processor demand. And this model is yet to be fitted to the practical data later on. But in general, I think the — our trend of a big portion of data center processor will be AI processor is a sure thing. And will it cannibalize the data center processors? In the short term, when the CapEx of the cloud service providers are fixed, yes, it will. It is. But as for the long term, when their data service — when the cloud service is having the generative AI service revenue, I think they will increase the CapEx. That should be consistent with the long-term AI processor demand. And I mean the CapEx will increase because of the generative AI services…

…But again, let me emphasize that those kind of applications in the AI, be it CPUs, GPUs or AI accelerator or ASIC, they all need leading-edge technologies. And they all have one symptom: they are using the very large die size, which is TSMC’s strength. 

AI server processors currently account for just 6% of TSMC’s total revenue but are expected to grow at 50% annually over the next 5 years to become a low-teens percentage of TSMC’s total revenue

Today, server AI processor demand, which we define as CPUs, GPUs and AI accelerators that are performing training and inference functions accounts for approximately 6% of TSMC’s total revenue. We forecasted this to grow at close to 50% CAGR in the next 5 years and increase to low teens percent of our revenue.
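
As a sanity check, the 6% starting share and 50% CAGR can be combined with an assumed overall revenue growth rate to see what share falls out after 5 years. The 20% total-revenue CAGR used here is purely an illustrative assumption, not TSMC guidance:

```python
# Sanity-check the implied AI revenue share after 5 years.
# The 6% starting share and ~50% AI CAGR come from the quote;
# the total-revenue CAGR is an illustrative assumption.

ai_share_now = 0.06
ai_cagr = 0.50
total_cagr = 0.20   # assumed overall revenue growth
years = 5

ai_share_later = ai_share_now * (1 + ai_cagr) ** years / (1 + total_cagr) ** years
print(f"AI share after {years} years: {ai_share_later:.1%}")  # prints 18.3%
```

Under that assumption the share lands in the high teens; an AI CAGR slightly below 50%, or faster overall revenue growth, would bring it down toward the low-teens figure management cites.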

AI has reinforced the view of TSMC’s management that there will be healthy long-term growth in the semiconductor industry in general, and TSMC’s business in particular

The insatiable need for energy-efficient computation is starting from data centers, and we expect it will proliferate to edge and end devices over time, which will drive further long-term opportunities. We have already embedded a certain assumption for AI demand into our long-term CapEx and growth forecast. Our HPC platform is expected to be the main engine and the largest incremental contributor to TSMC’s long-term growth in the next several years. While the quantification of the total addressable opportunity is still ongoing, generative AI and large language models only reinforce the already strong conviction we have in the structural megatrend to drive TSMC’s long-term growth, and we will closely monitor the development for further potential upside.

TSMC currently can’t fulfil all the demand for certain AI chips because of the lack of product capacity, but the company is expanding capacity

For the AI, right now, we see very strong demand, yes. For the front-end part, we don’t have any problem to support. But for the back end, the advanced packaging side, especially for the CoWoS, we do have some very tight capacity; it’s very hard to fulfill 100% of what customers need. So we are working with customers for the short term to help them to fulfill the demand, but we are increasing our capacity as quickly as possible. And we expect this tightness to be somewhat relieved next year, probably towards the end of next year. But in between, we’re still working closely with our customers to support their growth…

… I will not give you the exact number, but let me give you a rough one: probably 2x of the capacity will be added…

… I think the second question is about the pricing of the — on the CoWoS. As I answered the question, we are increasing the capacity in as quick a manner as possible. Of course, that includes additional cost. So in fact, we are working with our customers. And the most important thing for them right now is supply assurance. It’s a supply to meet their demand. So we are working with them. We do everything possible to increase the capacity. And of course, at the same time, we share our value.

It appears that TSMC is selling AI chips for a few hundred dollars apiece while its customers then go on to sell the chips for hundreds of thousands of dollars – but TSMC’s management is OK with that

Well, Charles, I used to make a joke on my customers say that I’m selling him a few hundred dollars per chip, and then he sold it back to me with USD 200,000. But let me say that we are happy to see customers doing very well. And if customers do well, TSMC does well. And of course, we work with them and we sell our value to them. And fundamentally, we want to say that we are able to address and capture a major portion of the market in terms of a semiconductor component in AI. Did I answer your question?

Tencent (NASDAQ: TCEHY)

Tencent is testing its own foundational model for generative AI, and Tencent Cloud will be facilitating the deployment of open-source models by other companies; the development progress of Tencent’s own foundational model is good

In generative AI, we are internally testing our own proprietary foundation model in different use cases and are providing Tencent Cloud Model-as-a-Service solutions to facilitate efficient deployment of open-source foundation models in multiple industry verticals…

…And in terms of just the development, I would say, there are multiple initiatives going on at the company. The first one, obviously, is building our own proprietary foundation model, and that is actually progressing very well. The training is actually on track and making very good progress…

… And in terms of additional efforts, we are also on the cloud side, providing MaaS solution for enterprises, right? So basically providing a marketplace so that different enterprise clients can choose different types of open source large models for them to customize for their own use with their own data. And we have a whole set of technology infrastructure as well as tools to help them to make the choice as well as to do the training and do the deployment. And we believe this is going to be a pretty high value added and high margin product for the enterprise clients. 

Tencent’s management thinks that AI is a multiplier for many of the company’s businesses

AI is — really, the more we look at it, the more excited we are about it as a growth multiplier across our many businesses. It would serve to enhance efficiency and the quality of our user-to-user services and, at the same time, facilitate the improvement in our ad targeting, data targeting, and also the cost-efficient production of a lot of our content. So there are really multiple ways through which we can benefit from the continued development of generative AI. 

Tencent’s management believes that the company’s MaaS for AI will first benefit large enterprises, but that it will subsequently also benefit companies of different sizes (although the smaller companies will benefit from using trained models via API versus training their own models)

In terms of the AI and Model-as-a-Service solution, we think a lot of the industries will actually benefit from it, right? Initially, it would definitely be with the larger companies…

…I think over time, as the industry become more mature, obviously, the medium-sized and smaller sized enterprises will probably benefit. But I don’t think they will be benefiting from using — training their own model, right? But then they would probably be benefiting from using the already trained models directly through APIs. So I think that’s sort of the way the industry will probably evolve over time. 

Tencent’s management believes that the company’s MaaS will provide a revenue stream that is recurring and high margin

I think, obviously, the revenue model is still evolving, but I would say, theoretically, what you talked about the high margin and high recurring revenue is going to be true because we are adding more value to the customers. And once the customers start using these services, right, it will be built into their interaction with their customers, which will be much more sticky than if it’s in their back-end systems. So I think that would probably be true. 

An important change Tencent has made to improve its advertising technology stack when using machine learning is to shift from CPUs (central processing units) to GPUs (graphics processing units)

If you look at the key changes or key things that we have done with respect to machine learning on the ad platform, I think the traditional challenge for us is that we have many different platforms. We have many different types of inventories. We have a very large coverage of user base and with a lot of data, right? And all these things make it actually very complicated for us to target customers based on just a rule-based or CPU-based targeting system, which was actually what we have been deploying. And a key change is that we have deployed a lot of GPUs, so moving from CPUs to GPUs, and we have built a very large neural network to basically accept all these different complexities and be able to come up with the optimal solution. And as a result, our ad targeting becomes much more effective and much higher speed and more accurate in terms of targeting. And as a result, right now, it actually provides a very strong boost to our targeting ability and also the ROI that we can deliver through our ad systems. And as James talked about, this is sort of the early stage of this deployment and continuous improvement of our technology, and I think this trend will continue.

Tesla (NASDAQ: TSLA)

Tesla’s management believes that (1) the company’s Autopilot service has a data-advantage, as AI models become a lot more powerful with more data, and (2) self-driving will be safer than human driving

And I mean, there are times where we see basically, in a neural net basically, it’s sort of, at a million training examples, it barely works at 2 million, it slightly works at 3 million. It’s like, “Wow, okay, we’re seeing something.” But then you get to like 10 million training examples, it’s like — it becomes incredible. So there’s just no substitute for a massive amount of data. And obviously, Tesla has more vehicles on the road that are collecting this data than all other companies combined by I think, maybe even an order of magnitude. So I think we might have 90% of all — a very big number…

So today, over 300 million miles have been driven using FSD Beta. That 300 million-mile number is going to seem small very quickly. It will soon be billions of miles, tens of billions of miles. And the FSD will go from being as good as a human to then being vastly better than a human. We see a clear path to full self-driving being 10x safer than the average human driver. 

Tesla’s management sees the Dojo training computer as a means to reduce the cost of neural net training and expects to spend more than US$1 billion on Dojo-related R&D through 2024

Our Dojo training computer is designed to significantly reduce the cost of neural net training. It is designed to — it’s somewhat optimized for the kind of training that we need, which is a video training. So we just see that the need for neural net training, again, talking of being a quasi-infinite of things, is just enormous. So I think having — we expect to use both NVIDIA and Dojo, to be clear. But there’s — we just see a demand for really advanced training resources. And we think we may reach in-house neural net training capability of 100 [ exoblocks ] by the end of next year…

…I think we will be spending something north of $1 billion over the next year on — through the end of next year, it’s well over $1 billion in Dojo. And yes, so I mean we’ve got a truly staggering amount of video data to do training on.

Around 5-6 Optimus bots – Tesla’s autonomous robots – have been made so far; Tesla’s management realised that it’s hard to find actuators that work well, and so Tesla had to design and manufacture its own actuators; the first Optimus with Tesla actuators should be made around November

Yes, I think we’re around 5 or 6 bots. I think there’s a — we were at 10, I guess. It depends on how many are working and what phase. But it’s sort of — yes, there’s more every month…  

…We found that there are actually no suppliers that can produce the actuators. There are no off-the-shelf actuators that work well for a humanoid robot at any price…

…So we’ve actually had to design our own actuators to integrate the motor, the power electronics, the controller, the sensors. And really, every one of them is custom designed. And then, of course, we’ll be using the same inference hardware as the car. But we, in designing these actuators, are designing them for volume production, so that they’re not just lighter, tighter and more capable than any other actuators that exist in the world, but also actually manufacturable. So we should be able to make them in volume. The first Optimus that will have all of the Tesla-designed actuators, sort of production candidate actuators, integrated and walking should be around November-ish. And then we’ll start ramping up after that.

Tesla is buying Nvidia chips as fast as Nvidia will deliver it – and Tesla’s management thinks that if Nvidia can deliver more chips, Tesla would not even need Dojo, but Nvidia can’t

But like I said, we’re also — we have some — we’re using a lot of NVIDIA hardware. We’ll continue to use — we’ll actually take NVIDIA hardware as fast as NVIDIA will deliver it to us. Tremendous respect for Jensen and NVIDIA. They’ve done an incredible job. And frankly, I don’t know, if they could deliver us enough GPUs, we might not need Dojo. But they can’t. They’ve got so many customers. They’ve been kind enough to, nonetheless, prioritize some of our GPU orders.

Elon Musk explained that his timing projections for the actualisation of full self-driving have been too optimistic in the past because the next challenge is always many times harder than the last – he still expects Tesla’s full self-driving service to be better than human driving by the end of this year, although he admits he may be wrong yet again

Well, obviously, as people have sort of made fun of me, and perhaps quite fairly have made fun of me, my predictions about achieving full self-driving have been optimistic in the past. The reason I’ve been optimistic, what it tends to look like is we’ll make rapid progress with a new version of FSD, but then it will curve over logarithmically. So at first, logarithmic curve looks like this sort of fairly straight upward line, diagonal and up. And so if you extrapolate that, then you have a great thing. But then because it’s actually logarithmic, it curves over, and then there have been a series of stacked logarithmic curves. Now I know I’m the boy who cried FSD, but man, I think we’ll be better than human by the end of this year. That’s not to say we’re approved by regulators. And I’m saying then that, that would be in the U.S. because we’ve got to focus on one market first. But I think we’ll be better than human by the end of this year. I’ve been wrong in the past, I may be wrong this time.

The Trade Desk (NASDAQ: TTD)

The use of AI is helping Trade Desk to surface advertising value for its customers

Of course, there are many other aspects of Kokai that we unveiled on [ 06/06 ], some of which are live and many of which we will be launching in the next few months. These indexes and other innovations, especially around the application of AI across our platform [ are helping us ] surface value more intuitively to advertisers. We are revamping our UX so that the campaign setup and optimization experience is even more intuitive with data next to decisions at every step. And we’re making it easier than ever for thousands of data, inventory, measurement and media partners to integrate with us. 

Trade Desk is using different AI models for specific applications instead of using one model for all purposes

You’ll recall that we launched AI in our platform in 2018, before it was trendy. And we’ve since been distributing that AI across the platform in a variety of different ways and different deep learning models, so that we’re using it for very specific applications rather than trying to create one algo to rule them all, if you will, which is something we actually, in a very disciplined way, are trying to avoid. So we can create checks and balances in the way that the [ tech ] works, and we can make certain that AI is always providing improvements by essentially having A/B testing and better auditability.

Visa (NYSE: V)

Visa is piloting a new AI-powered fraud capability for instant payments

First, our partnership with Pay.UK, the account-to-account payments operator in the U.K., was recently announced. We will be piloting our new fraud capability, RTP Prevent, which is uniquely built for instant payments with deep learning AI models. Using RTP Prevent, we can provide a risk score in real time so banks can decide whether to approve or reject the transaction on an RTP network. This is a great example of building and deploying entirely new solutions and of our network of networks strategy…

…So first of all, what we’ve done is we’ve built a real-time risk score. We’ve built it uniquely for instant payments, where there’s often unique cases of fraud in terms of how they work. We built it using deep learning AI models. And what it does is it enables banks to be able to decide whether to approve or reject the transaction in real time, which is a capability that most banks or most real-time payments networks around the world have been very hungry for. It’s a score from 1 to 99. It comes with an instant real-time code that explains the score. And what it does is it leverages our proprietary data that kind of we have used to enhance our own risk algorithms as well as the data that we see on a lot of our payment platforms, including Visa Direct. And one of the benefits of us bringing that to market is it integrates with the bank’s existing fraud and risk tools. Because we’re often providing these types of risk scores to banks and they’re ingesting them from us, it directly integrates into their fraud and risk tools, so the real-time information, their systems know how to use it. It can be automated into their decisioning algorithms and those types of things.
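
The integration pattern described here, a 1-to-99 score plus a real-time reason code that feeds into a bank’s existing decisioning rules, might look something like the sketch below. The function, field names and thresholds are all hypothetical, invented for illustration; this is not Visa’s actual API:

```python
# Hypothetical sketch of how a bank might fold a real-time risk score
# (like the one described for RTP Prevent) into its own decisioning.
# All names and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class RiskResponse:
    score: int        # 1 (lowest risk) to 99 (highest risk)
    reason_code: str  # real-time code explaining the score

def decide(risk: RiskResponse, reject_threshold: int = 90,
           review_threshold: int = 70) -> str:
    """Apply the bank's own thresholds to the network-supplied score."""
    if risk.score >= reject_threshold:
        return "reject"
    if risk.score >= review_threshold:
        return "manual_review"
    return "approve"

print(decide(RiskResponse(score=95, reason_code="R01")))  # reject
print(decide(RiskResponse(score=12, reason_code="R02")))  # approve
```

The point of the design in the quote is that because the score and reason code arrive in a form the bank’s fraud tools already understand, decisions like the ones above can be fully automated in real time.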

Wix (NASDAQ: WIX)

Wix has worked with AI for nearly a decade and management believes AI will be a key driver of Wix’s product strategy in the future

This quarter, we also continued to innovate and introduce new AI-driven tools in our pipeline. As mentioned last quarter, we have leveraged AI technology for nearly a decade, which has played a key role in driving user success for both Self Creators and Partners. By harnessing a variety of deep learning models trained on the incredible amount of data from the hundreds of millions of Wix sites, we’ve built out an impressive suite of AI and genAI products with the purpose of making the website building experience on Wix frictionless. As AI continues to evolve, we remain at the forefront of innovation with a number of AI and genAI-driven products in our near-term pipeline, including AI Site Generator and AI Assistant for your business. AI is a key driver of our product and growth strategy for both Self Creators and Partners, and I’m excited for what is still to come.

The introduction of generative AI products and features is improving the key performance indicators (KPIs) of Wix’s business

In regards to your question of whether we see any tangible evidence that GenAI is actually improving business performance, then yes, we do. I’m not going to disclose all the details, but I’m just going to say that the things we released in the first part of the year and late last year are already showing improvement in business KPIs. So it makes us very optimistic. And of course, the more we put those kinds of technology in front of more users, we expect that factor to grow. But if you think about it, right, the core value that Wix brings is reducing the friction when you try to build a website. And when you use that technology, it can do tremendously well in order to improve that core value. And then, of course, we expect the results to be significant.

Wix’s management believes that having generative AI technology alone is not sufficient for building a website

So the ones that we’ve seen until now are essentially doing the following, right? They take a template and they generate the text for the template and that — then they save that as a website. Essentially, they’re using ChatGPT to write text and then just put it inside of a template.

When we started, we did that. We’re now doing it — with ChatGPT, we’re doing it since last, I think, November. And with ADI, we did it, of course, with a less sophisticated algorithm. But even then, we didn’t just inject text into a template. We actually created layouts around the text, which is the other way around, right? And that creates a huge difference in what we generate, because when you fill text into a template, you are creating essentially artificial text that will fit the design. While in most cases, if you think about building a business, you do it the other way around: you create your marketing messages and then you create a design, right, to fit that. And visually, it creates a massive difference in the effectiveness of those websites. So that is the first difference.

The other difference is that if you think about it, since probably 1998, you could write text in a Word document and then save it as HTML, okay? So now you just build the website and you have the text and you have a very, very basic website. Of course, you cannot run your business on top of that because it doesn’t have everything you need to run a business. It doesn’t have analytics. It doesn’t have a contact form. It doesn’t have e-commerce. It doesn’t have transactions. All of those are the platform that makes it into a real business. And this is something that most of the tools — all the tools I have seen so far are lacking, right? They just build the page, which you could do in ’98 with Word and just save it as HTML. So that’s another huge difference, right?

And the last part is the question of how do you edit. And this is a very important thing. A website is not something that you could edit once and you just publish it and you never go back. You constantly have things to do. You change products, you change services, you change addresses, you add things, you remove things. You need to add content, so Google will like you, and this is very, very important for finding your business in Google. And there’s a lot of other things, right? So you need to be able to edit the content.

Now when it comes to editing content, you don’t want to regenerate the website, okay, which [indiscernible] you see in all of those things that fill a template, because it’s not only about filling a template, it’s now about editing the content. And this is the thing that we spent so much money on doing, right: to build in the technology, the e-commerce, and then the ability to go in and point at something and edit it or move it and drive it. So those are the things that created Wix, and those are, I think, still our differentiators.

Even if you generated a template with ChatGPT and it looks great, and by some magic it actually fits the marketing value that you want to put on your website, editing it is not going to be possible with the technology they currently use. And even more than that, the ability to have all of the applications on top of it that you really need for a business doesn’t exist.

Zoom Video Communications (NASDAQ: ZM)

There are promising signs that Zoom’s AI-related products are gaining traction with customers

Let me also thank Valmont Industries. Valmont came on board as a Zoom customer a little over a year ago with Meetings and Phone and quickly became a major platform adopter, including Zoom One and Zoom Contact Center. And in Q2, with the goal of utilizing AI to better serve their employees, they added Zoom Virtual Agent due to its accuracy of intent understanding, ability to route issues to the correct agent, ease of use and quality of analytics…

But we’re really excited about the vision that we can offer them, not only around the existing platform, obviously, but also what’s coming from an AI perspective. And I think our customers are finding that very attractive. As you’ve heard from the customers that Eric talked about, we’re seeing a lot of momentum from customers that were originally Meetings customers moving into Zoom One, or adding on Zoom Phone and considering Contact Center as well.

Zoom’s management believes that the company has a differentiated AI strategy

And also our strategy is very differentiated, right? First of all, we have a federated AI approach. And also important is the way we look at those AI features: how to help a customer improve productivity, that’s very important, right? And the customers already like us. We’re not like some others, right, who give you a so-called free service and then charge for the AI features. That’s not our case, right? We really care about customer value and also keep adding more and more innovations.

Zoom’s management believes that AI integrations in the company’s products will be a key differentiator

And in terms of AI, unlike other vendors, right, who have had Contact Center solutions for a long, long time, when you look at the AI architecture and how flexible it is, right, it is hard to add AI onto all those existing legacy Contact Centers. We realized the importance of AI early, right? That’s why we have a very flexible architecture. Not only do we build organic AI features, but we also acquired Solvvy and built the Virtual Agent, and so on and so forth. Organic growth and also acquisitions certainly help us a lot in terms of product innovation.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, Apple, ASML, Datadog, Etsy, Fiverr, Mastercard, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tencent, Tesla, The Trade Desk, Visa, Wix, Zoom. Holdings are subject to change at any time.