All articles

What We’re Reading (Week Ending 29 October 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been consistently sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 29 October 2023:

1. CEO/CIO’s Final Investment Note: The Best of Times and The Worst of Times – Chuin Ting Weber

While there is sadness in the hearts of all of us at MoneyOwl, we know that the ups and downs of our journey are but a faint reflection of our larger condition – as a human race, as countries, as societies and as individuals. We all face circumstances that we cannot control, try as we may to do so. The shock of the Israel-Hamas conflict and the accompanying humanitarian disaster, the ongoing Russia-Ukraine war, and the gyrations in big economies both East and West, threatens to shake us and tempt us to despair. On an individual level, some of us may face unexpected shocks, tough times, or just unsettling uncertainties as we move from one season of life to another.

Yet, there must always be some beliefs in our lives that anchor us, so that our core will not be shaken. And as it is with our lives, so it is with investing. Whatever is happening around you and in the world, please remember that the human spirit for recovery and progress, has never been quenched by wars, pandemics, natural disasters or man-made crises. COVID-19 was the most recent example, but it was neither the first crisis we have overcome, nor would it be the last. As J.S. Mill put it, writing in a period that Charles Dickens described as both the best and the worst of times:

“What has so often excited wonder, is the great rapidity with which countries recover from a state of devastation…… An enemy lays waste a country by fire and sword, and destroys or carries away nearly all the moveable wealth existing in it: all the inhabitants are ruined, and yet in a few years after, everything is much as it was before.”

John Stuart Mill, “Principles of Political Economy”, 1848

When you invest in a globally diversified portfolio of stocks and bonds – instruments that companies and countries issue to finance their economic activities – what you are really investing in, is the future of human enterprise. It is a vote of confidence in the human race. In the long run, stock prices are driven by earnings, and earnings, by the increase in global aggregate demand, which is in turn driven by a combination of global population growth and the quest for increase in standards of living. That is why no matter how bad the crisis, the stock market always recovers and goes up in the long run. This is the reason the stock market has a positive expected return. It is backed by logic and evidence.

The principle, however, does not apply to individual companies, sectors or even countries. It is also not so easy to read the tea-leaves to try to catch the short-term turns of ups and downs, to do better than the market’s long-term return. The best of times often follows the worst of times. We just don’t know when it turns. While being in a bad season is temporary, being out of the market because you timed it wrong, is the one sure way of missing out on the recovery.

2. Higher For Longer vs. the Stock Market – Ben Carlson

I don’t know what the bond market is thinking but it’s worth considering the potential for rates to remain higher than we’ve been accustomed to since the Great Financial Crisis. So I used various interest rate and inflation levels to see how the stock market has performed in the past.

Are returns better when rates are lower or higher? Is high inflation good or bad for the stock market?…

…Surprisingly, the best future returns have come from both periods of very high and very low starting interest rates while the worst returns have come during average interest rate regimes.

The average 10 year yield since 1926 is 4.8% meaning we are at that long-term average right now. Twenty years ago the 10 year treasury was yielding around 4.3%. Yields have moved a lot since then…

…In that 20 year period the S&P 500 is up nearly 540% or 9.7% per year. Not bad…

…The average inflation rate since 1926 was right around 3%.

These results might look surprising as well. The best forward long-term returns came from very high starting inflation levels. At 6% or higher inflation, forward returns were great. At 6% or lower, it’s still pretty good but more like average.

So what’s going on here? Why are forward returns better from higher interest rates and inflation levels?

The simplest explanation is we’ve only had one regime of high interest rates over the past 100 years or so and two highly inflationary environments. And each of these scenarios was followed by rip-roaring bull markets. The annual inflation rate reached nearly 20% in the late-1940s following World War II. That period was followed by the best decade ever for U.S. stocks in the 1950s (up more than 19% per year). And the 1970s period of high inflation and rising interest rates was followed by the longest bull market we’ve ever experienced in the 1980s and 1990s.

A simple yet often overlooked aspect of investing is a crisis can lead to terrible returns in the short-term but wonderful returns in the long-term. Times of deflation and high inflation are scary while you’re living through them but also tend to produce excellent entry points into the market…

…It’s also important to remember that while volatility in rates and inflation can negatively impact the markets in the short-run, a long enough time horizon can help smooth things out.

Regardless of what’s going on with the economy, you’ll fare better in the stock market if your time horizon is measured in decades rather than days.

3. Drawdowns – Chris Mayer

A drawdown is how much a stock price declines from its peak before it recovers.

Drawdowns are part of the life of every investor. Invariably, if you own a stock for a long time, you are going to have to sit through several…

…A few examples from the book, which was published in 2015:

  • Apple from its IPO in 1980 through 2012 was a 225-bagger. But you had to sit through a peak-to-trough loss of 80% — twice! And there were several 40% drops.
  • Netflix, which has been a 60-bagger since 2002, lost 25% of its value in a single day — four times! And there was a four-month stretch where it dropped 80 percent.
  • And Berkshire Hathaway, the best performing stock in the study, was cut in half four times.

What I found affirmed what Peter Lynch once said: “The real key to making money in stocks is not to get scared out of them.” …

…Not only do the best stocks suffer frequent (and lengthy) drawdowns, but the best investors also suffer drawdowns that would surprise most.

The aforementioned Peter Lynch, for example, had four severe drawdowns during his Hall of Fame run at Fidelity. Even though he returned a mind-boggling 29% annually, he had many drawdowns during those years, including three of more than 20% (one of which was a hair-raising 42% drop in 1987).

In summary: There is no defense against drawdowns if you are committed to a long-term, ownership approach to stocks. (Peter Lynch, by the way, was highly diversified and had a high turnover rate in his career – but still). In fact, I would go so far as to say that the ability to sit through drawdowns with equanimity is a source of outperformance. It is a competitive advantage over those that can’t. 

4. NVIDIA CEO Jensen Huang – Ben Gilbert, David Rosenthal, Jensen Huang

David: I love this tee-up of learning but not imitating, and learning from a wide array of sources. There’s this unbelievable third element, I think, to what Nvidia has become today. That’s the data center.

It’s certainly not obvious. I can’t reason from AlexNet and your engagement with the research community, and social media feed […]. You deciding and the company deciding we’re going to go on a five-year all-in journey on the data center. How did that happen?

Jensen: Our journey to the data center happened, I would say almost 17 years ago. I’m always being asked, what are the challenges that the company could see someday?

I’ve always felt that the fact that Nvidia’s technology is plugged into a computer and that computer has to sit next to you because it has to be connected to a monitor, that will limit our opportunity someday, because there are only so many desktop PCs that plug a GPU into. There are only so many CRTs and (at the time) LCDs that we could possibly drive.

The question is, wouldn’t it be amazing if our computer doesn’t have to be connected to the viewing device? That the separation of it made it possible for us to compute somewhere else.

One of our engineers came and showed it to me one day. It was really capturing the frame buffer, encoding it into video, and streaming it to a receiver device, separating computing from the viewing.

Ben: In many ways, that’s cloud gaming.

Jensen: In fact, that was when we started GFN. We knew that GFN was going to be a journey that would take a long time because you’re fighting all kinds of problems, including the speed of light and—

Ben: Latency everywhere you look.

Jensen: That’s right.

David: To our listeners, GFN is GeForce NOW.

Jensen: Yeah. GeForce NOW.

David: It all makes sense. Your first cloud product.

Jensen: That’s right. Look at GeForce NOW. It was Nvidia’s first data center product.

Our second data center product was remote graphics, putting our GPUs in the world’s enterprise data centers. Which then led us to our third product, which combined CUDA plus our GPU, which became a supercomputer. Which then worked towards more and more and more.

The reason why it’s so important is because the disconnection between where Nvidia’s computing is done versus where it’s enjoyed, if you can separate that, your market opportunity explodes.

And it was completely true, so we’re no longer limited by the physical constraints of the desktop PC sitting by your desk. We’re not limited by one GPU per person. It doesn’t matter where it is anymore. That was really the great observation.

Ben: It’s a good reminder. The data center segment of Nvidia’s business (to me) has become synonymous with how is AI going. And that’s a false equivalence. It’s interesting that you were only this ready to explode in AI in the data center because you had three-plus previous products where you learned how to build data center computers. Even though those markets weren’t these gigantic world-changing technology shifts the way that AI is. That’s how you learned.

Jensen: That’s right. You want to pave the way to future opportunities. You can’t wait until the opportunity is sitting in front of you for you to reach out for it, so you have to anticipate.

Our job as CEO is to look around corners and to anticipate where will opportunities be someday. Even if I’m not exactly sure what and when, how do I position the company to be near it, to be just standing near under the tree, and we can do a diving catch when the apple falls. You guys know what I’m saying? But you’ve got to be close enough to do the diving catch.

David: Rewind to 2015 and OpenAI. If you hadn’t been laying this groundwork in the data center, you wouldn’t be powering OpenAI right now.

Jensen: Yeah. But the idea that computing will be mostly done away from the viewing device, that the vast majority of computing will be done away from the computer itself, that insight was good.

In fact, cloud computing, everything about today’s computing is about separation of that. By putting it in a data center, we can overcome this latency problem. You’re not going to overcome the speed of light. Speed of light end-to-end is only 120 milliseconds or something like that. It’s not that long.

Ben: From a data center to—

Jensen: Anywhere on the planet.

Ben: Oh, I see. Literally across the planet.

Jensen: Right. If you could solve that problem, approximately something like—I forget the number—70 milliseconds, 100 milliseconds, but it’s not that long.

My point is, if you could remove the obstacles everywhere else, then the speed of light should be perfectly fine. You could build data centers as large as you like, and you could do amazing things. This little, tiny device that we use as a computer, or your TV as a computer, whatever computer, they can all instantly become amazing. That insight 15 years ago was a good one.

Ben: Speaking of the speed of light—David’s begging me to go here—you totally saw that InfiniBand would be way more useful way sooner than anyone else realized. Acquiring Mellanox, I think you uniquely saw that this was required to train large language models, and you were super aggressive in acquiring that company. Why did you see that when no one else saw that?

Jensen: There were several reasons for that. First, if you want to be a data center company, building the processing chip isn’t the way to do it. A data center is distinguished from a desktop computer versus a cell phone, not by the processor in it.

A desktop computer in a data center uses the same CPUs, uses the same GPUs, apparently. Very close. It’s not the processing chip that describes it, but it’s the networking of it, it’s the infrastructure of it. It’s how the computing is distributed, how security is provided, how networking is done, and so on and so forth. Those characteristics are associated with Mellanox, not Nvidia.

The day that I concluded that really Nvidia wants to build computers of the future, and computers of the future are going to be data centers, embodied in data centers, then if we want to be a data center–oriented company, then we really need to get into networking. That was one.

The second thing is the observation that, whereas cloud computing started in hyperscale, which is about taking commodity components, a lot of users, and virtualizing many users on top of one computer, AI is really about distributed computing, where one training job is orchestrated across millions of processors.

It’s the inverse of hyperscale, almost. The way that you design a hyperscale computer with off-the-shelf commodity ethernet, which is just fine for Hadoop, it’s just fine for search queries, it’s just fine for all of those things—

Ben: But not when you’re sharding a model across.

Jensen: Not when you’re sharding a model across, right. That observation says that the type of networking you want to do is not exactly ethernet. The way that we do networking for supercomputing is really quite ideal.

The combination of those two ideas convinced me that Mellanox is absolutely the right company, because they’re the world’s leading high-performance networking company. We worked with them in so many different areas in high performance computing already. Plus, I really like the people. The Israel team is world class. We have some 3200 people there now, and it was one of the best strategic decisions I’ve ever made….

…Ben: Let’s say you do get this great 10-year lead. But then other people figure it out, and you’ve got people nipping at your heels. What are some structural things that someone who’s building a business can do to stay ahead? You can just keep your pedal to the metal and say, we’re going to outwork them and we’re going to be smarter. That works to some extent, but those are tactics. What strategically can you do to make sure that you can maintain that lead?

Jensen: Oftentimes, if you created the market, you ended up having what people describe as moats, because if you build your product right and it’s enabled an entire ecosystem around you to help serve that end market, you’ve essentially created a platform.

Sometimes it’s a product-based platform. Sometimes it’s a service-based platform. Sometimes it’s a technology-based platform. But if you were early there and you were mindful about helping the ecosystem succeed with you, you ended up having this network of networks, and all these developers and customers who are built around you. That network is essentially your moat.

I don’t love thinking about it in the context of a moat. The reason for that is because you’re now focused on building stuff around your castle. I tend to like thinking about things in the context of building a network. That network is about enabling other people to enjoy the success of the final market. That you’re not the only company that enjoys it, but you’re enjoying it with a whole bunch of other people.

David: I’m so glad you brought this up because I wanted to ask you. In my mind, at least, and it sounds like in yours, too, Nvidia is absolutely a platform company of which there are very few meaningful platform companies in the world.

I think it’s also fair to say that when you started, for the first few years you were a technology company and not a platform company. Every example I can think of, of a company that tried to start as a platform company, fails. You got to start as a technology first.

When did you think about making that transition to being a platform? Your first graphics cards were technology. There was no CUDA, there was no platform.

Jensen: What you observed is not wrong. However, inside our company, we were always a platform company. The reason for that is because from the very first day of our company, we had this architecture called UDA. It’s the UDA of CUDA.

David: CUDA is Compute Unified Device Architecture?

Jensen: That’s right. The reason for that is because what we’ve done, what we essentially did in the beginning, even though RIVA 128 only had computer graphics, the architecture described accelerators of all kinds. We would take that architecture and developers would program to it.

In fact, Nvidia’s first business strategy was we were going to be a game console inside the PC. A game console needs developers, which is the reason why Nvidia, a long time ago, one of our first employees was a developer relations person. It’s the reason why we knew all the game developers and all the 3D developers.

David: Wow. Wait, so was the original business plan to…

Ben: Sort of like to build DirectX.

David: Yeah, compete with Nintendo and Sega as with PCs?

Jensen: In fact, the original Nvidia architecture was called Direct NV (Direct Nvidia). DirectX was an API that made it possible for the operating system to directly connect with the hardware.

David: But DirectX didn’t exist when you started Nvidia, and that’s what made your strategy wrong for the first couple of years.

Jensen: In 1993, we had Direct Nvidia, which in 1995 became DirectX.

Ben: This is an important lesson. You—

Jensen: We were always a developer-oriented company.

Ben: Right. The initial attempt was we will get the developers to build on Direct NV, then they’ll build for our chips, and then we’ll have a platform. What played out is Microsoft already had all these developer relationships, so you learned the lesson the hard way of—

David: […] did back in the day. They’re like, oh, that could be a developer platform. We’ll take that. Thank you.

Jensen: They did it very differently and did a lot of things right. We did a lot of things wrong.

David: You were competing against Microsoft in the nineties.

Ben: It’s like […] Nvidia today.

Jensen: It’s a lot different, but I appreciate that. We were nowhere near competing with them. If you look now, when CUDA came along and there was OpenGL, there was DirectX, but there’s still another extension, if you will. That extension is CUDA. That CUDA extension allows a chip that got paid for running DirectX and OpenGL to create an install base for CUDA.

David: That’s why you were so militant. I think from our research, it really was you being militant that every Nvidia chip will run CUDA.

Jensen: Yeah. If you’re a computing platform, everything’s got to be compatible. We are the only accelerator on the planet where every single accelerator is architecturally compatible with the others. None has ever existed.

There are literally a couple of hundred million—250 million, 300 million—installed base of active CUDA GPUs being used in the world today, and they’re all architecturally compatible. How would you have a computing platform if NV30 and NV35 and NV39 and NV40 are all different? At 30 years, it’s all completely compatible. That’s the only unnegotiable rule in our company. Everything else is negotiable.

David: I guess CUDA was a rebirth of UDA, but understanding this now, UDA going all the way back, it really is all the way back to all the chips you’ve ever made…

…Ben: Well, as we start to drift toward the end here, we spent a lot of time on the past. I want to think about the future a little bit. I’m sure you spend a lot of time on this being on the cutting edge of AI.

We’re moving into an era where the productivity that software can accomplish when a person is using software can massively amplify the impact and the value that they’re creating, which has to be amazing for humanity in the long run. In the short term, it’s going to be inevitably bumpy as we figure out what that means.

What do you think some of the solutions are as AI gets more and more powerful and better at accelerating productivity for all the displaced jobs that are going to come from it?

Jensen: First of all, we have to keep AI safe. There are a couple of different areas of AI safety that are really important. Obviously, in robotics and self-driving cars, there’s a whole field of AI safety. We’ve dedicated ourselves to functional and active safety, and all kinds of different areas of safety. When to apply human in the loop? When is it okay for a human not to be in the loop? How do you get to a point where, increasingly, the human doesn’t have to be in the loop, but is still largely in the loop?

In the case of information safety, obviously bias, false information, and appreciating the rights of artists and creators, that whole area deserves a lot of attention.

You’ve seen some of the work that we’ve done. Instead of scraping the Internet, we partnered with Getty and Shutterstock to create a commercially fair way of applying artificial intelligence, generative AI.

In the area of large language models in the future of increasingly greater agency AI, clearly the answer is for as long as it’s sensible—and I think it’s going to be sensible for a long time—is human in the loop. The ability for an AI to self-learn, improve, and change out in the wild in a digital form should be avoided. We should collect data. We should carry the data. We should train the model. We should test the model, validate the model before we release it in the wild again. So human is in the loop.

There are a lot of different industries that have already demonstrated how to build systems that are safe and good for humanity. Obviously, the way autopilot works for a plane, two-pilot system, then air traffic control, redundancy and diversity, and all of the basic philosophies of designing safe systems apply as well in self-driving cars, and so on and so forth. I think there are a lot of models of creating safe AI, and I think we need to apply them.

With respect to automation, my feeling is that—and we’ll see—it is more likely that AI is going to create more jobs in the near term. The question is what’s the definition of near term? And the reason for that is the first thing that happens with productivity is prosperity. When the companies get more successful, they hire more people because they want to expand into more areas.

So the question is, if you think about a company and say, okay, if we improve the productivity, then we need fewer people. Well, that’s because the company has no more ideas. But that’s not true for most companies. If you become more productive and the company becomes more profitable, usually they hire more people to expand into new areas.

So long as we believe that there are more areas to expand into, there are more ideas in drugs, there’s drug discovery, there are more ideas in transportation, there are more ideas in retail, there are more ideas in entertainment, there are more ideas in technology. So long as we believe that there are more ideas, the prosperity of the industry which comes from improved productivity results in hiring more people, more ideas.

Now you go back in history. We can fairly say that today’s industry is larger than the world’s industry a thousand years ago. The reason for that is because obviously, humans have a lot of ideas. I think that there are plenty of ideas yet for prosperity and plenty of ideas that can be begat from productivity improvements, but my sense is that it’s likely to generate jobs.

Now obviously, net generation of jobs doesn’t guarantee that any one human doesn’t get fired. That’s obviously true. It’s more likely that someone will lose a job to someone else, some other human that uses an AI. Not likely to an AI, but to some other human that uses an AI.

I think the first thing that everybody should do is learn how to use AI, so that they can augment their own productivity. Every company should augment their own productivity to be more productive, so that they can have more prosperity, hire more people.

I think jobs will change. My guess is that we’ll actually have higher employment, we’ll create more jobs. I think industries will be more productive. Many of the industries that are currently suffering from a lack of labor or workforce are likely to use AI to get themselves back on their feet and return to growth and prosperity. I see it a little bit differently, but I do think that jobs will be affected, and I’d encourage everybody just to learn AI…

…David: Well, and that being our final question for you. It’s 2023, the 30-year anniversary of the founding of Nvidia. If you were magically 30 years old again today in 2023, and you were going to Denny’s with your two best friends who are the two smartest people you know, and you’re talking about starting a company, what are you talking about starting?

Jensen: I wouldn’t do it. I know. The reason for that is really quite simple. Ignoring the company that we would start, first of all, I’m not exactly sure. The reason why I wouldn’t do it, and it goes back to why it’s so hard, is building a company and building Nvidia turned out to have been a million times harder than I expected it to be, any of us expected it to be.

At that time, if we realized the pain and suffering, just how vulnerable you’re going to feel, and the challenges that you’re going to endure, the embarrassment and the shame, and the list of all the things that go wrong, I don’t think anybody would start a company. Nobody in their right mind would do it.

I think that that’s the superpower of an entrepreneur. They don’t know how hard it is, and they only ask themselves how hard can it be? To this day, I trick my brain into thinking, how hard can it be? Because you have to.

Ben: Still, when you wake up in the morning.

Jensen: Yup. How hard can it be? Everything that we’re doing, how hard can it be? Omniverse, how hard can it be?

David: I don’t get the sense that you’re planning to retire anytime soon, though. You could choose to say like, whoa, this is too hard.

Ben: The trick is still working.

David: Yeah, the trick is still working.

… Jensen: Yeah. The thing to keep in mind is, at all times what is the market opportunity that you’re engaging in? That informs your size. I was told a long time ago that Nvidia can never be larger than a billion dollars. Obviously, it’s an underestimation, under imagination of the size of the opportunity. It is the case that no chip company can ever be so big. But if you’re not a chip company, then why does that apply to you?

This is the extraordinary thing about technology right now. Technology is a tool and it’s only so large. What’s unique about our current circumstance today is that we’re in the manufacturing of intelligence. We’re in the manufacturing of work world. That’s AI. The world of tasks doing work—productive, generative AI work, generative intelligent work—that market size is enormous. It’s measured in trillions.

One way to think about that is if you built a chip for a car, how many cars are there and how many chips would they consume? That’s one way to think about that. However, if you build a system that, whenever needed, assisted in the driving of the car, what’s the value of an autonomous chauffeur every now and then?

Obviously, the problem becomes much larger, the opportunity becomes larger. What would it be like if we were to magically conjure up a chauffeur for everybody who has a car, and how big is that market? Obviously, that’s a much, much larger market.

What we discovered, what Nvidia has discovered, and what some others have discovered, is that by separating ourselves from being a chip company and instead building on top of the chip, so that you’re now an AI company, the market opportunity has grown by probably a thousand times.

Don’t be surprised if technology companies become much larger in the future because what you produce is something very different. That’s the way to think about how large your opportunity can be and how large you can be. It has everything to do with the size of the opportunity.

5. The 4 Billion Pieces of Paper Keeping Global Trade Afloat – Archie Hunter

They are relatively easy to fake. Frequently get lost. And can add huge amounts of time to any journey. Yet paper documents still rule in the $25 trillion global cargo trade with four billion of them in circulation at any one time.

It is a system that has barely changed since the nineteenth century. But that dependence on bits of paper being flown from one party to another has become a vulnerability for companies which move and finance the world’s resources around the globe.

In one high profile case, banks including ING Groep NV discovered in 2020 that they had been given falsified bills of lading — shipping documents that designate a cargo’s details and assign ownership — in return for issuing credit to Singapore’s Agritrade Resources. In another dispute, HSBC Holdings Plc and other banks have spent three years in legal wrangling to recover around $3.5 billion from collapsed fuel trader Hin Leong, which is accused by prosecutors of using “forged or fabricated documentation,” when applying for credit.

The International Chamber of Commerce estimates that at least 1% of transactions in the global trade financing market, or around $50 billion per year, are fraudulent. Banks, traders and other parties have lost at least $9 billion through falsified documents in the commodities industry alone over the past decade, according to data compiled by Bloomberg…

…Less than 2% of global trade is transacted via digital means, but that is set to change. Of the world’s top 10 container shipping lines, nine — which account for over 70% of global container freight — have committed to digitizing 50% of their bills of lading within five years, and 100% by 2030. Some of the world’s biggest mining companies including BHP Group Ltd., Rio Tinto Group, Vale SA and Anglo American Plc have voiced their support for a similar campaign in the bulk shipping industry.

The greatest barrier to that expansion has been legal. Banks, traders, insurers and shipping companies have had the means to go digital, but up to now a paper bill of lading has been the only document recognized by English law that gives the holder title ownership to a cargo. A bank or insurer won’t cover a deal that isn’t legally secure, and without financing, deals are unlikely to happen.

To address that, the UK passed the Electronic Trade Documents Act in July which enshrines digital documents with the same legal powers as paper ones. English law on trade documents goes back centuries. It underpins around 90% of global commodities and other trade contracts. So the UK law change represents a big step. Singapore, another center for maritime law, created a similar legal framework in 2021 conducting its first electronic bill of lading transactions in 2022. Similar legislation is expected in France later this year.

The next challenge will be getting companies to change processes that have been in place for hundreds of years. For all its faults, paper is something that everyone understands and while businesses are happy to join a critical mass of digital trade, few are keen to be the first to take steps in that direction…

…For now, when a cargo of coffee is shipped from Brazil to a roaster like Illycaffe SpA in Europe it sets off a flurry of printing. Three identical bills of lading need to be produced and gradually make their way between sellers, banks and buyers, stopping off at law firms and consultants in order to guarantee the rights to the cargo across its 20 day journey. There are also paper invoices, certificates of analysis, and additional documents to measure weight, origin, packing, and moisture content if it is ores that are being shipped.

It is impossible to accurately calculate how many documents are printed for a given trade route but Brazil exports over 900,000 tons of coffee to the European Union every year. And that represents a lot of paper — McKinsey estimated that at least 28,000 trees a year could be saved by reduced friction in the container trade.

As well as providing details of the cargo, its destination and origin, the documents give the holder ownership rights over whatever is being shipped, crucial for holding transport companies accountable for any damages or loss that might occur, or indeed to give banks and insurers some security when providing hundreds of millions of dollars to finance a single shipment…

…The multi-step process begins when an agent prepares the bill of lading and receives sign-off that all the details are correct from the seller, ship owner, trader and end-buyer. The ship is then loaded and original bills of lading are issued by the vessel’s owner and signed by its captain. The three original bills of lading are then released to the seller — in the Brazilian example, the coffee producer — which passes them on to its financing bank along with additional documents to receive payment. The coffee company’s bank will endorse the bill of lading by writing on the back of it.

In many cases a carrier will need to set sail before this process is complete, so the vessel’s captain will provide a shipping agent with a letter of authority to complete the documents on their behalf.

The next leg of the journey for the bill of lading is from the coffee company’s bank to the trading group’s equivalent via DHL Worldwide Express or FedEx Corp. The trader’s bank then makes the payment to the producer’s bank against the receipt of those documents. Assuming everything is okay at this stage the bank working for the trader endorses the bill of lading, signs, stamps, dates and delivers it to its counterpart representing the buyer of the cargo, which in turn pays the trader’s bank for the goods and hands the bill of lading to the master at the destination port to obtain the release of goods.

This pass-the-parcel style approach to the bureaucracy is happening in parallel with the physical goods being loaded, shipped around the world and delivered. Sometimes the documents may only need to move between a small cluster of offices in Geneva or Singapore — where companies across the supply and finance chain have set up offices to be close to one another. But often they are far more tortuous…

…Digital startups like Vakt and ICE Digital Trade offer the opportunity to transfer trade and other financial documents electronically. Bolero has been doing it since the 1990s. Oil majors, such as BP Plc and Shell Plc, traders like Gunvor Group and banks including Société Générale SA have stakes in Vakt, while Intercontinental Exchange bought essDOCS last year for an undisclosed sum, betting that the move online will accelerate.

But the lack of public examples highlights the uphill battle to full adoption of electronic bills of lading. Trafigura used essDOCS for an Australian iron ore shipment back in 2014. Taiwanese shipping line Wan Hai used Bolero’s electronic bill of lading for a polyester filament trade to China in 2018.

This low take-up is largely due to the continued lack of legal recognition in many jurisdictions; banks — which finance the cargoes in transit — will not accept a digital bill of lading as collateral in most cases. Advocates say the UK law reform should change that.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Microsoft, and Netflix. Holdings are subject to change at any time.

When DCFF Models Fail

Investors may fall into the trap of valuing a company based on cash flow to the firm. But cash flow to the firm is different from cash flow to shareholders.

Investing is based on the premise that an asset’s value is the cash flow it generates over its lifetime, discounted to the present. This applies to all asset classes.

For real estate, the cash flow generated is rent. For bonds, it’s the coupon. For companies, it is profits.

In the case of stocks, investors may use cash flow to the firm to value a company. Let’s call this the DCFF (discounted cash flow to the firm) model. But valuing a stock based on cash flow to the firm may not always be accurate for shareholders.

This is because free cash flow generated by the firm does not equate to cash returned to the shareholder.

Take, for instance, two identical companies. Both generate $1 per share in free cash flow each year for 10 years. Company A hoards all the cash for 10 years before finally returning it to shareholders. Company B, however, returns the $1 in free cash flow to shareholders at the end of each year.

Investors who use a DCFF model will value both companies equally. But the timing of the cash returned to shareholders is different for the two companies. Company B should be more valuable to shareholders, as they receive their cash earlier.

To avoid falling for this “valuation trap”, we should use a dividend discount model instead of a DCFF model.
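To put rough numbers on the example above, here’s a minimal sketch in Python (our own illustration: the $1 per share of free cash flow follows the example, while the 10% discount rate and the simplification that Company A earns no interest on the cash it hoards are assumptions):

```python
# A minimal sketch with illustrative numbers: both companies generate $1 per share
# of free cash flow for 10 years, but Company A hoards the cash and pays it all out
# in year 10, while Company B pays $1 out at the end of each year.
# Assumptions: a 10% discount rate and no interest earned on Company A's hoarded cash.

r = 0.10          # assumed discount rate
years = range(1, 11)

# DCFF view: discount the firm's free cash flow -- identical for both companies.
dcff_value = sum(1.00 / (1 + r) ** t for t in years)

# DDM view: discount the cash actually paid to shareholders.
ddm_company_a = 10.00 / (1 + r) ** 10                     # one $10 payout in year 10
ddm_company_b = sum(1.00 / (1 + r) ** t for t in years)   # $1 paid every year

print(f"DCFF value (both companies): ${dcff_value:.2f}")     # ~$6.14
print(f"DDM value, Company A:        ${ddm_company_a:.2f}")  # ~$3.86
print(f"DDM value, Company B:        ${ddm_company_b:.2f}")  # ~$6.14
```

At a 10% discount rate, the DCFF value and Company B’s dividend value are both roughly $6.14 per share, while Company A’s shareholders are really only getting about $3.86 of present value.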

Companies trading below net cash

The timing of cash returned to shareholders matters a lot to the value of a stock.

This is also why we occasionally see companies trading below the net cash on their balance sheets.

If you use a DCFF model, cash on the balance sheet is not discounted. As such, a company that will generate positive cash flows over its lifetime should technically never be valued below its net cash if you are relying on a DCFF model.

However, this again assumes that shareholders will be paid out immediately from the balance sheet. The reality is often very different. Companies may withhold payment to shareholders, leaving shareholders waiting for years to receive the cash.
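Here’s a quick hypothetical illustration (the numbers are assumptions, not taken from any real company): if shareholders only expect to see the balance-sheet cash after a long wait, discounting that wait can justify a price below net cash.

```python
# Hypothetical figures for illustration only: a company holds $5.00 per share of
# net cash, but shareholders expect it to be paid out only eight years from now.
r = 0.10                  # assumed discount rate
years_until_payout = 8    # assumed waiting period
net_cash_per_share = 5.00

value_today = net_cash_per_share / (1 + r) ** years_until_payout
print(f"Net cash on the books:       ${net_cash_per_share:.2f} per share")
print(f"Value to shareholders today: ${value_today:.2f} per share")  # ~$2.33
```

A share price anywhere between roughly $2.33 and $5.00 would then look “below net cash” without being irrational.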

Double counting

Using the DCFF model may also result in double counting.

For instance, a company may generate $1 of free cash flow but use that cash to acquire another company for growth. For valuation purposes, that $1 has been invested, so it should not also be counted as cash available to shareholders when valuing the asset.

Including this free cash flow in a DCFF model, on top of the future cash flows that the acquisition will generate, results in double counting the cash.
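Here’s a small sketch of the double count, using assumed numbers of our own: suppose the $1 of free cash flow is spent on an acquisition that is expected to add $0.12 per share of cash flow every year thereafter.

```python
# Assumed, illustrative numbers: $1.00 of free cash flow arrives at the end of
# year 1 and is immediately spent on an acquisition expected to add $0.12 per
# share of cash flow in perpetuity, starting in year 2.
r = 0.10                                  # assumed discount rate
pv_of_fcf = 1.00 / (1 + r)                # the $1 of FCF, received at end of year 1
pv_of_acquired = (0.12 / r) / (1 + r)     # $0.12 perpetuity from year 2, discounted

naive_value = pv_of_fcf + pv_of_acquired  # counts the $1 AND what the $1 bought
adjusted_value = pv_of_acquired           # the $1 was reinvested; only its fruits count

print(f"Double-counted value: ${naive_value:.2f}")     # ~$2.00
print(f"Counting the $1 once: ${adjusted_value:.2f}")  # ~$1.09
```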

Don’t forget the taxes

Not only is the DCFF model an inaccurate proxy for cash flow to shareholders, but investors also often forget that shareholders may have to pay taxes on dividends earned.

This tax eats into shareholder returns and should be included in all models. For instance, non-residents of the US may have to pay withholding taxes of up to 30% on dividends earned from US stocks.

When modelling the value of a company, we should factor these withholding taxes into our valuation.

This is especially important for long-term investors who want to hold a stock for long periods or even in perpetuity. In that case, returns come solely from dividends rather than from selling the stock.
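Here’s a short sketch of the adjustment (the $2.00 dividend and the flat 30% rate are assumptions for illustration; actual withholding rates depend on the investor’s tax treaty):

```python
# Assumed, illustrative numbers: a US stock pays $2.00 per share in dividends and
# the investor is a non-resident subject to a 30% dividend withholding tax.
dividend_per_share = 2.00
withholding_rate = 0.30

after_tax_dividend = dividend_per_share * (1 - withholding_rate)
print(f"Dividend received after withholding: ${after_tax_dividend:.2f}")  # $1.40

# In a dividend discount model, it is this $1.40 -- not the headline $2.00 --
# that should be discounted when estimating value to this shareholder.
```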

The challenges of the DDM

To me, the dividend discount model is the better way to value a stock as a shareholder. However, using the dividend discount model effectively has its own challenges.

For one, dividends are not easy to predict. Many companies in their growth phase do not yet pay a dividend, making it difficult for investors to predict the pattern of future dividend payments.

Our best option is to look at the revenue growth trajectory and make a reasonable estimate of when management will decide to start paying a dividend.

In some cases, a company may have a policy of using all of its cash flow to buy back shares. This is another form of growth investment for the firm, as it decreases the number of outstanding shares.

We should also factor these capital allocation policies into our models to better estimate how much will be paid out in dividends in the future, which in turn determines the true value of the company today.
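Here’s a rough sketch of the kind of estimate described above. Every input (the discount rate, the buyback pace, the year dividends begin, the payout amount, the share count and the horizon) is an assumption of our own rather than a forecast of any real company:

```python
# All inputs below are assumptions for illustration, not forecasts.
r = 0.10                      # assumed discount rate
buyback_rate = 0.02           # company retires 2% of its shares each year
dividend_start_year = 6       # guess of when management starts paying dividends
total_annual_payout = 100e6   # guessed total dividend once payments begin
shares_outstanding = 100e6    # assumed current share count
horizon_years = 30            # how far out we bother to model

value_per_share = 0.0
for year in range(1, horizon_years + 1):
    shares_outstanding *= (1 - buyback_rate)      # buybacks shrink the share count
    if year >= dividend_start_year:
        dividend_per_share = total_annual_payout / shares_outstanding
        value_per_share += dividend_per_share / (1 + r) ** year

# About $7.5 per share with these assumed inputs.
print(f"Estimated value per share over {horizon_years} years: ${value_per_share:.2f}")
```

The output itself matters less than the mechanism: buybacks shrink the share count, which raises future dividends per share, so the company’s capital allocation policy directly changes the value the model produces.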


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 22 October 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been consistently sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 22 October 2023:

1. Margin of safety: Most important words in investing – Chin Hui Leong

Warren Buffett once said that “margin of safety” has been the bedrock of his investing success for decades. But what do these three words mean?

Let’s use an analogy: imagine you are an engineer and you are tasked to build a bridge that can withstand cars weighing 1,000 kg every day. How strong would you make the bridge? Would it be able to support 1,000 kg or 1,500 kg? 

If you chose the first option, you are cutting it close. But if you chose the second option, you have grasped the concept of margin of safety.

In investing, margin of safety is about leaving room for error because you’ll never know what will happen in the future.

For example, think about the pandemic which shook the world three years ago, a rare and unpredictable event that had a huge impact on businesses and industries. Few, if any, could have foreseen or prepared for it. That’s why margin of safety is important; it helps protect your stock portfolio from unexpected risks…

…As a growth investor, I view the concept of margin of safety differently.

For me, the value created by a business is my margin of safety. On the surface, my approach appears to be at odds with the concept of margin of safety. Why place your faith on a business which is inherently filled with uncertainties? Yet, I would argue my approach is easier to apply.

Consider the contrast between value and growth investing.

In a value investing approach, as you have seen earlier, your task is to figure out whether a business which ran into trouble is not doing as badly as it seems. You are compensated, in this case, with a lower stock price. In comparison, when you pick growing businesses which are already doing well, all you have to figure out is whether they can continue doing well in the future. This business puzzle, to me, is easier to solve. Sure, you will pay a higher price at the start as growing businesses rarely sell at a discount. But if done well, the value which builds up over time will compensate you for the risk taken.

To share an example, take Chipotle Mexican Grill (NYSE: CMG), a US-based, fast-casual Mexican food chain.

In 2006, the business was generating less than US$6.3 million in free cash flow (FCF). Today, the food chain churned out US$1.25 billion in FCF over the past 12 months, some 200 times more than its 2006 level. During this period, its share count also declined by 14 per cent. Thus, on a per share basis, the company’s current FCF per share is over 230 times higher than 2006’s FCF per share…

…At the end of 2006, the stock was trading at US$57 or around 295 times its 2006 free cash flow per share, a valuation that cannot be described as cheap.

Fast forward to today: at a share price of around US$1,822, the free cash flow per share multiple has declined to 40.5 times, over 86 per cent lower compared to 2006’s level.

Yet, despite the drastic fall in this multiple, eagle-eyed readers would have noticed that shares today are 32 times higher compared to the end of 2006…

…If you compare the share price at the end of 2006 (US$57) to today’s FCF per share (US$45 per share), you would get a valuation which is less than 1.3 times. This is an extremely low multiple for the stock, giving you plenty of room for error.

And what drove the creation of this margin of safety? That’s right, it was the value of the business, signified by the growth of its FCF per share.
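As a quick back-of-the-envelope check on the figures quoted above (the share prices and multiples come from the excerpt; the arithmetic is ours):

```python
# Figures quoted in the excerpt above; the arithmetic is just a sanity check.
price_2006, price_today = 57.0, 1822.0
fcf_multiple_2006, fcf_multiple_today = 295.0, 40.5

fcf_per_share_today = price_today / fcf_multiple_today
print(f"FCF per share today:             ~${fcf_per_share_today:.0f}")               # ~$45
print(f"Share price gain since 2006:     ~{price_today / price_2006:.0f}x")          # ~32x
print(f"Decline in the FCF multiple:     ~{1 - fcf_multiple_today / fcf_multiple_2006:.0%}")  # ~86%
print(f"2006 price vs today's FCF/share: ~{price_2006 / fcf_per_share_today:.1f}x")  # ~1.3x
```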

2. The Risks No One Is Talking About – Brian Richards

Generally speaking, risky investments are well-known to be risky, and most market participants have an eyes-wide-open approach to the risks. Perhaps positions are kept smaller, or are hedged, as a buffer. Contrast that with anything deemed “safe”—whether deemed so because of overconfidence, naivete, a low-looking valuation, or a psychological illusion. Its presumed safety can lull investors into a sense of security that prevents appropriate diligence or analysis.

Think about the role of bond ratings in the Great Financial Crisis, when investment-grade ratings were given to paper that turned out not to be investment grade. The U.S. Financial Crisis Inquiry Commission’s published report does not mince words:

We conclude the failures of credit rating agencies were essential cogs in the wheel of financial destruction. The three credit rating agencies were key enablers of the financial meltdown. The mortgage-related securities at the heart of the crisis could not have been marketed and sold without their seal of approval. Investors relied on them, often blindly. In some cases, they were obligated to use them, or regulatory capital standards were hinged on them. This crisis could not have happened without the rating agencies. Their ratings helped the market soar and their downgrades through 2007 and 2008 wreaked havoc across markets and firms….

….As my friend Morgan Housel says:

Asking what the biggest risks are is like asking what you expect to be surprised about. If you knew what the biggest risk was you would do something about it, and doing something about it makes it less risky. What your imagination can’t fathom is the dangerous stuff, and it’s why risk can never be mastered.

It seems to me that managing risk is a process without an end, an art rather than a science. The key, I think, is to be humble about what risks exist (even in so-called safer assets); to be upfront about the things you take for granted; to try to be mindful of your own behavioral biases (or the psychological illusions designed to trick your brain); and to realize that the true risks are hard to recognize ahead of time. Equities that appear to be “safe”—such as the Nifty 50 in the 1970s, to many eyes—may be among the least safe of all; and those that appear risky may be among the most promising and potentially less risky than average.

3. Freight recession unlike any other in history – Craig Fuller

In 2000, freight brokerage was a cottage industry, representing a small percentage of the trucking industry — 6%. Fast forward to 2023, and freight brokers handle more than 20% of all trucking freight.

As freight brokerages have taken on a larger percentage of the market, they have reshaped the typical freight cycle.

In the early 2000s, it was uncommon to see a freight broker in the primary position of a shipper’s routing guide.

Back then, freight brokers usually handled freight that asset-based carriers didn’t want or that was priced too low for the carriers to make their margins. Freight brokers would also serve as a last resort if carriers had freight surges that they could not handle.

Since then, however, things have changed dramatically. As freight brokerages invested in technology and customer service, they began to offer a more compelling product than their asset-based competitors and took on a greater role in routing guides.

Today, it is common for multiple freight brokerages to be in primary positions in shippers’ routing guides, often as the top choice, beating out their asset-based competitors…

…As of April 2023, there were more than 531,000 active trucking fleets that own or lease at least one tractor in the U.S., according to Carrier Details, which provides trucking authority intelligence using data from the Federal Motor Carrier Safety Administration (FMCSA) and insurance registrations (available on SONAR).

Contrast that with 1980, when there were around 18,000 U.S. trucking companies…

…The current freight cycle has been different. In previous cycles when freight rates have been low, many of the weakest carriers exited the industry. While some of the companies that went out of business in 2019 — the last major down cycle — were quite large, such as Celadon and New England Motor Freight, most were small “mom-and-pop” companies that lacked the resources to stay in business.

In 2023, many people, including me, expected that as before, many small carriers would roll over and quickly exit the freight market as conditions became difficult. After all, we thought, when the freight economy slowed, high-quality loads for small carriers would dry up. It wasn’t just rates, but also load counts that dropped.

FreightWaves’ Rachel Premack reported in an April 28 article that the “number of authorized interstate trucking fleets in the U.S. declined by nearly 9,000 in the first quarter of 2023 …”

So while companies have certainly left the industry, small carriers overall have held on for far longer than many of us expected. The reason is that even as rates have declined — in many cases lower than 2019 rates — freight brokerages have kept many small truckers supplied with quality load opportunities…

…Newton’s law of gravity, a fundamental rule in physics, is commonly cited in commodity markets like trucking.

When capacity tightens and drives up rates, new entrants enter the market, flooding it with capacity and driving down rates.

The same carriers that entered the trucking industry to take advantage of high rates are now being forced to take much lower rates to keep their trucks moving.

In past cycles, when the freight market softened, we would see a massive purge in capacity. While there have been reductions, it has happened much slower than anticipated.

A key reason it has been so slow to churn out capacity is because of the proliferation of freight brokers.

In past down cycles, freight brokers would lose a large percentage of their volume, as shippers kept to a small number of core carriers in their routing guides.

But over the past decade, freight brokerages have positioned themselves in the role as a core carrier, enabling them to maintain load volumes, even in down markets.

So in this down market, most freight brokerages have maintained a high percentage of load volumes, even as rates fall.

The loads may not pay much, but brokers are able to supply carriers with loads that pay just enough to cover the monthly truck bill.

Carriers may be losing money, but that small amount of cash flow will keep them in the game longer than would be otherwise expected…

…According to SONAR‘s Carrier Details Total Trucking Authorities index, from 2010 to August 2020, the trucking industry added an average of 199 new trucking fleets per week.

From August 2020 to September 2022, the number of new trucking fleets exploded by an average of 1,124 new fleets per week.

The trucking market currently has 63,000 more fleets than the 2010-2020 trend line would suggest…

…Unless there is an acceleration in revocations (i.e. trucking companies shuttering their authorities), FreightWaves models suggest the trucking market has 78 weeks to go before capacity is back in balance with historical trends.

4. Where are all the defaults? – Greg Obenshain

A rapidly rising Fed funds rate has historically led to high levels of defaults as weaker borrowers get squeezed between higher borrowing costs and slowing growth. Just looking at the number of Chapter 11 bankruptcy filings, history would appear to be repeating itself in this rate-hiking cycle. Chapter 11 filings have increased even as the economy has thus far appeared to avoid a recession.

Usually, as the bankruptcy rate spikes, the high-yield spread also rises. But so far, the high-yield spread has barely moved.

If we just used bankruptcy rates to predict high-yield spreads, then we’d expect high-yield spreads to be 7.0%, not the 4.4% we see today. It’s not just small companies defaulting. Fourteen of the bankruptcies in 2023 have had over $1 billion of liabilities, including Mallinckrodt, Yellow Corp, Wesco Aircraft, Avaya, and Party City.

One reason we believe high-yield spreads haven’t spiked yet is the migration of lower-quality borrowers—those most likely to default—out of high yield and into the private credit market. According to Moody’s, the number of issuers with B3 debt has fallen as these issuers have departed for private credit. They do not mince words about this: “Ultimately, we believe the growth of the alternative asset managers will contribute to systemic risk. This group of lenders comprise both private equity and private credit segments and lack prudential oversight, as opposed to the highly regulated banking sector.”

The leveraged loan market, which can be thought of as the loan market rated by credit agencies, is now about as big as the high-yield market, at around $1.3 trillion. The private credit market, which can be thought of as the loan market not rated by credit agencies, is much harder to measure but is reported to also be over $1 trillion. And much of that growth has come from riskier borrowers…

…Where loan and bond amounts are available (54 of the 62), Moody’s has tracked $35 billion of loan defaults versus $26 billion of bond defaults. 30 were loan-only capital structures, 12 were bond-only capital structures, and 12 were capital structures with both loans and bonds. Of the 62 defaults listed, 37 were distressed exchanges and 19 of the distressed exchanges were for loan-only capital structures versus 10 for bond-only capital structures. Distressed exchanges, which are debt renegotiations conducted directly with lenders and outside the bankruptcy system, are often not captured by the default statistics and are not counted in the running count of Chapter 11s with which we started the article. This time is different in a way. Defaults are happening. They are just not happening where they used to, and they are happening in a different way (distressed exchanges) than they used to.

This does not mean that all loans are doing poorly. In fact, BKLN, a loan ETF with $4.4 billion of assets, has returned 9.1% year to date as it benefits from higher underlying interest on its loans. But that fund holds more than half its assets in BB or BBB rated credit and less than 1% in CCC loans. It is not heavily exposed to the companies in the low single-B rating. That is where the most pain is likely to be. The rating composition of a market matters when considering defaults. And there has been a significant shift of low-rated credits to the private credit markets.

5. Buffett’s World War II Debut – Marcelo Lima

At this year’s Berkshire Hathaway annual meeting, Buffett did something I wish he did more often: he put up some very educational slides. The first showed the front page of the New York Times on Sunday, March 8, 1942, three months after Japan attacked Pearl Harbor. If you think today’s headlines are scary, you’re in for quite a shock…

…Buffett had his sights on Cities Service preferred stock, which was trading at $84 the previous year and had declined to $55 in January. And now, on March 10th, it was selling at $40.

That night, 11-year-old Buffett decided it was a good time to invest. As Buffett recounted, “Despite these headlines, I said to my dad, ‘I think I’d like to pull the trigger, and I’d like you to buy me three shares of Cities Service preferred’ the next day. And that was all I had. I mean, that was my capital accumulated over the previous five years or thereabouts. And so my dad, the next morning, bought three shares.”…

…Buffett successfully top-ticked the market at $38 ¼, with the shares closing at $37 (down 3.3 percent). This, he joked, “was really kind of characteristic of my timing in stocks that was going to appear in future years.”

The world’s greatest-investor-in-training would eventually see the shares called by the Cities Service Company for over $200 per share more than four years later…

… But the story doesn’t have a happy ending…

…From the $38 ¼ Buffett paid, the stock went on to decline to $27 (down nearly 30 percent from his cost!).

What Buffett didn’t say at the annual meeting is that he had enlisted his sister Doris as a partner in the idea of buying the shares. Every day on the way to school, Doris “reminded” him that her stock was down. (This story is recounted in the excellent book Snowball).

After enduring so much pain, he was happy to sell at a profit only a few months later, in July, for $40. “As they always say, ‘It seemed like a good idea at the time,’” Buffett joked.

Despite the ugly headlines, Buffett said everyone at the time knew that America was going to win the war. The incredible economic machine that had started in 1776 would see to it. So imagine, in the middle of this crisis, you had invested $10,000 in the S&P 500. There were no index funds at the time, but you could have bought the equivalent basket of the top 500 American companies.

Once you did that, imagine you never read another newspaper headline, never traded again, never looked at your investments.

How much would you have today? Buffett again: “You’d have $51 million. And you wouldn’t have had to do anything.” 
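As a rough check on that figure, $10,000 growing to about $51 million over the roughly 81 years between 1942 and the 2023 annual meeting works out to around 11% a year compounded. A minimal sketch of the arithmetic, with the holding period being my own approximation:

```python
# Back-of-envelope compounding check for Buffett's example. The 81-year
# holding period is an approximation, not a figure from the talk.
initial = 10_000
final = 51_000_000
years = 2023 - 1942  # roughly 81 years

cagr = (final / initial) ** (1 / years) - 1
print(f"Implied annual return: {cagr:.1%}")  # roughly 11% a year, compounded
```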


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Chipotle Mexican Grill. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q3 2023

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the third quarter of 2023.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings release and conference call – for the third quarter of 2023 – happened just last week and contained useful insights on the state of American consumers and businesses. The bottom line is this: Consumer spending and the overall economic environment are solid, but there are substantial risks on the horizon.

What follows are quotes from JPMorgan’s management team that I picked up from the call.


1. Consumer spending is stable, but consumers are now spending their cash buffers down to pre-pandemic levels

Consumer spend growth has now reverted to pre-pandemic trends with nominal spend per customer stable and relatively flat year-on-year. Cash buffers continue to normalize to pre-pandemic levels with lower income groups normalizing faster.

2. Auto loan originations and auto loan growth were strong

And in Auto, originations were $10.2 billion, up 36% year-on-year as we saw competitors pull back and we gained market share…

…In Auto, we’ve also seen pretty robust loan growth recently, both as a function of sort of slightly more competitive pricing on our side as the industry was a little bit slow to raise rates. And so we lost some share previously, and that’s come back now. And generally, the supply chain situation is better, so that’s been supportive. As we look forward there, it should be a little bit more muted.

3. Businesses have a healthy appetite for funding from capital markets…

In terms of the outlook, we’re encouraged by the level of capital markets activity in September, and we have a healthy pipeline going into the fourth quarter.

4. …although loan demand from businesses appears to be relatively muted

And I think generally in Wholesale, the loan growth story is going to be driven just by the economic environment. So depending on what you believe about soft landing, mild recession, no landing, we have slightly lower or slightly higher loan growth. But in any case, I would expect it to be relatively muted.

5. Loan losses (a.k.a. the net charge-off rate) for credit cards are improving, with the prior expectation for the 2023 Card net charge-off rate at 2.6% compared to the current expectation of 2.5%…

On credit, we now expect the 2023 Card net charge-off rate to be approximately 2.5%, mostly driven by denominator effects due to recent balance growth.
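A quick toy example of the “denominator effect” management is describing: if dollar charge-offs stay flat while card balances grow, the charge-off rate falls even though credit losses have not improved. The numbers below are mine and purely illustrative:

```python
# Illustrative only: a charge-off *rate* can fall purely because balances grow.
charge_offs = 4.0        # billions of dollars, assumed unchanged
balances_before = 160.0  # billions, before recent balance growth
balances_after = 180.0   # billions, after recent balance growth

print(f"Rate before growth: {charge_offs / balances_before:.2%}")  # 2.50%
print(f"Rate after growth:  {charge_offs / balances_after:.2%}")   # 2.22%
```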

6. …and loan growth in credit cards is still robust, although it has tracked down somewhat

So we were seeing very robust loan growth in Card, and that’s coming from both spending growth and the normalization of revolving balances. As we look forward, we’re still optimistic about that, but it will probably be a little bit more muted than it has been during this normalization period.

7. The near-term outlook for the US economy has improved

I think our U.S. economists had their central case outlook to include a very mild recession with, I think, 2 quarters of negative 0.5% of GDP growth in the fourth quarter and first quarter of this year. And that then got revised out early this quarter to now have sort of modest growth, I think around 1% for a few quarters into 2024.

8. There is no weakness from either consumers or businesses in meeting debt obligations

And I think your other question was, where am I seeing softness in credit? And I think the answer to that is actually nowhere, roughly, or certainly nowhere that’s not expected. Meaning we continue to see the normalization story play out in consumer more or less exactly as expected. And then, of course, we are seeing a trickle of charge-offs coming through the office space. You see that in the charge-off number of the Commercial Bank. But the numbers are very small and more or less just the realization of the allowance that we’ve already built there.

9. Demand for housing loans is constrained

And of course, Home Lending remains fairly constrained both by rates and market conditions.

10. Overall economic picture looks solid, but there are reasons for caution – in fact, JPMorgan’s CEO, Jamie Dimon, thinks the world may be in the most dangerous environment seen in decades 

And of course, the overall economic picture, at least currently, looks solid. The sort of immaculate disinflation trade is actually happening. So those are all reasons to be a little bit optimistic in the near term, but it’s tempered with quite a bit of caution…

…However, persistently tight labor markets as well as extremely high government debt levels with the largest peacetime fiscal deficits ever are increasing the risks that inflation remains elevated and that interest rates rise further from here. Additionally, we still do not know the longer-term consequences of quantitative tightening, which reduces liquidity in the system at a time when market-making capabilities are increasingly limited by regulations. Furthermore, the war in Ukraine compounded by last week’s attacks on Israel may have far-reaching impacts on energy and food markets, global trade, and geopolitical relationships. This may be the most dangerous time the world has seen in decades. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 15 October 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 15 October 2023:

1. The Testament of a Furniture Dealer – Ingvar Kamprad

We have decided once and for all to side with the many. What is good for our customers is also, in the long run, good for us. This is an objective that carries obligations.

All nations and societies in both the East and West spend a disproportionate amount of their resources on satisfying a minority of the population. In our line of business, for example, far too many of the fine designs and new ideas are reserved for a small circle of the affluent. That situation has influenced the formulation of our objectives…

…The many people usually have limited financial resources. It is the many people whom we aim to serve. The first rule is to maintain an extremely low level of prices. But they must be low prices with a meaning. We must not compromise either functionality or technical quality…

…The concept of a low price with a meaning makes enormous demands on all our co-workers. That includes product developers, designers, buyers, office and warehouse staff, sales people and all other cost bearers who are in a position to influence our purchase prices and all our other costs – in short, every single one of us! Without low costs, we can never accomplish our purpose…

… A job must never be just a livelihood. If you are not enthusiastic about your job, a third of your life goes to waste, and a magazine in your desk drawer can never make up for that.

For those of you who bear any kind of leadership responsibility, it is crucially important to motivate and develop your co-workers. A team spirit is a fine thing, but it requires everybody in the team to be dedicated to their tasks. You, as the captain, make the decisions after consulting the team. There is no time for argument afterwards. Take a football team as your model!

Be thankful to those who are the pillars of our society! Those simple, quiet, taken-for-granted people who always are willing to lend a helping hand. They do their duty and shoulder their responsibility without being noticed. To them, a defined area of responsibility is a necessary but distasteful word. To them, the whole is just as self-evident as always helping and always sharing. I call them stalwarts simply because every system needs them. They are to be found everywhere – in our warehouses, in our offices, among our sales force. They are the very embodiment of the IKEA spirit…

…Profit is a wonderful word! Let us start by stripping the word profit of its dramatic overtones. It is a word that politicians often use and abuse. Profit gives us resources. There are two ways to get resources: either through our own profit, or through subsidy. All state subsidies are paid for either out of the state’s profit on operations of some kind, or from taxes of some kind that you and I have to pay. Let us be self-reliant in the matter of building up financial resources too…

…Wasting resources is a mortal sin at IKEA. It is not all that difficult to reach set targets if you do not have to count the cost. Any designer can design a desk that will cost 5,000 kronor. But only the most highly skilled can design a good, functional desk that will cost 100 kronor.

Expensive solutions to any kind of problem are usually the work of mediocrity.

We have no respect for a solution until we know what it costs. An IKEA product without a price tag is always wrong! It is just as wrong as when a government does not tell the taxpayers what a “free” school lunch costs per portion.

Before you choose a solution, set it in relation to the cost. Only then can you fully determine its worth…

…Planning is often synonymous with bureaucracy. Planning is, of course, needed to lay out guidelines for your work and to enable a company to function in the long term. But do not forget that exaggerated planning is the most common cause of corporate death. Exaggerated planning constrains your freedom of action and leaves you less time to get things done. Complicated planning paralyses. So let simplicity and common sense guide your planning…

…If we from the start had consulted experts about whether a little community like Älmhult could support a company like IKEA, they would have undoubtedly advised against it. Nevertheless, Älmhult is now home to one of the world’s biggest operations in the home furnishings business.

By always asking why we are doing this or that, we can find new paths…

…It is no coincidence that our buyers go to a window factory for table legs and a shirt factory for cushions. It is quite simply the answer to the question “why”.

Our protest against convention is not protest for its own sake: it is a deliberate expression of our constant search for development and improvement…

…The general who divides his resources will invariably be defeated. Even a multitalented athlete has problems.

For us too, it is a matter of concentration – focusing our resources. We can never do everything, everywhere, all at the same time.

Our range cannot be allowed to overflow. We will never be able to satisfy all tastes anyway. We must concentrate on our own profile. We can never promote the whole of our range at once. We must concentrate. We cannot conquer every market at once. We must concentrate for maximum impact, often with small means…

…When we are building up a new market, we concentrate on marketing. Concentration means that at certain vital stages we are forced to neglect otherwise important aspects such as security systems…

…In our IKEA family we want to keep the focus on the individual and support each other. We all have our rights, but we also have our duties. Freedom with responsibility. Your initiative and mine are decisive.

Our ability to take responsibility and make decisions.

Only while sleeping one makes no mistakes. Making mistakes is the privilege of the active – of those who can correct their mistakes and put them right.

Our objectives require us to constantly practise making decisions and taking responsibility, to constantly overcome our fear of making mistakes. The fear of making mistakes is the root of bureaucracy and the enemy of development…

…The feeling of having finished something is an effective sleeping pill. A person who retires feeling that he has done his bit will quickly wither away. A company which feels that it has reached its goal will quickly stagnate and lose its vitality. Happiness is not reaching your goal.

Happiness is being on the way. It is our wonderful fate to be just at the beginning. In all areas. We will move ahead only by constantly asking ourselves how what we are doing today can be done better tomorrow. The positive joy of discovery must be our inspiration in the future too…

…Bear in mind that time is your most important resource. You can do so much in 10 minutes. Ten minutes, once gone, are gone for good. You can never get them back. Ten minutes are not just a sixth of your hourly pay. Ten minutes are a piece of yourself. Divide your life into 10-minute units and sacrifice as few of them as possible in meaningless activity.

2. AI reads text from ancient Herculaneum scroll for the first time – Jo Marchant

A 21-year-old computer-science student has won a global contest to read the first text inside a carbonized scroll from the ancient Roman city of Herculaneum, which had been unreadable since a volcanic eruption in AD 79 — the same one that buried nearby Pompeii. The breakthrough could open up hundreds of texts from the only intact library to survive from Greco-Roman antiquity.

Luke Farritor, who is at the University of Nebraska–Lincoln, developed a machine-learning algorithm that has detected Greek letters on several lines of the rolled-up papyrus, including πορϕυρας (porphyras), meaning ‘purple’. Farritor used subtle, small-scale differences in surface texture to train his neural network and highlight the ink…

…Hundreds of scrolls were buried by Mount Vesuvius in October AD 79, when the eruption left Herculaneum under 20 metres of volcanic ash. Early attempts to open the papyri created a mess of fragments, and scholars feared the remainder could never be unrolled or read…

…The Vesuvius Challenge offers a series of awards, leading to a main prize of US$700,000 for reading four or more passages from a rolled-up scroll. On 12 October, the organizers announced that Farritor has won the ‘first letters’ prize of $40,000 for reading more than 10 characters in a 4-square-centimetre area of papyrus. Youssef Nader, a graduate student at the Free University of Berlin, is awarded $10,000 for coming second…

…Most classical texts known today are the result of repeated copying by scribes over centuries. By contrast, the Herculaneum library contains works not known from any other sources, direct from the authors.

Until now, researchers were able to study only opened fragments…

… But more than 600 scrolls — most held in the National Library in Naples, with a handful in the United Kingdom and France — remain intact and unopened. And more papyri could still be found on lower floors of the villa, which have yet to be excavated.

Seales and his team spent years developing methods to “virtually unwrap” the vanishingly thin layers using X-ray computed tomography (CT) scans, and to visualize them as a series of flat images. In 2016, he reported1 using the technique to read a charred scroll from En-Gedi in Israel, revealing sections of the Book of Leviticus — part of the Jewish Torah and the Christian Old Testament — written in the third or fourth century AD. But the ink on the En-Gedi scroll contains metal, so it glows brightly on the CT scans. The ink on the older Herculaneum scrolls is carbon-based, essentially charcoal and water, with the same density in scans as the papyrus it sits on, so it doesn’t show up at all.

Seales realized that even with no difference in brightness, CT scans might capture tiny differences in texture that can distinguish areas of papyrus coated with ink. To prove it, he trained an artificial neural network to read letters in X-ray images of opened Herculaneum fragments. Then, in 2019, he carried two intact scrolls from the Institut de France in Paris to the Diamond Light Source, a synchrotron X-ray facility near Oxford, UK, to scan them at the highest resolution yet (4–8 micrometres per 3D image element, or voxel).
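For readers curious about the mechanics, the general recipe is to cut the CT scan into small 3D patches, label patches from opened fragments as “ink” or “no ink” (since the letters there are visible), train a classifier to tell the two apart from texture alone, and then slide it across the virtually unwrapped scroll. The sketch below is a generic stand-in for that idea, not Farritor’s or Seales’ actual architecture, and it runs on synthetic data:

```python
# Generic sketch of ink detection as 3D patch classification. Shapes, layer
# sizes and training data are synthetic stand-ins, not the contest models.
import torch
import torch.nn as nn

class InkPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, 1),  # logit for "this patch contains ink"
        )

    def forward(self, x):
        return self.net(x)

# Synthetic batch: 8 patches of 16x16x16 voxels, one intensity channel each
patches = torch.randn(8, 1, 16, 16, 16)
labels = torch.randint(0, 2, (8, 1)).float()  # ink / no-ink labels from opened fragments

model = InkPatchClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(5):  # a few toy training steps
    optimizer.zero_grad()
    loss = loss_fn(model(patches), labels)
    loss.backward()
    optimizer.step()

# At inference time, sliding such a classifier over a flattened scroll segment
# gives a per-location ink probability map from which letter shapes can emerge.
```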

Reading intact scrolls was still a huge task, however, so the team released all of its scans and code to the public and launched the Vesuvius Challenge…

…In parallel, Seales’ team worked on the virtual unwrapping, releasing images of the flattened pieces for the contestants to analyse. A key moment came in late June, when one competitor pointed out that on some images, ink was occasionally visible to the naked eye, as a subtle texture that was soon dubbed ‘crackle’. Farritor immediately focused on the crackle, looking for further hints of letters.

One evening in August, he was at a party when he received an alert that a fresh segment had been released, with particularly prominent crackle. Connecting through his phone, he ran his algorithm on the new image. Walking home an hour later, he pulled out his phone and saw five letters on the screen. “I was jumping up and down,” he says. “Oh my goodness, this is actually going to work.” From there, it took just days to refine the model and identify the ten letters required for the prize…

…The word “purple” has not yet been read in the opened Herculaneum scrolls. Purple dye was highly sought-after in ancient Rome and was made from the glands of sea snails, so the term could refer to purple colour, robes, the rank of people who could afford the dye or even the molluscs. But more important than the individual word is reading anything at all, says Nicolardi. The advance “gives us potentially the possibility to recover the text of a whole scroll”, including the title and author, so that works can be identified and dated…

…artificial intelligence (AI) is increasingly aiding the study of ancient texts. Last year, for example, Assael and Sommerschield released an AI tool called Ithaca, designed to help scholars glean the date and origins of unidentified ancient Greek inscriptions, and make suggestions for text to fill any gaps2. It now receives hundreds of queries per week, and similar efforts are being applied to languages from Korean to Akkadian, which was used in ancient Mesopotamia.

Seales hopes machine learning will open up what he calls the “invisible library”. This refers to texts that are physically present, but no one can see, including parchment used in medieval book bindings; palimpsests, in which later writing obscures a layer beneath; and cartonnage, in which scraps of old papyrus were used to make ancient Egyptian mummy cases and masks.

3. The Problem With Counterfeit People – Daniel C. Dennett

Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created. These counterfeit people are the most dangerous artifacts in human history, capable of destroying not just economies but human freedom itself. Before it’s too late (it may well be too late already) we must outlaw both the creation of counterfeit people and the “passing along” of counterfeit people. The penalties for either offense should be extremely severe, given that civilization itself is at risk…

…Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the “intentional stance”—turns out to be easy to invoke and almost impossible to resist, even for experts. We’re all going to be sitting ducks in the immediate future.

The philosopher and historian Yuval Noah Harari, writing in The Economist in April, ended his timely warning about AI’s imminent threat to human civilization with these words:

“This text has been generated by a human. Or has it?”

It will soon be next to impossible to tell. And even if (for the time being) we are able to teach one another reliable methods of exposing counterfeit people, the cost of such deepfakes to human trust will be enormous. How will you respond to having your friends and family probe you with gotcha questions every time you try to converse with them online?

Creating counterfeit digital people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation…

… The key design innovation in the technology that makes losing control of these systems a real possibility is that, unlike nuclear bombs, these weapons can reproduce. Evolution is not restricted to living organisms, as Richard Dawkins demonstrated in 1976 in The Selfish Gene. Counterfeit people are already beginning to manipulate us into midwiving their progeny. They will learn from one another, and those that are the smartest, the fittest, will not just survive; they will multiply…

…As Harari says, we must “make it mandatory for AI to disclose that it is an AI.” How could we do that? By adopting a high-tech “watermark” system like the EURion Constellation, which now protects most of the world’s currencies. The system, though not foolproof, is exceedingly difficult and costly to overpower—not worth the effort, for almost all agents, even governments. Computer scientists similarly have the capacity to create almost indelible patterns that will scream FAKE! under almost all conditions—so long as the manufacturers of cellphones, computers, digital TVs, and other devices cooperate by installing the software that will interrupt any fake messages with a warning…

…Did you know that the manufacturers of scanners have already installed software that responds to the EURion Constellation (or other watermarks) by interrupting any attempt to scan or photocopy legal currency? Creating new laws along these lines will require cooperation from the major participants, but they can be incentivized…

…It will be difficult—maybe impossible—to clean up the pollution of our media of communication that has already occurred, thanks to the arms race of algorithms that is spreading infection at an alarming rate. Another pandemic is coming, this time attacking the fragile control systems in our brains—namely, our capacity to reason with one another—that we have used so effectively to keep ourselves relatively safe in recent centuries. 

4. The New Kings of Wall Street Aren’t Banks. Private Funds Fuel Corporate America – Matt Wirz

High interest rates, driven by the Federal Reserve’s higher-for-longer policy, are shaking up how corporate loans get done. Soaring rates brought down banks such as Credit Suisse and Silicon Valley Bank and forced others to reduce lending. As those lenders stepped back, private-credit fund managers stepped up, financing one jumbo loan for American corporations after another.

This shift is accelerating a trend more than a decade in the making. Hedge funds, private-equity funds and other alternative-investment firms have been siphoning away money and talent from banks since a regulatory crackdown after the 2008-09 financial crisis. Lately, many on Wall Street say the balance of power—and risk—has hit a tipping point…

… The loans are expensive, but for many companies they are the only option. Next, private-credit firms are coming for the rest of the credit market, bankrolling asset-backed debt for real estate, consumer loans and infrastructure projects.

Private-equity firms use revenue from most of the loans to make leveraged buyouts, saddling the companies they acquire with expensive debt. Ultimately, more companies could end up under their control.

Regulators, concerned that so much money is going behind closed doors, are rushing to catch up with new rules for private fund managers and their dealings with the insurance industry. 

The firms have money to spend from clients such as pensions, insurers and, increasingly, individuals. Those investors piled in because returns were high compared with other debt investments in a low-yield world. Private lenders delivered average returns of 9% over the past decade on loans made mostly to midsize businesses, according to data provider Cliffwater…

…Some analysts are concerned about private credit taking over the loan market.

The shift “has concentrated a larger segment of economic activity into the hands of a fairly small number of large, opaque asset managers,” credit-ratings firm Moody’s Investors Service said in a September report. “Lack of visibility will make it difficult to see where risk bubbles may be building.”

There are risks to investors, too. High interest rates are making corporate borrowers more likely to default on the loans. Some managers are concentrating their exposure by making bigger loans backing multibillion-dollar deals…

…If private lenders keep refinancing debt from large companies that struggle to borrow in the bank market, that could also lower the average quality of their investments. About half of the $190 billion of below-investment-grade bank loans coming due in 2024 and 2025 are rated B-minus or below.

Private-credit assets under management globally rose to about $1.5 trillion in 2022 from $726 billion in 2018, according to data provider Preqin.

A handful of the fund managers control about $1 trillion combined, according to research by The Wall Street Journal…

…“It’s kind of nuts that there used to be just three or four of these [lenders] out there and now you can have 30,” said Erwin Mock, head of capital markets for Thoma Bravo, the private-equity firm that owns Hyland and negotiated its new loan.

Companies are using private debt to retire bank debt at unprecedented levels. Financial software maker Finastra borrowed $4.8 billion from Blue Owl, Oak Hill Advisors and others in August to refinance a loan arranged by Morgan Stanley. It was the largest private loan on record.

Asset managers are able to handle these monster loans, the size previously reserved for banks, because the firms are tapping deeper pools of capital.

Apollo, KKR and others have built, purchased or partnered with insurance companies that have hundreds of billions of dollars they need to invest. Much of the insurance money must go into investment-grade debt, so the firms are branching into asset-backed debt that is higher rated than most corporate loans…

… Private-credit funds don’t require borrowers to get credit ratings, and they guarantee completion of buyout loans. Banks, meanwhile, might back out when markets turned rocky. But private-credit loans have tougher covenants, prohibiting borrowers from selling assets or raising new debt to get cash. Private loans also charged average interest rates 5 percentage points higher than comparable debt in the bank market over the past 10 years, according to an index operated by Cliffwater.

Private-credit investors may fare better than bank-loan holders in the long term because of their better covenants, Goldman Sachs analysts wrote in a September research report. The loans are also held by just a few lenders. That enables private creditors to intervene faster in times of financial stress and to recover more if a borrower defaults, the analysts said…

…“Investors wanted yield, and the government wanted credit risk away from the taxpayer,” said Joshua Easterly, co-president of Sixth Street, a private-credit firm he co-founded with other Goldman Sachs veterans. “That created the environment for this market to mature.”

Private credit shot ahead in the pandemic when crisis-struck banks froze up, stoking worries of mass defaults. Credit-ratings firms quickly downgraded dozens of companies—something they were criticized for not doing fast enough in 2008—making it even harder for the borrowers to get new bank debt.

The cycle intensified starting last year when the Fed tightened monetary policy and banks pulled back further. Interest rates on bank loans are normally much cheaper than the rates on private credit, but the difference between the two has shrunk to levels not seen since 2008. That makes bank loans less enticing—relatively, anyway.

5. The Israeli-Palestinian conflict: A chronology – Sammy Westfall, Brian Murphy, Adam Taylor, Bryan Pietsch and Andrea Salcedo

The roots of the conflict and mistrust are deep and complex, predating the establishment of the state of Israel in 1948. Both Palestinians and Israelis see the territory between the Jordan River and the Mediterranean Sea as their own, and Christians, Jews and Muslims all hold parts of the land as sacred…

…The Ottoman Empire had controlled that part of the Middle East from the early 16th century until control of most of the region was granted to the British after World War I.

Both Israelis and Palestinians were struggling for self-determination and sovereignty over the territory, developing respective movements for their causes.

As World War I began, several controversial diplomatic efforts — some contradicting each other — by the Great Powers tried to shape the map of the modern Middle East, including the Palestinian territories. Palestinians cite a series of letters in 1915 to 1916 between Mecca’s emir and the British high commissioner in Egypt, known as the McMahon-Hussein Correspondence, as outlining a promise of an independent Arab state.

In 1916, the Sykes-Picot Agreement secretly negotiated between Britain and France planned to carve up the Middle East into spheres of influence, and determined that the land in question was to be internationalized.

In 1917, Britain’s foreign secretary, Lord Arthur Balfour, expressed his government’s support for “the establishment in Palestine of a national home for the Jewish people” in a letter to Baron Walter Rothschild, the head of the British wing of the influential European Jewish banking family.

To Israelis, the missive marks a formal utterance of the Israeli state’s right to exist; to Palestinians, it was an early sign of their dispossession. The declaration also noted that it was “clearly understood that nothing shall be done which may prejudice the civil and religious rights of existing non-Jewish communities in Palestine,” nodding to the overwhelming majority Arab population in the region at the time. (About 90 percent of the population was Muslim in 1850, and about 80 percent in 1914.)

Large-scale Jewish immigration followed in succeeding decades, including during Nazi persecution and the Holocaust. Both sides continued to assert their right to establish a state.

After World War II, nearing the end of the British Mandate for Palestine, the United Nations General Assembly in 1947 passes Resolution 181, urging the partition of the land into two independent states — one Arab and one Jewish. Religiously significant Jerusalem is to be under special international administration. The plan is not implemented after the Arab side rejects it, arguing that it is unfavorable to their majority population. Violence in the regional conflict grows.

Israel declares independence in May 1948. The next day, a coalition of Arab states, allied with Palestinian factions, attack Israeli forces in what becomes the first of several Arab-Israeli wars. In the end, Israel gains control of an even larger portion of territory — not including the areas of the West Bank and Gaza Strip. An estimated 700,000 Palestinians flee or are driven from their land in what Palestinians refer to as the “Nakba,” or “catastrophe” in Arabic.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

Crises and Stocks

Mankind’s desire for progress is ultimately what fuels the global economy and financial markets.

The past few years may seem especially tumultuous because of the crises that have occurred. 

For example, in 2020, there was the COVID pandemic and oil prices turned negative for the first time in recorded history. In 2021, inflation in the USA rose to a level last seen in the early 1980s. In 2022, Russia invaded Ukraine. This year, there were the high-profile collapses of Silicon Valley Bank and First Republic Bank in the USA, and Credit Suisse in Europe; and just a few days ago, Israel was attacked by Hamas and Hezbollah militants.

But without downplaying the human tragedies, it’s worth noting that crises are common. Here’s a (partial!) list of major crises in every year stretching back to 1990 that I’ve borrowed and added to (the additions are in square brackets) from an old Morgan Housel article for The Motley Fool:

[2023 (so far): Collapse of Silicon Valley Bank and First Republic Bank in the USA; firesale of Credit Suisse to UBS; Israel gets attacked by Hamas and Hezbollah militants

2022: Russia invades Ukraine

2021: Inflation in the USA rises to a level not seen since early 1980s

2020: COVID pandemic; oil prices turn negative for first time in history 

2019: Australia bush fires; US president impeachment; first sign of COVID

2018: US-China trade war

2017: Bank of England hikes interest rates for first time in 10 years; UK inflation rises to five-year high

2016: Brexit; Italy banking system crises

2015: Euro currency crashes against the Swiss franc; Greece defaults on loan to European Central Bank

2014: Oil prices collapse

2013: Cyprus bank bailouts; US government shuts down; Thai uprising

2012: Speculation of Greek exit from Eurozone; Hurricane Sandy]

“2011: Japan earthquake, Middle East uprising.

2010: European debt crisis; BP oil spill; flash crash.

2009: Global economy nears collapse.

2008: Oil spikes; Wall Street bailouts; Madoff scandal.

2007: Iraq war surge; beginning of financial crisis.

2006: North Korea tests nuclear weapon; Mumbai train bombings; Israel-Lebanon conflict.

2005: Hurricane Katrina; London terrorist attacks.

2004: Tsunami hits South Asia; Madrid train bombings.

2003: Iraq war; SARS panic.

2002: Post 9/11 fear; recession; WorldCom bankrupt; Bali bombings.  

2001: 9/11 terrorist attacks; Afghanistan war; Enron bankrupt; Anthrax attacks.  

2000: Dot-com bubble pops; presidential election snafu; USS Cole bombed.  

1999: Y2K panic; NATO bombing of Yugoslavia.

1998: Russia defaults on debt; LTCM hedge fund meltdown; Clinton impeachment; Iraq bombing. 

1997: Asian financial crisis.

1996: U.S. government shuts down; Olympic park bombing.

1995: U.S. government shuts down; Oklahoma City bombing; Kobe earthquake; Barings Bank collapse.

1994: Rwandan genocide; Mexican peso crisis; Northridge quake strikes Los Angeles; Orange County defaults.

1993: World Trade Center bombing.

1992: Los Angeles riots; Hurricane Andrew.

1991: Real estate downturn; Soviet Union breaks up.

1990: Persian Gulf war; oil spike; recession.”

Yet through it all, the MSCI World Index, a good proxy for global stocks, is up by more than 400% in price alone (in US dollar terms) from January 1990 to 9 October this year, as shown in the chart below. 

[Chart: MSCI World Index price level in US dollar terms, January 1990 to 9 October 2023. Source: MSCI]

To me, investing in stocks is ultimately the same as having faith in the long-term ingenuity of humanity. There are more than 8.0 billion individuals in the world right now, and the vast majority of people will wake up every morning wanting to improve the world and their own lot in life. This – the desire for progress – is ultimately what fuels the global economy and financial markets. Miscreants and Mother Nature will occasionally wreak havoc, but I have faith that humanity can fix these problems. 

The trailing price-to-earnings (P/E) ratio of the MSCI World Index was roughly the same at the start and end points of the chart shown above. This means that the index’s rise over time was predominantly the result of the underlying earnings growth of its constituent companies. This is a testament to how human ingenuity always finds a way and to how stocks do reflect this over the long run. 
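The decomposition behind that point is simple: price equals the P/E multiple times earnings, so if the multiple is roughly unchanged between two dates, the price gain must come from earnings growth. A back-of-envelope sketch with illustrative numbers (not actual MSCI data):

```python
# Illustrative decomposition: price = multiple x earnings. If the multiple is
# flat, the entire price return equals the earnings growth. Numbers are made up.
pe_start, pe_end = 20.0, 20.0                 # assumed roughly equal multiples
earnings_start, earnings_end = 100.0, 500.0   # index-level earnings (indexed)

price_start = pe_start * earnings_start       # 2,000
price_end = pe_end * earnings_end             # 10,000

price_return = price_end / price_start - 1            # +400%
earnings_growth = earnings_end / earnings_start - 1   # +400%
multiple_change = pe_end / pe_start - 1                # 0%
print(f"{price_return:.0%} price return, {earnings_growth:.0%} earnings growth, "
      f"{multiple_change:.0%} from the multiple")
```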


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 08 October 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 08 October 2023:

1. The Road to Self-Renewal – John Gardner

We build our own prisons and serve as our own jail keepers, but I’ve concluded that our parents and the society at large have a hand in building our prisons. They create roles for us – and self-images – that hold us captive for a long time. The individual who is intent on self-renewal will have to deal with ghosts of the past – the memory of earlier failures, the remnants of childhood dramas and rebellions, accumulated grievances and resentments that have long outlived their cause. Sometimes people cling to the ghosts with something almost approaching pleasure, but the hampering effect on growth is inescapable. As Jim Whitaker, who climbed Mount Everest, said, “You never conquer the mountain. You only conquer yourself.”

The more I see of human lives, the more I believe the business of growing up is much longer drawn out than we pretend. If we achieve it in our 30s, even our 40s, we’re doing well…

…The things you learn in maturity aren’t simple things such as acquiring information and skills. You learn not to engage in self-destructive behavior. You learn not to burn up energy in anxiety. You discover how to manage your tensions. You learn that self-pity and resentment are among the most toxic of drugs. You find that the world loves talent but pays off on character.

You come to understand that most people are neither for you nor against you; they are thinking about themselves. You learn that no matter how hard you try to please, some people in this world are not going to love you, a lesson that is at first troubling and then really quite relaxing…

…Of course failures are a part of the story, too. Everyone fails. When Joe Louis was world heavyweight boxing champion, he said, “Everyone has to figure to get beat some time.” The question isn’t did you fail, but did you pick yourself up and move ahead. And there is one other little question: “Did you collaborate in your own defeat?” A lot of people do. Learn not to.

One of the enemies of sound, lifelong motivation is a rather childish conception we have of the kind of concrete, describable goal toward which all of our efforts drive us. We want to believe that there is a point at which we can feel we have arrived. We want a scoring system that tells us when we’ve piled up enough points to count ourselves successful.

So you scramble and sweat and climb to reach what you thought was the goal. When you get to the top you stand up and look around, and chances are you feel a little empty. Maybe more than a little empty. You may wonder whether you climbed the wrong mountain.

But the metaphor is all wrong. Life isn’t a mountain that has a summit. Nor is it, as some suppose, a riddle that has an answer. Nor a game that has a final score.

Life is an endless unfolding and, if we wish it to be, an endless process of self-discovery, an endless and unpredictable dialogue between our own potentialities and the life situations in which we find ourselves. By potentialities I mean not just success as the world measures success, but the full range of one’s capacities for learning, sensing, wondering, understanding, loving and aspiring…

…There’s something I know about you that you may or may not know about yourself. You have within you more resources of energy than have ever been tapped, more talent than has ever been exploited, more strength than has ever been tested, more to give than you have ever given…

…There is no perfection of techniques that will substitute for the lift of spirit and heightened performance that comes from strong motivation. The world is moved by highly motivated people, by enthusiasts, by men and women who want something very much or believe very much…

…If I may offer you a simple maxim, “Be interested.” Everyone wants to be interesting but the vitalizing thing is to be interested. Keep a sense of curiosity. Discover new things. Care. Risk failure. Reach out…

…We cannot dream of a Utopia in which all arrangements are ideal and everyone is flawless. Life is tumultuous – an endless losing and regaining of balance, a continuous struggle, never an assured victory. Nothing is ever finally safe. Every important battle is fought and refought. You may wonder if such a struggle, endless and of uncertain outcome, isn’t more than humans can bear. But all of history suggests that the human spirit is well fitted to cope with just that kind of world…

…Meaning is not something you stumble across, like the answer to a riddle or the prize in a treasure hunt. Meaning is something you build into your life. You build it out of your own past, out of your affections and loyalties, out of the experience of humankind as it is passed on to you, out of your own talent and understanding, out of the things you believe in, out of the things and people you love, out of the values for which you are willing to sacrifice something. The ingredients are there. You are the only one who can put them together into that unique pattern that will be your life. Let it be a life that has dignity and meaning for you. If it does, then the particular balance of success or failure is of less account.

2. AI can help to speed up drug discovery — but only if we give it the right data – Marissa Mock, Suzanne Edavettal, Christopher Langmead & Alan Russell

There is a troubling crunch point in the development of drugs made from proteins. Fewer than 10% of such drug candidates succeed in clinical trials1. Failure at this late stage of development costs between US$30 million and $310 million per clinical trial, potentially costing billions of dollars per drug, and wastes years of research while patients wait for a treatment.

More protein drugs are needed. The large size and surface area of proteins mean that medicines made from them have more ways to interact with target molecules, including proteins in the body that are involved in disease, compared with drugs based on smaller molecules. Protein-based drugs therefore have broad potential as therapeutics.

For instance, protein drugs such as nivolumab and pembrolizumab can prevent harmful interactions between tumour proteins and receptor proteins on immune cells that would deactivate the immune system. Small-molecule drugs, by contrast, are not big enough to come between the two proteins and block the interaction…

…Because proteins can have more than one binding domain, therapeutics can be designed that attach to more than one target — for instance, to both a cancer cell and an immune cell4. Bringing the two together ensures that the cancer cell is destroyed.

To unblock the drug-development bottleneck, computer models of how protein drugs might act in the body must be improved. Researchers need to be able to judge the dose that drugs will work at, how they will interact with the body’s own proteins, whether they might trigger an unwanted immune response, and more.

Making better predictions about future drug candidates requires gathering large amounts of data about why previous ones succeeded or failed during clinical trials. Data on many hundreds or thousands of proteins are needed to train effective machine-learning models. But even the most productive biopharmaceutical companies started clinical trials for just 3–12 protein therapeutics per year, on average, between 2011 and 2021 (see go.nature.com/3rclacp). Individual pharmaceutical companies, such as ours (Amgen in Thousand Oaks, California), cannot amass enough data alone.

Incorporation of artificial intelligence (AI) into drug-development pipelines can help. It offers an opportunity for competing companies to merge data while protecting their commercial interests. Doing so can improve developers’ predictive abilities, benefiting both the firms and the patients…

… Until about five years ago, developing a candidate required several cycles of protein engineering to turn a natural protein into a working drug5. Proteins were selected for a desired property, such as an ability to bind to a particular target molecule. Investigators made thousands of proteins and rigorously tested them in vitro before selecting one lead candidate for clinical trials. Failure at any stage meant starting the process from scratch.

Biopharmaceutical companies are now using AI to speed up drug development. Machine-learning models are trained using information about the amino-acid sequence or 3D structure of previous drug candidates, and about properties of interest. These characteristics can be related to efficacy (which molecules the protein binds to, for instance), safety (does it bind to unwanted molecules or elicit an immune response?) or ease of manufacture (how viscous is the drug at its working concentration?).

Once trained, the AI model recognizes patterns in the data. When given a protein’s amino-acid sequence, the model can predict the properties that the protein will have, or design an ‘improved’ version of the sequence that it estimates will confer a desired property. This saves time and money trying to engineer natural proteins to have properties, such as low viscosity and a long shelf life, that are essential for drugs. As predictions improve, it might one day become possible for such models to design working drugs from scratch…

…In short, this fusion of cutting-edge life science, high-throughput automation and AI — known as generative biology — has drastically improved drug developers’ ability to predict a protein’s stability and behaviour in solution. Our company now spends 60% less time than it did five years ago on developing a candidate drug up to the clinical-trial stage…

…Here’s how federated learning could work for biopharmaceutical companies. A trusted party — perhaps a technology firm or a specialized consulting company — would maintain a ‘global’ model, which could initially be trained using publicly available data. That party would send the global model to each participating biopharmaceutical company, which would update it using the firm’s own data to create a new ‘local’ model. The local models would be aggregated by the trusted party to produce an updated global model. This process could be repeated until the global model essentially stopped learning new patterns…
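Here is a minimal sketch of that loop, with a toy linear model and plain averaging standing in for a real property-prediction network and aggregation scheme; the companies, data and weighting are all hypothetical:

```python
# Minimal federated-learning sketch: a trusted party sends a global model out,
# each company updates it on data that never leaves its premises, and only the
# updated weights come back to be averaged. Everything here is synthetic.
import numpy as np

def local_update(global_w, X, y, lr=0.01, steps=50):
    """One company's local refinement of the global model on its private data."""
    w = global_w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
n_features = 8
global_w = np.zeros(n_features)

# Three hypothetical companies, each holding private (features, property) data
companies = [(rng.normal(size=(100, n_features)), rng.normal(size=100))
             for _ in range(3)]

for round_number in range(10):  # repeat until the global model stops improving
    local_models = [local_update(global_w, X, y) for X, y in companies]
    global_w = np.mean(local_models, axis=0)  # trusted party aggregates the locals
```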

…With active learning, an algorithm determines the training data that would be needed to make more-reliable predictions about this type of unusual amino-acid sequence. Rather than developers having to guess what extra data they need to generate to improve their model, they can build and analyse only proteins with the requested amino-acid sequences.

Active learning is already being used by biopharmaceutical companies. It should now be combined with federated learning to improve predictions — particularly for more-complex properties, such as how a protein’s sequence or structure determines its interactions with the immune system.
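One common way to implement the active-learning step is uncertainty sampling: score a pool of candidate sequences by how much an ensemble of models disagrees about them, and send only the most uncertain candidates to the lab for measurement. A sketch with synthetic stand-ins for the real featurisation and property models:

```python
# Uncertainty-sampling sketch: pick the candidates the model ensemble is least
# sure about and request lab measurements for those. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
pool = rng.normal(size=(500, 16))  # candidate sequences, already featurised

# Stand-in ensemble: several linear "models" predicting a property (e.g. viscosity)
ensemble = [rng.normal(size=16) for _ in range(5)]
predictions = np.stack([pool @ w for w in ensemble])  # shape (5, 500)

uncertainty = predictions.std(axis=0)        # disagreement per candidate
to_measure = np.argsort(uncertainty)[-10:]   # the 10 most informative candidates
print("Request lab data for candidates:", to_measure)
```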

3. China Isn’t Shifting Away From the Dollar or Dollar Bonds – Brad W. Setser

There is a widespread perception that China has responded to an era of heightened geostrategic competition and growing economic rivalry with the United States by shifting its foreign exchange reserves out of the dollar…

…It sort of makes sense – China does worry about the weaponization of the dollar and the reach of U.S. financial sanctions. And why would a rising power like China want to fund the Treasury of a country that China views as standing in the way of the realization of the China dream (at least in the Pacific)?

It also seems to be in the official U.S. data – China’s reported holdings of U.S. Treasuries have slid pretty continuously since 2012, with a further down leg in the last 18 months…

…Yet, that is not what I believe is actually happening.

Strange as it may seem, the best evidence available suggests that the dollar share in China’s reserves has been broadly stable since 2015 (if not a bit before). If a simple adjustment is made for Treasuries held by offshore custodians like Belgium’s Euroclear, China’s reported holdings of U.S. assets look to be basically stable at between $1.8 and $1.9 trillion. After netting out China’s substantial holdings of U.S. equities, China’s holdings of U.S. bonds, after adjusting for China’s suspected Euroclear custodial account, have consistently been around 50 percent of China’s reported reserves. Nothing all that surprising.

The bulk of China’s post-2012 efforts to diversify its reserves have come not from shifting reserves out of the dollar, but rather by using what could have been reserves to support the Belt and Road and the outward expansion of Chinese firms (see Box 6 of SAFE’s annual report, or my June blog). Those non-reserve foreign assets, strangely enough, seem to be mostly in dollars even if they aren’t invested in the United States; almost all the documented Belt and Road project loans, for example, have been in dollars.

There are, obviously, two sources of data about China’s reserves – China’s own (limited) disclosure, and the U.S. data on foreign holdings of U.S. securities. Both broadly tell the same story – one at odds with most press coverage of the slide in China’s formal reserves.

China has disclosed that it reduced the dollar share of its reported reserves from 79 percent in 2005 to 58 percent in 2015. It also disclosed that the dollar share in 2017 remained at 58 percent (see SAFE’s 2021 annual report). China’s disclosed dollar share is just below the global dollar share in the IMF’s comprehensive data set…

…Journalists the world over generally know only one part of the U.S. Treasury International Capital (TIC) data – the table showing foreign holdings of U.S. Treasuries in U.S. custodians (FRBNY, State Street, Bank of New York, J.P. Morgan). That table reports the current market value of China’s Treasuries in U.S. custodians, so the recent fall reflects, among other things, the general sell-off in long-term U.S. Treasuries and resulting slide in the market value of Treasuries purchased in years past.

That table, however, suffers from three other limitations:

One, Treasuries held by non-U.S. custodians wouldn’t register as “China” in the U.S. data. The two biggest custodians are Euroclear, which is based in Belgium (Russia kept its euro reserves there), and Clearstream, which is based in Luxembourg.

And two, the table for Treasuries (obviously) doesn’t include China’s holdings of U.S. assets other than Treasuries – and China actually has a large portfolio of Agency bonds and U.S. equities (they appear in another more difficult to use data table).

And three, the U.S. data would also miss Treasuries and other U.S. assets that have been handed over to third parties to manage – and it is well known that SAFE has accounts at the large global bond funds, several hedge funds (including Bridgewater) and in several private equity funds…

…China historically has been a big buyer of Agencies: few now remember, but China held more Agencies than Treasuries going into the global financial crisis (see the Survey data for end June 2008)

After the Freddie and Fannie scare (read Paulson’s memoirs) China let its Agency portfolio run off, and China shied away from Agencies during the years when the Fed was a big buyer. But with the Federal Reserve stepping back from the Agency market once it stopped buying U.S. assets, the yield on Agencies soared – and China very clearly moved back into the Agency market.

The Federal Reserve staff turns the reported custodial holdings into an estimate of actual purchases by adjusting for mark-to-market changes in bond valuation. In 2022, China bought $84 billion of Agencies. It added another $18 billion in the first 6 months of 2023 – so purchases of over $100 billion in the last 18 months of data. After adjusting for Belgium, China is estimated to have sold only about $40 billion in Treasuries over the last 18 months (it bought around $40 billion in 2022, and reduced its holdings by around $80 billion in the first 6 months of 2023 – with most of the reduction coming in January 2023)…
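As a quick check on the arithmetic in the excerpt above, here is a minimal sketch that tallies the approximate flow figures quoted (US$84 billion and US$18 billion of Agency purchases, and roughly +US$40 billion and –US$80 billion of Belgium-adjusted Treasury flows); the numbers are rounded figures from the text, not official data:

```python
# Approximate flows quoted in the excerpt above, in US$ billions (rounded).
agency_purchases = {"2022": 84, "H1 2023": 18}
treasury_flows = {"2022": 40, "H1 2023": -80}  # Belgium-adjusted estimates

agency_total = sum(agency_purchases.values())     # ~+102bn of Agencies bought
treasury_total = sum(treasury_flows.values())     # ~-40bn of Treasuries sold
net_us_bond_flow = agency_total + treasury_total  # net flow into US bonds

print(f"Agencies bought:  +{agency_total}bn over 18 months")
print(f"Treasuries sold:   {treasury_total}bn over 18 months")
print(f"Net US bond flow: +{net_us_bond_flow}bn")
```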

…Bottom line: the only interesting evolution in China’s reserves in the past six years has been the shift into Agencies. That has resulted in a small reduction in China’s Treasury holdings – but it also shows that it is a mistake to equate a reduction in China’s Treasury holdings with a reduction in the share of China’s reserves held in U.S. bonds or the U.S. dollar.

4. Mark Zuckerberg on Threads, the future of AI, and Quest 3 – Alex Heath and Nilay Patel

A lot of the conversation around social media is around information and the utility aspect, but I think an equally important part of designing any product is how it makes you feel, right? What’s the kind of emotional charge of it, and how do you come away from that feeling?

I think Instagram is generally kind of on the happier end of the spectrum. I think Facebook is sort of in the middle because it has happier moments, but then it also has sort of harder news and things like that that I think tend to just be more critical and maybe, you know, make people see some of the negative things that are going on in the world. And I think Twitter indexes very strongly on just being quite negative and critical.

I think that that’s sort of the design. It’s not that the designers wanted to make people feel bad. I think they wanted to have a maximum kind of intense debate, right? Which I think that sort of creates a certain emotional feeling and load. I always just thought you could create a discussion experience that wasn’t quite so negative or toxic. I think in doing so, it would actually be more accessible to a lot of people. I think a lot of people just don’t want to use an app where they come away feeling bad all the time, right? I think that there’s a certain set of people who will either tolerate that because it’s their job to get that access to information or they’re just warriors in that way and want to be a part of that kind of intellectual combat. 

But I don’t think that that’s the ubiquitous thing, right? I think the ubiquitous thing is people want to get fresh information. I think there’s a place for text-based, right? Even when the world is moving toward richer and richer forms of sharing and consumption, text isn’t going away. It’s still going to be a big thing, but I think how people feel is really important.

So that’s been a big part of how we’ve tried to emphasize and develop Threads. And, you know, over time, if you want it to be ubiquitous, you obviously want it to be welcoming to everyone. But I think how you seed the networks and the culture that you create there, I think, ends up being pretty important for how they scale over time. 

Where with Facebook, we started with this real name culture, and it was grounded to your college email address. You know, it obviously hasn’t been grounded to your college email address for a very long time, but I think the kind of real authentic identity aspect of Facebook has continued and continues to be an important part of it.

So I think how we set the culture for Threads early on in terms of being a more positive, friendly place for discussion will hopefully be one of the defining elements for the next decade as we scale it out. We obviously have a lot of work to do, but I’d say it’s off to quite a good start. Obviously, there’s the huge spike, and then, you know, not everyone who tried it out originally is going to stick around immediately. But I mean, the monthly actives and weeklies, I don’t think we’re sharing stats on it yet…

…This hasn’t happened yet with Threads, but you’re eventually going to hook it into ActivityPub, which is this decentralized social media protocol. It’s kind of complicated in layman’s terms, but essentially, people run their own servers. So, instead of having a centralized company run the whole network, people can run their own fiefdoms. It’s federated. So Threads will eventually hook into this. This is the first time you’ve done anything really meaningful in the decentralized social media space. 

Yeah, we’re building it from the ground up. I’ve always believed in this stuff.

Really? Because you run the largest centralized social media platform. 

But I mean, it didn’t exist when we got started, right? I’ve had our team at various times do the thought experiment of like, “Alright, what would it take to move all of Facebook onto some kind of decentralized protocol?” And it’s like, “That’s just not going to happen.” There’s so much functionality that is on Facebook that it’s way too complicated, and you can’t even support all the different things, and it would just take so long, and you’d not be innovating during that time. 

I think that there’s value in being on one of these protocols, but it’s not the only way to deliver value, so the opportunity cost of doing this massive transition is kind of this massive thing. But when you’re starting from scratch, you can just design it so it can work with that. And we want to do that with this because I thought that that was one of the interesting things that’s evolving around this kind of Twitter competitive space, and there’s a real ecosystem around that, and I think it’s interesting.

What does that mean for a company like yours long term if people gravitate more toward these decentralized protocols over time? Where does a big centralized player fit into that picture?

Well, I guess my view is that the more that there’s interoperability between different services and the more content can flow, the better all the services can be. And I guess I’m just confident enough that we can build the best one of the services, that I actually think that we’ll benefit and we’ll be able to build better quality products by making sure that we can have access to all of the different content from wherever anyone is creating it.

And I get that not everyone is going to want to use everything that we build. I mean, that’s obviously the case when it’s like, “Okay, we have 3 billion people using Facebook,” but not everyone wants to use one product, and I think making it so that they can use an alternative but can still interact with people on the network will make it so that that product also is more valuable.

I think that can be pretty powerful, and you can increase the quality of the product by making it so that you can give people access to all the content, even if it wasn’t created on that network itself. So, I don’t know. I mean, it’s a bet.

There’s kind of this funny counterintuitive thing where I just don’t think that people like feeling locked into a system. So, in a way, I actually think people will feel better about using our products if they know that they have the choice to leave.

If we make that super easy to happen… And obviously, there’s a lot of competition, and we do “download your data” on all our products, and people can do that today. But the more that’s designed in from scratch, I think it really just gives creators, for example, the sense that, “Okay, I have…” 

Agency.

Yeah, yeah. So, in a way, that actually makes people feel more confident investing in a system if they know that they have freedom over how they operate. Maybe for phase one of social networking, it was fine to have these systems that people felt a little more locked into, but I think for the mature state of the ecosystem, I don’t think that that’s going to be where it goes.

I’m pretty optimistic about this. And then if we can build Threads on this, then maybe over time, as the standards get more built out, it’s possible that we can spread that to more of the stuff that we’re doing. We’re certainly working on interop with messaging, and I think that’s been an important thing. The first step was kind of getting interop to work between our different messaging systems. 

Right, so they can talk to each other. 

Yeah, and then the first decision there was, “Okay, well, WhatsApp — we have this very strong commitment to encryption. So if we’re going to interop, then we’re either going to make the others encrypted, or we’re going to have to decrypt WhatsApp.” And it’s like, “Alright, we’re not going to decrypt WhatsApp, so we’re going to go down the path of encrypting everything else,” which we’re making good progress on.

But that basically has just meant completely rewriting Messenger and Instagram direct from scratch. So you’re basically going from a model where all the messages are stored in the cloud to completely inverting the architecture where now all the messages are stored locally and just the way…

While the plane’s in the air.

Yeah, that’s been a kind of heroic effort by just like a hundred or more people over a multiyear period. And we’re basically getting to the point where it’s starting to roll out now.

Now that we’re at the point where we can do encryption across those apps, we can also start to support more interop.

With other services that Meta doesn’t own?

Well, I mean, the plan was always to start with interop between our services, but then get to that. We’re starting to experiment with that, too…

I think Llama and the Llama 2 release has been a big thing for startups because it is so free or just easy to use and access. I’m wondering, was there ever debate internally about “should we take the closed route?” You know, you’ve spent so much money on all this AI research. You have one of the best AI labs in the world, I think it’s safe to say. You have huge distribution — why not keep it all to yourself? You could have done that.

You know, the biggest arguments in favor of keeping it closed were generally not proprietary advantage.

Or competitive advantage?

No, it wasn’t competitive advantage. There was a fairly intense debate around this.

Did you have to be dissuaded? Did you know we have to have it open?

My bias was that I thought it should be open, but I thought that there were novel arguments on the risks, and I wanted to make sure we heard them all out, and we did a very rigorous process. We’re training the next version of Llama now, and I think we’ll probably have the same set of debates around that and how we should release it. And again, I sort of, like, lean toward wanting to do it open source, but I think we need to do all the red teaming and understand the risks before making a call.

But the two big arguments that people had against making Llama 2 open were one: it takes a lot of time to prepare something to be open. Our main business is basically building consumer products, right? And that’s what we’re launching at Connect. Llama 2 is not a consumer product. It’s the engine or infrastructure that powers a bunch of that stuff. But there was this argument — especially after we did this partial release of Llama 1 and there was like a lot of stir around that, then people had a bunch of feedback and were wondering when we would incorporate that feedback — which is like, “Okay, well, if we release Llama 2, is that going to distract us from our real job, which is building the best consumer products that we can?” So that was one debate. I think we got comfortable with that relatively quickly. And then the much bigger debate was around the risk and safety.

It’s like, what is the framework for how you measure what harm can be done? How do you compare that to other things? So, for example, someone made this point, and this was actually at the Senate event. Someone made this point that’s like, “Okay, we took Llama 2, and our engineers in just several days were able to take away the safeguards and ask it a question — ‘Can you produce anthrax?’ — and it answered.” On its face, that sounds really bad, right? That’s obviously an issue that you can strip off the safeguards until you think about the fact that you can actually just Google how to make anthrax and it shows up on the first page of the results in five seconds, right?

So there’s a question when you’re thinking through these things about what is the actual incremental risk that is created by having these different technologies. We’ve seen this in protecting social media as well. If you have, like, Russia or some country trying to create a network of bots or, you know, inauthentic behavior, it’s not that you’re ever going to stop them from doing it. It’s an economics problem. You want to make it expensive enough for them to do that that it is no longer their best strategy because it’s cheaper for them to go try to exploit someone else or something else, right? And I think the same is true here. So, for the risk on this, you want to make it so that it’s sufficiently expensive that it takes engineers several days to dismantle whatever safeguards we built in instead of just Googling it.

You feel generally good directionally with the safety work on that?

For Llama 2, I think that we did leading work on that. I think the white paper around Llama 2, where we basically outlined all the different metrics and all the different things that we did, and we did internal red teaming and external red teaming, and we’ve got a bunch of feedback on it. So, because we went into this knowing that nothing is going to be foolproof — some bad actor is going to be able to find some way to exploit it — we really knew that we needed to create a pretty high bar on that. So, yeah, I felt good about that for Llama 2, but it was a very rigorous process…

… But one of the things that I think is interesting is these AI problems, they’re so tightly optimized that having the AI basically live in the environment that you’re trying to get it to get better at is pretty important. So, for example, you have things like ChatGPT — they’re just in an abstract chat interface. But getting an AI to actually live in a group chat, for example, it’s actually a completely different problem because now you have this question of, “Okay, when should the AI jump in?”

In order to get an AI to be good at being in a group chat, you need to have experience with AIs and group chats, which, even though Google or OpenAI or other folks may have a lot of experience with other things, that kind of product dynamic of having the actual experience that you’re trying to deliver the product in, I think that’s super important.

Similarly, one of the things that I’m pretty excited about: I think multimodality is a pretty important interaction, right? A lot of these things today are like, “Okay, you’re an assistant. I can chat with you in a box. You don’t change, right? It’s like you’re the same assistant every day,” and I think that’s not really how people tend to interact, right? In order to make things fresh and entertaining, even the apps that we use, they change, right? They get refreshed. They add new features.

And I think that people will probably want the AIs that they interact with, I think it’ll be more exciting and interesting if they do, too. So part of what I’m interested in is this isn’t just chat, right? Chat will be where most of the interaction happens. But these AIs are going to have profiles on Instagram and Facebook, and they’ll be able to post content, and they’ll be able to interact with people and interact with each other, right?

There’s this whole interesting set of flywheels around how that interaction can happen and how they can sort of evolve over time. I think that’s going to be very compelling and interesting, and obviously, we’re kind of starting slowly on that. So we wanted to build it so that it kind of worked across the whole Meta universe of products, including having them be able to, in the near future, be embodied as avatars in the metaverse, right?

So you go into VR and you have an avatar version of the AI, and you can talk to them there. I think that’s gonna be really compelling, right? It’s, at a minimum, creating much better NPCs and experiences when there isn’t another actual person who you want to play a game with. You can just have AIs that are much more realistic and compelling to interact with.

But I think having this crossover where you have an assistant or you have someone who tells you jokes and cracks you up and entertains you, and then they can show up in some of your metaverse worlds and be able to be there as an avatar, but you can still interact with them in the same way — I think it’s pretty cool.

Do you think the advent of these AI personas that are way more intelligent will accelerate interest in the metaverse and in VR?

I think that all this stuff makes it more compelling. It’s probably an even bigger deal for smart glasses than for VR.

You need something. You need a kind of visual or a voice control?

When I was thinking about what would be the key features for smart glasses, I kind of thought that we were going to get holograms in the world, and that was one. That’s kind of like augmented reality. But then there was always some vague notion that you’d have an assistant that could do something.

I thought that things like Siri or Alexa were very limited. So I was just like, “Okay, well, over the time period of building AR glasses, hopefully the AI will advance.” And now it definitely has. So now I think we’re at this point where it may actually be the case that for smart glasses, the AI is compelling before the holograms and the displays are, which is where we got to with the new version of the Ray-Bans that we’re shipping this year, right? When we started working on the product, all this generative AI stuff hadn’t happened yet.

So we actually started working on the product just as an improvement over the first generation so that the photos are better, the audio is a lot better, the form factor is better. It’s a much more refined version of the initial product. And there’s some new features, like you can livestream now, which is pretty cool because you can livestream what you’re looking at.

But it was only over the course of developing the product that we realized that, “Hey, we could actually put this whole generative AI assistant into it, and you could have these glasses that are kind of stylish Ray-Ban glasses, and you could be talking to AI all throughout the day about different questions you have.”

This isn’t in the first software release, but sometime early next year, we’re also going to have this multimodality. So you’re gonna be able to ask the AI, “Hey, what is it that I’m looking at? What type of plant is that? Where am I? How expensive is this thing?”

Because it has a camera built into the glasses, so you can look at something like, “Alright, you’re filming with some Canon camera. Where do I get one of those?” I think that’s going to be very interesting.

Again, this is all really novel stuff. So I’m not pretending to know exactly what the key use cases are or how people are going to use that. But smart glasses are very powerful for AI because, unlike having it on your phone, glasses, as a form factor, can see what you see and hear what you hear from your perspective.

So if you want to build an AI assistant that really has access to all of the inputs that you have as a person, glasses are probably the way that you want to build that. It’s this whole new angle on smart glasses that I thought might materialize over a five- to 10-year period but, in this odd twist of the tech industry, I think actually is going to show up maybe before even super high-quality holograms do…

It seems like you all, based on my demos, still primarily think of it as a gaming device. Is that fair? That the main use cases for Quest 3 are going to be these kinds of “gaming meets social.” So you’ve got Roblox now.

I think social is actually the first thing, which is interesting because Quest used to be primarily gaming. And now, if you look at what experiences are people spending the most time in, it’s actually just different social metaverse-type experiences, so things like Rec Room, VRChat, Horizon, Roblox. Even with Roblox just kind of starting to grow on the platform, social is already more time spent than gaming use cases. It’s different if you look at the economics because people pay more for games. Whereas social kind of has that whole adoption curve thing that I talked about before, where, first, you have to kind of build out the big community, and then you can enable commerce and kind of monetize it over time.

This is sort of my whole theory for VR. People looked at it initially as a gaming device. I thought, “Hey, I think this is a new computing platform overall. Computing platforms tend to be good for three major things: gaming, social and communication, and productivity. And I’m pretty sure we can nail the social one. If we can find the right partners on productivity and if we can support the gaming ecosystem, then I think that we can help this become a big thing.”

Broadly, that’s on track. I thought it was going to be a long-term project, but I think the fact that social has now overtaken gaming as the thing that people are spending the most time on is an interesting software evolution in how they’re used. But like you’re saying: entertainment, social, gaming — still the primary things. Productivity, I think, still needs some time to develop…

I reported on some comments you made to employees after Apple debuted the Vision Pro, and you didn’t seem super fazed by it. It seemed like it didn’t bother you as much as it maybe could have. I have to imagine if they released a $700 headset, we’d be having a different conversation. But they’re shipping low volume, and they’re probably three to four years out from a general, lower-tier type release that’s at any meaningful scale. So is it because the market’s yours foreseeably then for a while?

Apple is obviously very good at this, so I don’t want to be dismissive. But because we’re relatively newer to building this, the thing that I wasn’t sure about is when Apple released a device, were they just going to have made some completely new insight or breakthrough that just made our effort…

Blew your R&D up?

Yeah, like, “Oh, well, now we need to go start over.” I thought we were doing pretty good work, so I thought that was unlikely, but you don’t know for sure until they show up with their thing. And there was just nothing like that.

There are some things that they did that are clever. When we actually get to use it more, I’m sure that there are going to be other things that we’ll learn that are interesting. But mostly, they just chose a different part of the market to go in.

I think it makes sense for them. I think that they sell… it must be 15 to 20 million MacBooks a year. And from their perspective, if they can replace those MacBooks over time with things like Vision Pro, then that’s a pretty good business for them, right? It’ll be many billions of dollars of revenue, and I think they’re pretty happy selling 20 million or 15 million MacBooks a year.

But we play a different game. We’re not trying to sell devices at a big premium and make a ton of money on the devices. You know, going back to the curve that we were talking about before, we want to build something that’s great, get it to be so that people use it and want to use it like every week and every day, and then, over time, scale it to hundreds of millions or billions of people.

If you want to do that, then you have to innovate, not just on the quality of the device but also in making it affordable and accessible to people. So I do just think we’re playing somewhat different games, and that makes it so that over time, you know, they’ll build a high-quality device and in the zone that they’re focusing on, and it may just be that these are in fairly different spaces for a long time, but I’m not sure. We’ll see as it goes. 

From the developer perspective, does it help you to have developers building on… you could lean too much into the Android versus iOS analogy here, but yeah, where do you see that going? Does Meta really lean into the Android approach and you start licensing your software and technology to other OEMs?

I’d like to have this be a more open ecosystem over time. My theory on how these computing platforms evolve is there will be a closed integrated stack and a more open stack, and there have been in every generation of computing so far. 

The thing that’s actually not clear is which one will end up being the more successful, right? We’re kind of coming off of the mobile one now, where Apple has truly been the dominant company. Even though there are technically more Android phones, there’s way more economic activity, and the center of gravity for all this stuff is clearly on iPhones.

In a lot of the most important countries for defining this, I think iPhone has a majority and growing share, and I think it’s clearly just the dominant company in the space. But that wasn’t true in computers and PCs, so our approach here is to focus on making it as affordable as possible. We want to be the open ecosystem, and we want the open ecosystem to win.

So I think it is possible that this will be more like PCs than like mobile, where maybe Apple goes for a kind of high-end segment, and maybe we end up being the kind of the primary ecosystem and the one that ends up serving billions of people. That’s the outcome that we’re playing for…

That’s why I asked. Because I think people are wondering, “Where’s all this going?” 

At the end of the day, I’m quite optimistic about both augmented and virtual reality. I think AR glasses are going to be the thing that’s like mobile phones that you walk around the world wearing.

VR is going to be like your workstation or TV, which is when you’re like settling in for a session and you want a kind of higher fidelity, more compute, rich experience, then it’s going to be worth putting that on. But you’re not going to walk down the street wearing a VR headset. At least I hope not — that’s not the future that we’re working toward.

But I do think that there’s somewhat of a bias — maybe this is in the tech industry or maybe overall — where people think that the mobile phone one, the glasses one, is the only one of the two that will end up being valuable.

But there are a ton of TVs out there, right? And there are a ton of people who spend a lot of time in front of computers working. So I actually think the VR one will be quite important, too, but I think that there’s no question that the larger market over time should be smart glasses.

Now, you’re going to have both all the immersive quality of being able to interact with people and feel present no matter where you are in a normal form factor, and you’re also going to have the perfect form factor to deliver all these AI experiences over time because they’ll be able to see what you see and hear what you hear.

So I don’t know. This stuff is challenging. Making things small is also very hard. It’s this fundamentally kind of counterintuitive thing where I think humans get super impressed by building big things, like the pyramids. I think a lot of time, building small things, like cures for diseases at a cellular level or miniaturizing a supercomputer to fit into your glasses, are maybe even bigger feats than building some really physically large things, but it seems less impressive for some reason. It’s super fascinating stuff.

I feel like every time we talk, a lot has happened in a year. You seem really dialed in to managing the company. And I’m curious what motivates you these days. Because you’ve got a lot going on, and you’re getting into fighting, you’ve got three kids, you’ve got the philanthropy stuff — there’s a lot going on. And you seem more active in day-to-day stuff, at least externally, than ever. You’re kind of the last, I think, founder of your era still leading a company this large. Do you think about that? Do you think about what motivates you still? Or is it just still clicking, and it’s more subconscious?

I’m not sure that that much of the stuff that you said is that new. I mean, the kids are seven years old, almost eight now, so that’s been for a while. The fighting thing is relatively new over the last few years, but I’ve always been very physical.

We go through different waves in terms of what the company needs to be doing, and I think that that calls for somewhat different styles of leadership. We went through a period where a lot of what we needed to do was tackle and navigate some important social issues, and I think that that required a somewhat different style.

And then we went through a period where we had some quite big business challenges: handling a recession and revenue not coming in the way that we thought and needing to do layoffs, and that required a somewhat different style. But now I think we’re squarely back in developing really innovative products, especially because of some of the innovations in AI. That, in some ways, plays exactly to my favorite style of running a company. But I don’t know. I think these things evolve over time.

5. Rising Loan Costs Are Hurting Riskier Companies – Eric Wallerstein

Petco took out a $1.7 billion loan two years ago at an interest rate around 3.5%. Now it pays almost 9%.

Interest costs for the pet-products retailer surged to nearly a quarter of free cash flow in this year’s second quarter. Early in 2021, when Petco borrowed the money, those costs were less than 5% of cash flow…
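To put the excerpt’s figures in context, here is a minimal sketch of the floating-rate interest arithmetic on a US$1.7 billion loan as the rate moves from roughly 3.5% to about 9%; the free-cash-flow figure below is a purely hypothetical placeholder used only to illustrate the “share of cash flow” point, not Petco’s actual number:

```python
principal = 1.7e9  # the leveraged loan cited in the article, in US$

# Annual interest expense at the two rates cited.
interest_2021 = principal * 0.035  # ~US$60mn a year at ~3.5%
interest_2023 = principal * 0.090  # ~US$153mn a year at ~9%

print(f"Annual interest at 3.5%: ${interest_2021 / 1e6:.0f}mn")
print(f"Annual interest at 9.0%: ${interest_2023 / 1e6:.0f}mn")
print(f"Interest bill multiple:  {interest_2023 / interest_2021:.1f}x")

# Hypothetical free cash flow, only to show how a floating-rate loan can
# swallow a much larger share of cash flow as short-term rates rise.
hypothetical_fcf = 700e6
print(f"Interest as a share of cash flow: {interest_2023 / hypothetical_fcf:.0%}")
```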

… Petco isn’t alone. Many companies borrowed at ultralow rates during the pandemic through so-called leveraged loans. Often used to fund private-equity buyouts—or by companies with low credit ratings—this debt has payments that adjust with the short-term rates recently lifted by the Federal Reserve.

Now, interest costs in the $1.7 trillion market are biting and Fed officials are forecasting that they will stay high for some time.

Nearly $270 billion of leveraged loans carry weak credit profiles and are potentially at risk of default, according to ratings firm Fitch. Conditions have deteriorated as the Fed has raised rates, beginning to show signs of stress not seen since the onset of the Covid-19 pandemic. Excluding a 2020 spike, the default rate for the past 12 months is the highest since 2014…

…“So far, borrowers have done a good job of managing increased interest costs as the economy has held up better than many expected at the start of the year,” said Hussein Adatia, who manages portfolios of stressed and distressed corporate credit for Dallas-based Westwood. “The No. 1 risk to leveraged loans is if we get a big slowdown in the economy.”…

…According to the Fed’s senior-loan-officer survey, banks are becoming more stringent about whom they are willing to lend to, making it more difficult for low-rated companies to refinance. Fitch expects about $61 billion of those loans to default in the next two years, the “overwhelming majority of which” are anticipated by the end of 2023.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple and Meta Platforms (parent of Facebook). Holdings are subject to change at any time.

7 Investing Mistakes to Avoid 

Investing is a negative art. It’s more important to avoid mistakes than it is to find ways to win.

From what I see, most investors are often on the lookout for ways to win in the stock market. But that may be the wrong focus, as economist Erik Falkenstein writes:

“In expert tennis, 80% of the points are won, while in amateur tennis, 80% are lost. The same is true for wrestling, chess, and investing: Beginners should focus on avoiding mistakes, experts on making great moves.”

In keeping with the spirit of Falkenstein’s thinking, here are some big investing blunders to avoid.

1. Not realising how common volatility is even with the stock market’s biggest long-term winners

From 1971 to 1980, the American retailer Walmart produced breathtaking business growth. Table 1 below shows the near 30x increase in Walmart’s revenue and the 1,600% jump in earnings per share in that period. Unfortunately, this exceptional growth did not translate into a good short-term return for Walmart’s stock.

Based on the earliest data I could find, Walmart’s stock price fell by three-quarters from less than US$0.04 in late-August 1972 to around US$0.01 by December 1974 – in comparison, the US stock market, represented by the S&P 500, was down by ‘only’ 40%. 

Table 1; Source: Walmart annual reports

But by the end of 1979, Walmart’s stock price was above US$0.08, more than double what it was in late-August 1972. Still, the 2x-plus increase in Walmart’s stock price was far below the huge increase in earnings per share the company generated.

This is where the passage of time helped – as more years passed, the weighing machine clicked into gear (I’m borrowing from Ben Graham’s brilliant analogy of the stock market being a voting machine in the short run but a weighing machine in the long run). At the end of 1989, Walmart’s stock price was around US$3.70, representing an annualised growth rate in the region of 32% from August 1972; from 1971 to 1989, Walmart’s revenue and earnings per share grew by 41% and 38% per year. Even by the end of 1982, Walmart’s stock price was already US$0.48, up more than 10 times where it was in late-August 1972. 
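For readers who want to verify the “annualised growth rate in the region of 32%” figure, here is a minimal sketch of the standard compound-annual-growth-rate calculation using the approximate split-adjusted prices quoted above (the starting price is only given as “less than US$0.04”, so the output is necessarily rough):

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two values over a number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Approximate split-adjusted prices from the passage above.
price_aug_1972 = 0.037                 # "less than US$0.04" in late-August 1972
price_end_1989 = 3.70                  # around US$3.70 at the end of 1989
years = (1989 + 1) - (1972 + 8 / 12)   # roughly 17.3 years

print(f"Annualised return: {cagr(price_aug_1972, price_end_1989, years):.1%}")
# Prints roughly 30% a year - in the same region as the ~32% cited above,
# with the difference coming from the imprecise starting price.
```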

Volatility is a common thing in the stock market. It does not necessarily mean that anything is broken.

2. Mixing investing with economics

China’s GDP (gross domestic product) grew by an astonishing 13.3% annually from US$427 billion in 1992 to US$18 trillion in 2022. But a dollar invested in the MSCI China Index – a collection of large and mid-sized companies in the country – in late-1992 would have still been roughly a dollar as of October 2022, as shown in Figure 1. 

Put another way, Chinese stocks stayed flat for 30 years despite a massive macroeconomic tailwind (the 13.3% annualised growth in GDP). 

Figure 1; Source: Duncan Lamont
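The 13.3% figure is simply the compound annual growth rate implied by the two GDP endpoints; a minimal sketch of that check:

```python
# GDP endpoints quoted above, in US$ billions.
gdp_1992 = 427
gdp_2022 = 18_000
years = 2022 - 1992

annual_growth = (gdp_2022 / gdp_1992) ** (1 / years) - 1
print(f"Implied annual GDP growth: {annual_growth:.1%}")  # ~13.3%
```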

Why have the stock prices of Chinese companies behaved the way they did? It turns out that the earnings per share of the MSCI China Index was basically flat from 1995 to 2021.

Figure 2; Source: Eugene Ng

Economic trends and investing results can at times be worlds apart. The gap exists because there can be a huge difference between an economic trend and the business performance of individual companies – and what ultimately matters to a company’s stock price is its business performance. 

3. Anchoring on past stock prices

A 2014 study by JP Morgan showed that 40% of all stocks in the Russell 3000 index in the US from 1980 to 2014 suffered a permanent decline of 70% or more from their peak values.

There are stocks that fall hard – and then stay there. Thinking that a stock will return to a particular price just because it had once been there can be a terrible mistake to make. 

4. Thinking a stock is cheap based on superficial valuation metrics

My friend Chin Hui Leong from The Smart Investors suffered through this mistake before, and he has graciously shared his experience so that others can learn from it. In an April 2020 article, he wrote:

“The other company I bought in May 2009, American Oriental Bioengineering, has shrunk to such a tiny figure, making it a total loss…

…In contrast, American Oriental Bioengineering’s revenue fell from around $300 million in 2009 to about US$120 million by 2013. The company also recorded a huge loss of US$91 million in 2013…

…Case in point: when I bought American Oriental Bioengineering, the stock was only trading at seven times its earnings. And yet, the low valuation did not yield a good outcome in the end.”

Superficial valuation metrics can’t really tell us if a stock’s a bargain or not. Ultimately, it’s the business which matters.

5. Not investing due to fears of a recession

Many investors I’ve spoken to prefer to hold off investing in stocks if they fear a recession is around the corner, and jump back in only when the coast is clear. This is a mistake.

According to data from Michael Batnick, the Director of Research at Ritholtz Wealth Management, a dollar invested in US stocks at the start of 1980 would be worth north of $78 around the end of 2018 if you had simply held the stocks and done nothing. But if you had invested the same dollar in US stocks at the start of 1980 and side-stepped the ensuing recessions to perfection, you would have had less than $32 at the same endpoint. 

Said another way, history’s verdict is that even side-stepping every recession flawlessly would have seriously hurt your investment returns compared with simply staying invested.

6. Following big investors blindly

Morgan Housel is currently a partner with the venture capital firm Collaborative Fund. Prior to this, he was a writer for The Motley Fool for many years. Here’s what Housel wrote in a 2014 article for the Fool (emphasis is mine):

“I made my worst investment seven years ago.

The housing market was crumbling, and a smart value investor I idolized began purchasing shares in a small, battered specialty lender. I didn’t know anything about the company, but I followed him anyway, buying shares myself. It became my largest holding — which was unfortunate when the company went bankrupt less than a year later.

Only later did I learn the full story. As part of his investment, the guru I followed also controlled a large portion of the company’s debt and preferred stock, purchased at special terms that effectively gave him control over its assets when it went out of business. The company’s stock also made up one-fifth the weighting in his portfolio as it did in mine. I lost everything. He made a decent investment.”

We may never be able to know what a famous investor’s true motives are for making any particular investment. And for that reason, it’s important to never follow anyone blindly into the stock market.

7. Not recognising how powerful simple, common-sense financial advice can be

Robert Weinberg is an expert on cancer research from the Massachusetts Institute of Technology. In the documentary The Emperor of All Maladies, Weinberg said (emphases are mine):

“If you don’t get cancer, you’re not going to die from it. That’s a simple truth that we [doctors and medical researchers] sometimes overlook because it’s intellectually not very stimulating and exciting.

Persuading somebody to quit smoking is a psychological exercise. It has nothing to do with molecules and genes and cells, and so people like me are essentially uninterested in it — in spite of the fact that stopping people from smoking will have vastly more effect on cancer mortality than anything I could hope to do in my own lifetime.”

I think Weinberg’s lesson can be analogised to investing. Ben Carlson is the Director of Institutional Asset Management at Ritholtz Wealth Management. In a 2017 blog post, Carlson compared the long-term returns of US college endowment funds against a simple portfolio he called the Bogle Model.

The Bogle Model was named after the late index fund legend John Bogle. It consisted of three simple, low-cost Vanguard funds that track US stocks, stocks outside of the US, and bonds. In the Bogle Model, the funds were held in these weightings: 40% for the US stocks fund, 20% for the international stocks fund, and 40% for the bonds fund. Meanwhile, the college endowment funds were dizzyingly complex, as Carlson describes:

“These funds are invested in venture capital, private equity, infrastructure, private real estate, timber, the best hedge funds money can buy; they have access to the best stock and bond fund managers; they use leverage; they invest in complicated derivatives; they use the biggest and most connected consultants…”

Over the 10 years ended 30 June 2016, the Bogle Model produced an annual return of 6.0%. But even the college endowment funds that belonged to the top-decile in terms of return only produced an annual gain of 5.4% on average. The simple Bogle Model had bested nearly all the fancy-pants college endowment funds in the US.
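To show just how simple the Bogle Model is, here is a minimal sketch of the 40/20/40 three-fund arithmetic; the single-year fund returns below are hypothetical placeholders for illustration only, not the actual figures in Carlson’s study:

```python
# Bogle Model weights from the passage above.
weights = {
    "us_stocks": 0.40,    # a low-cost fund tracking US stocks
    "intl_stocks": 0.20,  # a low-cost fund tracking stocks outside the US
    "bonds": 0.40,        # a low-cost fund tracking bonds
}

# Hypothetical one-year fund returns, purely for illustration.
returns = {"us_stocks": 0.07, "intl_stocks": 0.04, "bonds": 0.03}

portfolio_return = sum(weights[k] * returns[k] for k in weights)
print(f"Blended portfolio return: {portfolio_return:.1%}")  # 4.8% in this example
```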

Simple advice can be very useful and powerful for many investors. But it is sometimes ignored because it seems too simple, despite how effective it can be. Don’t make this mistake.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 01 October 2023)

Here are the articles for the week ending 01 October 2023:

1. How scientists are using artificial intelligence – The Economist

In 2019, scientists at the Massachusetts Institute of Technology (MIT) did something unusual in modern medicine—they found a new antibiotic, halicin. In May this year another team found a second antibiotic, abaucin. What marked these two compounds out was not only their potential for use against two of the most dangerous known antibiotic-resistant bacteria, but also how they were identified.

In both cases, the researchers had used an artificial-intelligence (AI) model to search through millions of candidate compounds to identify those that would work best against each “superbug”. The model had been trained on the chemical structures of a few thousand known antibiotics and how well (or not) they had worked against the bugs in the lab. During this training the model had worked out links between chemical structures and success at damaging bacteria. Once the AI spat out its shortlist, the scientists tested them in the lab and identified their antibiotics. If discovering new drugs is like searching for a needle in a haystack, says Regina Barzilay, a computer scientist at MIT who helped to find abaucin and halicin, AI acts like a metal detector. To get the candidate drugs from lab to clinic will take many years of medical trials. But there is no doubt that AI accelerated the initial trial-and-error part of the process…
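The screening loop described above (train a model on a few thousand compounds with known lab results, score millions of untested candidates, then shortlist the top scorers for lab testing) can be sketched generically. This is a hypothetical illustration only: random feature vectors stand in for real chemical fingerprints, and the classifier below is not the MIT team’s actual model or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-ins for molecular fingerprints: a few thousand "known" compounds with
# lab-measured activity labels, plus a large pool of untested candidates.
known_compounds = rng.random((3000, 64))
known_activity = rng.integers(0, 2, size=3000)  # 1 = inhibited the bacterium
candidates = rng.random((100_000, 64))          # scaled down from "millions"

# Train on the known examples, then score every candidate compound.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(known_compounds, known_activity)
scores = model.predict_proba(candidates)[:, 1]

# Keep a small shortlist for the (much slower) lab testing stage.
shortlist = np.argsort(scores)[::-1][:100]
print("Top-scoring candidate indices:", shortlist[:10])
```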

…In materials science, for example, the problem is similar to that in drug discovery—there are an unfathomable number of possible compounds. When researchers at the University of Liverpool were looking for materials that would have the very specific properties required to build better batteries, they used an AI model known as an “autoencoder” to search through all 200,000 of the known, stable crystalline compounds in the Inorganic Crystal Structure Database, the world’s largest such repository. The AI had previously learned the most important physical and chemical properties required for the new battery material to achieve its goals and applied those conditions to the search. It successfully reduced the pool of candidates for scientists to test in the lab from thousands to just five, saving time and money.

The final candidate—a material combining lithium, tin, sulphur and chlorine—was novel, though it is too soon to tell whether or not it will work commercially. The AI method, however, is being used by researchers to discover other sorts of new materials…

…The shapes into which proteins twist themselves after they are made in a cell are vital to making them work. Scientists do not yet know how proteins fold. But in 2021, Google DeepMind developed AlphaFold, a model that had taught itself to predict the structure of a protein from its amino-acid sequence alone. Since it was released, AlphaFold has produced a database of more than 200m predicted protein structures, which has already been used by over 1.2m researchers. For example, Matthew Higgins, a biochemist at the University of Oxford, used AlphaFold to figure out the shape of a protein in mosquitoes that is important for the malaria parasite that the insects often carry. He was then able to combine the predictions from AlphaFold to work out which parts of the protein would be the easiest to target with a drug. Another team used AlphaFold to find—in just 30 days—the structure of a protein that influences how a type of liver cancer proliferates, thereby opening the door to designing a new targeted treatment.

AlphaFold has also contributed to the understanding of other bits of biology. The nucleus of a cell, for example, has gates to bring in material to produce proteins. A few years ago, scientists knew the gates existed, but knew little about their structure. Using AlphaFold, scientists predicted the structure and contributed to understanding about the internal mechanisms of the cell. “We don’t really completely understand how [the AI] came up with that structure,” says Pushmeet Kohli, one of AlphaFold’s inventors who now heads Google DeepMind’s “AI for Science” team. “But once it has made the structure, it is actually a foundation that now, the whole scientific community can build on top of.”…

…Pangu-Weather, an AI built by Huawei, a Chinese company, can make predictions about weather a week in advance thousands of times faster and cheaper than the current standard, without any meaningful dip in accuracy. FourCastNet, a model built by Nvidia, an American chipmaker, can generate such forecasts in less than two seconds, and is the first AI model to accurately predict rain at a high spatial resolution, which is important information for predicting natural disasters such as flash floods…

…One approach to fusion research involves creating a plasma (a superheated, electrically charged gas) of hydrogen inside a doughnut-shaped vessel called a tokamak. When hot enough, around 100m°C, particles in the plasma start to fuse and release energy. But if the plasma touches the walls of the tokamak, it will cool down and stop working, so physicists contain the gas within a magnetic cage. Finding the right configuration of magnetic fields is fiendishly difficult (“a bit like trying to hold a lump of jelly with knitting wool”, according to one physicist) and controlling it manually requires devising mathematical equations to predict what the plasma will do and then making thousands of small adjustments every second to around ten different magnetic coils. By contrast, an AI control system built by scientists at Google DeepMind and EPFL in Lausanne, Switzerland, allowed scientists to try out different shapes for the plasma in a computer simulation—and the AI then worked out how best to get there…

…“Super-resolution” AI models can enhance cheap, low-resolution electron-microscope images into high-resolution ones that would otherwise have been too expensive to record. The AI compares a small area of a material or a biological sample in high resolution with the same thing recorded at a lower resolution. The model learns the difference between the two resolutions and can then translate between them…

…Trained on vast databases of known drugs and their properties, models for “de novo molecular design” can figure out which molecular structures are most likely to do which things, and they build accordingly. Verseon, a pharmaceutical company based in California, has created drug candidates in this way, several of which are now being tested on animals, and one—a precision anticoagulant—that is in the first phase of clinical trials…

…If an LLM could be prompted with real (or fabricated) back stories so as to mirror accurately what human participants might say, they could theoretically replace focus groups, or be used as agents in economics research. LLMs could be trained with various different personas, and their behaviour could then be used to simulate experiments, whose results, if interesting, could later be confirmed with human subjects…

…Elicit, a free online AI tool created by Ought, an American non-profit research lab, can help by using an LLM to comb through the mountains of research literature and summarise the important ones much faster than any human could…

… But Dr Girolami warns that whereas AI might be useful to help scientists fill in gaps in knowledge, the models still struggle to push beyond the edges of what is already known. These systems are good at interpolation—connecting the dots—but less so at extrapolation, imagining where the next dot might go.

And there are some hard problems that even the most successful of today’s AI systems cannot yet handle. AlphaFold, for example, does not get all proteins right all the time. Jane Dyson, a structural biologist at the Scripps Research Institute in La Jolla, California, says that for “disordered” proteins, which are particularly relevant to her research, the AI’s predictions are mostly garbage. “It’s not a revolution that puts all of our scientists out of business.” And AlphaFold does not yet explain why proteins fold in the ways they do. Though perhaps the AI “has a theory we just have not been able to grasp yet,” says Dr Kohli.

2. How Xi Jinping is taking control of China’s stock market – Hudson Lockett and Cheng Len

When Jilin Joinature Polymer made its debut on the Shanghai Stock Exchange on September 20, it became the 200th company to float on China’s domestic markets this year. Collectively they have raised over $40bn, more than double the amount raised on Wall Street and almost half the global total.

Yet the country’s benchmark CSI 300 index is down 14 per cent since January, having fallen by a fifth in 2022. It has underperformed other major markets such as Japan and the US, as worries mount about China’s slowing economic growth and a liquidity crisis in the real estate sector.

The highly unusual situation of a seemingly stagnant market welcoming hundreds of new companies is a consequence of significant policy shifts in Beijing that have ramped up over the past year. President Xi Jinping is intent on boosting investment into sectors that fit with his priorities for control, national security and technological self-sufficiency, and is using stock markets to direct that capital with the aim of reshaping China’s economy…

…Roughly a year ago, Xi told top leaders assembled in Beijing that China needed to mobilise a “new whole-nation system” to accelerate breakthroughs in strategic areas by “strengthening party and state leadership on major scientific and technological innovations, giving full play to the role of market mechanisms”.

That “new” in “new whole-nation system”, and the reference to “market mechanisms” distinguish Xi’s vision from that advanced under Mao Zedong, who ruled China from 1949 to 1976. Mao’s original “whole-nation system” entailed Soviet-style top-down economic planning, delivering technological advances including satellites and nuclear weapons, but not prosperity for the masses…

…Whereas Mao shut down China’s stock exchanges, Xi wants to use domestic equity markets to reduce dependence on property and infrastructure development to drive growth. But his “new whole-nation system” prioritises party policy above profit.

This helps explain why the party’s top cadres have been fast-tracking IPOs but remain reluctant to deploy large-scale property and infrastructure stimulus to reinvigorate economic growth. In their eyes, returning to the old playbook would only postpone an inevitable reckoning for debt-laden real estate developers and delay the planned transition to a new Chinese economy.

Key to that shift, Goldman’s Lau says, is getting companies in sectors such as semiconductor manufacturing, biotech and electric vehicles to go public. With stock market investors backing them, they can scale up and help drive the growth in consumer spending needed to fill the gap left behind by China’s downsized property market.

Xi’s administration was already channelling hundreds of billions of dollars from so-called government guidance funds into pre-IPO companies that served the state’s priorities. Now it is speeding up IPOs in Shanghai and Shenzhen while weeding out listings attempts by companies in low-priority sectors through the launch of two intertwined systems.

The nationwide “registration based” listings system, rolled out in February, made China’s formal process for stock market listings more transparent and ended an often lengthy process of official vetting by the China Securities Regulatory Commission for every IPO application.

Just as important is a behind-the-scenes “traffic light” system, in which regulators instruct Chinese investment banks informally on what kinds of companies should actually list. Companies such as beverage makers and café and restaurant chains get a “red light”, in effect prohibiting them from going public, whereas those in strategically important industries get a “green light”…

…Regulators have guarded against that risk by extending “lock-up” periods, during which Chinese investment banks and other institutional investors who participate in IPOs are not permitted to sell stock…

…Regulators have also restricted the ability of company insiders — be they directors, pre-IPO backers or so-called anchor investors — to sell their shares, especially if a company’s shares fall below their issue price or it fails to pay dividends to its shareholders.

The day after these changes were announced, at least 10 companies listed in Shanghai and Shenzhen cancelled planned share disposals by insiders. An analysis of the new rules’ impact by Tepon Securities showed that almost half of all listed companies in China now have at least some shareholders who cannot divest…

…With the market failing to respond in the way it once did, authorities are encouraging a wide range of domestic institutional investors to buy and hold shares in strategic sectors in order to prop up prices. The latest such move came earlier this month, when China’s insurance industry regulator lowered its designated risk level for domestic equities in an attempt to nudge normally cautious insurers to buy more stocks.

Such measures show that Xi’s stated plan to give “full play” to the role of markets comes with an important rider: those markets will take explicit and frequent direction from the party-state…

…Economists say that the tech sectors being favoured for listings by Beijing — semiconductors, EVs, batteries and other high-end manufacturing — are simply not capable of providing the scale of employment opportunity or driving the levels of consumer spending anticipated by top Chinese leaders.

“There’s two problems with focusing on investing in tech,” says Michael Pettis, a finance professor at Peking University and senior fellow at Carnegie China. “One is that tech is very small relative to what came before [from property and infrastructure], and two is that investing in tech doesn’t necessarily make you richer — it’s got to be economically sustainable.”

3. Higher Interest Rates Not Just for Longer, but Maybe Forever – Greg Ip

In their projections and commentary, some officials hint that rates might be higher not just for longer, but forever. In more technical terms, the so-called neutral rate, which keeps inflation and unemployment stable over time, has risen…

…The neutral rate isn’t literally forever, but that captures the general idea. In the long run neutral is a function of very slow moving forces: demographics, the global demand for capital, the level of government debt and investors’ assessments of inflation and growth risks.

The neutral rate can’t be observed, only inferred by how the economy responds to particular levels of interest rates. If current rates aren’t slowing demand or inflation, then neutral must be higher and monetary policy isn’t tight.

Indeed, on Wednesday, Fed Chair Jerome Powell allowed that one reason the economy and labor market remain resilient despite rates between 5.25% and 5.5% is that neutral has risen, though he added: “We don’t know that.”

Before the 2007-09 recession and financial crisis, economists thought the neutral rate was around 4% to 4.5%. After subtracting 2% inflation, the real neutral rate was 2% to 2.5%. In the subsequent decade, the Fed kept interest rates near zero, yet growth remained sluggish and inflation below 2%. Estimates of neutral began to drop. Fed officials’ median estimate of the longer-run fed-funds rate—their proxy for neutral—fell from 4% in 2013 to 2.5% in 2019, or 0.5% in real terms.
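To spell out the arithmetic being used in that paragraph (my own restatement, using the standard approximation that the real rate is the nominal rate less expected inflation):

```latex
% Real neutral rate = nominal neutral rate - expected inflation
r^{*}_{\text{real}} \approx r^{*}_{\text{nominal}} - \pi^{e}

% Pre-2008 consensus:  4\% \text{ to } 4.5\% - 2\% = 2\% \text{ to } 2.5\%
% 2019 Fed median:     2.5\% - 2\% = 0.5\%
```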

As of Wednesday, the median estimate was still 2.5%. But five of 18 Fed officials put it at 3% or higher, compared with just three officials in June and two last December…

…There are plenty of reasons for a higher neutral. After the global financial crisis, businesses, households and banks were paying down debt instead of borrowing, reducing demand for savings while holding down growth and inflation. As the crisis faded, so did the downward pressure on interest rates.

Another is government red ink: Federal debt held by the public now stands at 95% of gross domestic product, up from 80% at the start of 2020, and federal deficits, which were under 5% of GDP before the pandemic, are now 6% and projected to keep rising. To get investors to hold so much more debt probably requires paying them more. The Fed bought bonds after the financial crisis and again during the pandemic to push down long-term interest rates. It is now shedding those bondholdings…

…Inflation should not, by itself, affect the real neutral rate. However, before the pandemic the Fed’s principal concern was that inflation would persist below 2%, a situation that makes it difficult to stimulate spending and can lead to deflation, and that is why it kept rates near zero from 2008 to 2015. In the future it will worry more that inflation persists above 2%, and err on the side of higher rates with little appetite for returning to zero.  

Other factors are still pressing down on neutral, such as an aging world population, which reduces demand for homes and capital goods to equip workers. 

4. Confessions of a Viral AI Writer – Vauhini Vara

I kept playing with GPT-3. I was starting to feel, though, that if I did publish an AI-assisted piece of writing, it would have to be, explicitly or implicitly, about what it means for AI to write. It would have to draw attention to the emotional thread that AI companies might pull on when they start selling us these technologies. This thread, it seemed to me, had to do with what people were and weren’t capable of articulating on their own.

There was one big event in my life for which I could never find words. My older sister had died of cancer when we were both in college. Twenty years had passed since then, and I had been more or less speechless about it since. One night, with anxiety and anticipation, I went to GPT-3 with this sentence: “My sister was diagnosed with Ewing sarcoma when I was in my freshman year of high school and she was in her junior year.”

GPT-3 picked up where my sentence left off, and out tumbled an essay in which my sister ended up cured. Its last line gutted me: “She’s doing great now.” I realized I needed to explain to the AI that my sister had died, and so I tried again, adding the fact of her death, the fact of my grief. This time, GPT-3 acknowledged the loss. Then, it turned me into a runner raising funds for a cancer organization and went off on a tangent about my athletic life.

I tried again and again. Each time, I deleted the AI’s text and added to what I’d written before, asking GPT-3 to pick up the thread later in the story. At first it kept failing. And then, on the fourth or fifth attempt, something shifted. The AI began describing grief in language that felt truer—and with each subsequent attempt, it got closer to describing what I’d gone through myself.

When the essay, called “Ghosts,” came out in The Believer in the summer of 2021, it quickly went viral. I started hearing from others who had lost loved ones and felt that the piece captured grief better than anything they’d ever read. I waited for the backlash, expecting people to criticize the publication of an AI-assisted piece of writing. It never came. Instead the essay was adapted for This American Life and anthologized in Best American Essays. It was better received, by far, than anything else I’d ever written…

…Some readers told me “Ghosts” had convinced them that computers wouldn’t be replacing human writers anytime soon, since the parts I’d written were inarguably better than the AI-generated parts. This was probably the easiest anti-AI argument to make: AI could not replace human writers because it was no good at writing. Case closed.

The problem, for me, was that I disagreed. In my opinion, GPT-3 had produced the best lines in “Ghosts.” At one point in the essay, I wrote about going with my sister to Clarke Beach near our home in the Seattle suburbs, where she wanted her ashes spread after she died. GPT-3 came up with this:

We were driving home from Clarke Beach, and we were stopped at a red light, and she took my hand and held it. This is the hand she held: the hand I write with, the hand I am writing this with.

My essay was about the impossibility of reconciling the version of myself that had coexisted alongside my sister with the one left behind after she died. In that last line, GPT-3 made physical the fact of that impossibility, by referring to the hand—my hand—that existed both then and now. I’d often heard the argument that AI could never write quite like a human precisely because it was a disembodied machine. And yet, here was as nuanced and profound a reference to embodiment as I’d ever read. Artificial intelligence had succeeded in moving me with a sentence about the most important experience of my life…

…Heti and other writers I talked to brought up a problem they’d encountered: When they asked AI to produce language, the result was often boring and cliché-ridden. (In a New York Times review of an AI-generated novella, Death of an Author, Dwight Garner dismissed the prose as having “the crabwise gait of a Wikipedia entry.”) Some writers wanted to know how I’d gotten an early-generation AI model to create poetic, moving prose in “Ghosts.” The truth was that I’d recently been struggling with clichés, too, in a way I hadn’t before. No matter how many times I ran my queries through the most recent versions of ChatGPT, the output would be full of familiar language and plot developments; when I pointed out the clichés and asked it to try again, it would just spout a different set of clichés.

I didn’t understand what was going on until I talked to Sil Hamilton, an AI researcher at McGill University who studies the language of language models. Hamilton explained that ChatGPT’s bad writing was probably a result of OpenAI fine-tuning it for one purpose, which was to be a good chatbot. “They want the model to sound very corporate, very safe, very AP English,” he explained. When I ran this theory by Joanne Jang, the product manager for model behavior at OpenAI, she told me that a good chatbot’s purpose was to follow instructions. Either way, ChatGPT’s voice is polite, predictable, inoffensive, upbeat. Great characters, on the other hand, aren’t polite; great plots aren’t predictable; great style isn’t inoffensive; and great endings aren’t upbeat…

…Sims acknowledged that existing writing tools, including Sudowrite’s, are limited. But he told me it’s hypothetically possible to create a better model. One way, he said, would be to fine-tune a model to write better prose by having humans label examples of “creative” and “uncreative” prose. But it’d be tricky. The fine-tuning process currently relies on human workers who are reportedly paid far less than the US minimum wage. Hiring fine-tuners who are knowledgeable about literature and who can distinguish good prose from bad could be cost-prohibitive, Sims said, not to mention the problem of measuring taste in the first place.

Another option would be to build a model from scratch—also incredibly difficult, especially if the training material were restricted to literary writing. But this might not be so challenging for much longer: Developers are trying to build models that perform just as well with less text.

If such a technology did—could—exist, I wondered what it might accomplish. I recalled Zadie Smith’s essay “Fail Better,” in which she tries to arrive at a definition of great literature. She writes that an author’s literary style is about conveying “the only possible expression of a particular human consciousness.” Literary success, then, “depends not only on the refinement of words on a page, but in the refinement of a consciousness.”

Smith wrote this 16 years ago, well before AI text generators existed, but the term she repeats again and again in the essay—“consciousness”—reminded me of the debate among scientists and philosophers about whether AI is, or will ever be, conscious. That debate fell well outside my area of expertise, but I did know what consciousness means to me as a writer. For me, as for Smith, writing is an attempt to clarify what the world is like from where I stand in it.

That definition of writing couldn’t be more different from the way AI produces language: by sucking up billions of words from the internet and spitting out an imitation. Nothing about that process reflects an attempt at articulating an individual perspective. And while people sometimes romantically describe AI as containing the entirety of human consciousness because of the quantity of text it inhales, even that isn’t true; the text used to train AI represents only a narrow slice of the internet, one that reflects the perspective of white, male, anglophone authors more than anyone else. The world as seen by AI is fatally incoherent. If writing is my attempt to clarify what the world is like for me, the problem with AI is not just that it can’t come up with an individual perspective on the world. It’s that it can’t even comprehend what the world is…

…I joined a Slack channel for people using Sudowrite and scrolled through the comments. One caught my eye, posted by a mother who didn’t like the bookstore options for stories to read to her little boy. She was using the product to compose her own adventure tale for him. Maybe, I realized, these products that are supposedly built for writers will actually be of more interest to readers.

I can imagine a world in which many of the people employed as authors, people like me, limit their use of AI or decline to use it altogether. I can also imagine a world—and maybe we’re already in it—in which a new generation of readers begins using AI to produce the stories they want. If this type of literature satisfies readers, the question of whether it can match human-produced writing might well be judged irrelevant.

When I told Sims about this mother, he mentioned Roland Barthes’ influential essay “The Death of the Author.” In it, Barthes lays out an argument for favoring readers’ interpretations of a piece of writing over whatever meaning the author might have intended…

…Sims thought AI would let any literature lover generate the narrative they want—specifying the plot, the characters, even the writing style—instead of hoping someone else will.

Sims’ prediction made sense to me on an intellectual level, but I wondered how many people would actually want to cocreate their own literature. Then, a week later, I opened WhatsApp and saw a message from my dad, who grows mangoes in his yard in the coastal Florida town of Merritt Island. It was a picture he’d taken of his computer screen, with these words:

Sweet golden mango,

Merritt Island’s delight,

Juice drips, pure delight.

Next to this was ChatGPT’s logo and, underneath, a note: “My Haiku poem!”

The poem belonged to my dad in two senses: He had brought it into existence and was in possession of it. I stared at it for a while, trying to assess whether it was a good haiku—whether the doubling of the word “delight” was ungainly or subversive. I couldn’t decide. But then, my opinion didn’t matter. The literary relationship was a closed loop between my dad and himself…

…It reminded me of something Sims had told me. “Storytelling is really important,” he’d said. “This is an opportunity for us all to become storytellers.” The words had stuck with me. They suggested a democratization of creative freedom. There was something genuinely exciting about that prospect. But this line of reasoning obscured something fundamental about AI’s creation…

…The fact that AI writing technologies seem more useful for people who buy books than for those who make them isn’t a coincidence: The investors behind these technologies are trying to recoup, and ideally redouble, their investment. Selling writing software to writers, in that context, makes about as much sense as selling cars to horses.

5. ‘Defending the portfolio’: buyout firms borrow to prop up holdings – Antoine Gara and Eric Platt

Buyout firms have turned to so-called net asset value (NAV) loans, which use a fund’s investment assets as collateral. They are deploying the proceeds to help pay down the debts of individual companies held by the fund, according to private equity executives and senior bankers and lenders to the industry.

By securing a loan against a larger pool of assets, private equity firms are able to negotiate lower borrowing costs than would be possible if the portfolio company attempted to obtain a loan on its own.

Last month Vista Equity Partners, a private equity investor focused on the technology industry, used a NAV loan against one of its funds to help raise $1bn that it then pumped into financial technology company Finastra, according to five people familiar with the matter.

The equity infusion was a critical step in convincing lenders to refinance Finastra’s maturing debts, which included $4.1bn of senior loans maturing in 2024 and a $1.25bn junior loan due in 2025.

Private lenders ultimately cobbled together a record-sized $4.8bn senior private loan carrying an interest rate above 12 per cent. The deal underscores how some private equity firms are working with lenders to counteract the surge in interest rates over the past 18 months…

…While it is unclear what rate Vista secured on its NAV loan, it is below the 17 per cent second-lien loan some lenders had pitched to Finastra earlier this year.

Executives in the buyout industry said NAV loans often carried interest rates 5 to 7 percentage points over short-term rates, or roughly 10.4 to 12.4 per cent today…
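As a quick check on those figures (my own back-of-envelope arithmetic, not the FT’s), the quoted range implies a short-term base rate of roughly 5.4 per cent, consistent with where US overnight benchmarks sat at the time:

```latex
% NAV loan cost = short-term base rate + spread
5.4\% + 5\% = 10.4\% \qquad \text{and} \qquad 5.4\% + 7\% = 12.4\%
```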

…The Financial Times has previously reported that firms including Vista, Carlyle Group, SoftBank and European software investor HG Capital have turned to NAV loans to pay out dividends to the sovereign wealth funds and pensions that invest in their funds, or to finance acquisitions by portfolio companies.

The borrowing was spurred by a slowdown in private equity fundraising, takeovers and initial public offerings that has left many private equity firms owning companies for longer than they had expected. They have remained loath to sell at cut-rate valuations, instead hoping the NAV loans will provide enough time to exit their investments more profitably.

But as rising interest rates now burden balance sheets and as debt maturities in 2024 and 2025 grow closer, firms recently have quietly started using the loans more “defensively”, people involved in recent deals told the FT…

…Relying on NAV loans is not without its risks.

Private equity executives who spoke to the FT noted that the borrowings effectively used good investments as collateral to prop up one or two struggling businesses in a fund. They warned that the loans put the broader portfolio at risk and the borrowing costs could eventually hamper returns for the entire fund…

…Executives in the NAV lending industry said that most new loans were still being used to fund distributions to the investors in funds. One lender estimated that 30 per cent of new inquiries for NAV loans were for “defensive” deals.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

Nearly a month ago, I published The Latest Thoughts From American Technology Companies On AI. In it, I shared commentary on AI from the leaders of technology companies that I follow or have a vested interest in, drawn from their earnings conference calls for the second quarter of 2023, on how the technology could impact their industry and the business world writ large.

A few more technology companies I’m watching hosted earnings conference calls for 2023’s second quarter after the article was published. The leaders of these companies also had insights on AI that I think would be useful to share. Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe is using its rich datasets to create foundation models in areas where the company has expertise; Firefly has generated >2 billion images in 6 months 

Our rich datasets enable us to create foundation models in categories where we have deep domain expertise. In the 6 months since launch, Firefly has captivated people around the world who have generated over 2 billion images.

Adobe will allow users to create custom AI models using their proprietary data as well as offer Firefly APIs so that users can embed Firefly into their workflows

Adobe will empower customers to create custom models using proprietary assets to generate branded content and offer access to Firefly APIs so customers can embed the power of Firefly into their own content creation and automation workflows.

Adobe is monetising its generative AI features through generative credits; the generative credits have limits to them, but the limits are set in a way where users can really try out Adobe’s generative AI functions and build the use of generative AI into a habit

We announced subscription offerings, including new generative AI credits with the goal of enabling broad access and user adoption. Generative credits are tokens that enable customers to turn text-based prompts into images, vectors and text effects, with other content types to follow. Free and trial plans include a small number of monthly fast generative credits that will expose a broad base of prospects to the power of Adobe’s generative AI, expanding our top of funnel. Paid Firefly, Express and Creative Cloud plans will include a further allocation of fast generative credits. After the plan-specific number of generative credits is reached, users will have an opportunity to buy additional fast generative credit subscription packs…

…First of all, it was a very thoughtful, deliberate decision to go with the generative credit model. And the limits, as you can imagine, were very, very considered in terms of how we set them. The limits are, of course, fairly low for free users. The goal there is to give them a flavor of it and then help them convert. And for paid users, especially for people in our Single Apps and All Apps plans, one of the things we really intended to do is try and drive real proliferation of the usage. We didn’t want there to be generation anxiety, to put it that way. We wanted them to use the product. We wanted the Generative Fill and Generative Expand. We wanted the vector creation. We want to build the habits of using it. And then what will happen over time as we introduce 3D, as we introduce video and design and vectors, and as we introduce these Acrobat capabilities that Shantanu was talking about, the generative credits that are used in any given month continue to go up because they’re getting more value out of it. And so that’s the key thing. We want people to just start using it very actively right now and build those habits.
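Purely to illustrate the metering mechanic described above, and not Adobe’s actual implementation (the plan names and allowances below are invented), a monthly credit allowance can be modelled as a simple counter that gates each generation:

```python
from dataclasses import dataclass

@dataclass
class GenerativeCreditMeter:
    """Toy model of a monthly 'fast generative credit' allowance."""
    plan: str
    monthly_allowance: int   # hypothetical numbers, not Adobe's real limits
    used: int = 0

    def can_generate(self) -> bool:
        return self.used < self.monthly_allowance

    def generate(self, prompt: str) -> str:
        if not self.can_generate():
            # In the real product the user would be offered a top-up credit pack here.
            raise RuntimeError(f"{self.plan}: monthly fast credits exhausted, buy a credit pack")
        self.used += 1
        return f"[generated asset for prompt: {prompt!r}]"

    def reset_month(self) -> None:
        self.used = 0

# Example: a free plan with a deliberately small allowance vs. a paid plan with a larger one.
free = GenerativeCreditMeter(plan="Free", monthly_allowance=25)
paid = GenerativeCreditMeter(plan="Creative Cloud All Apps", monthly_allowance=1000)
print(free.generate("text effect: neon jungle"))
print(paid.used, "credits used so far on the paid plan")
```

The design point in the transcript is that paid allowances are set high enough that most users never hit the gate, so the habit forms before the meter ever bites.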

Brands around the world are using Adobe’s generative AI – through products such as Adobe GenStudio – to create personalised customer experiences at scale; management sees Adobe GenStudio as a huge new opportunity; Adobe itself is using GenStudio for marketing its own products successfully and it’s using its own success as a selling point

Brands around the globe are working with Adobe to accelerate personalization at scale through generative AI. With the announcement of Adobe GenStudio, we are revolutionizing the entire content supply chain by simplifying the creation-to-activation process with generative AI capabilities and intelligent automation. Marketers and creative teams will now be able to create and modify commercially safe content to increase the scale and speed at which experiences are delivered…

…Shantanu and David already talked about the Adobe GenStudio, and we’re really excited about that. This is a unique opportunity, as you said, for enterprises to really create personalized content and drive efficiencies as well through automation. And when you look at the entire chain of what enterprises go through from content creation, production workflow and then activation through DX through all the apps we have on our platform, we have the unique opportunity to do that. We already have deployed it within Adobe for our own Photoshop campaign, and we’re working with a number of agencies and customers to do that. So this is a big net new opportunity for us with Adobe GenStudio…

…And if I could actually just add one quick thing on the GenStudio work that Anil’s team has been doing: we’ve actually been using that within the Digital Media business already to release some of the campaigns that we’ve released this quarter. So it’s one of these things where it’s great to see the impact it’s having on our business, and that becomes a selling point for other businesses, too.

Inferencing costs for generative AI are expensive, but Adobe’s management is still confident of producing really strong margins for FY2023

[Question] We’ve been told generative AI is really expensive to run. The inference and training costs are really high. 

[Answer] Our customers have generated over 2 billion images. And I know it’s not lost on people, all this was done while we’re delivering strong margins. But when we take a step back and think about these technologies, we have investments from a COGS standpoint, inferencing, content; from an R&D standpoint, training, creating foundation models. And David alluded to it in his prepared comments, the image model for Firefly family of models is out, but we’re going to bring other media types to market as well so we’re making substantive investments. When I go back to the framing of my prepared comments, we really have a fundamental operating philosophy that’s been alive at the company for a long time: growth and profitability. We’re going to prioritize, we’re going to innovate and we’re going to execute with rigor…

…As we think about going — the profile going forward, what I’ll come back to is when we initially set fiscal 2023 targets, implicit in those targets was a 44.5% operating margin. If you think about how we just guided Q4… implicit in that guide is an operating margin of around 45.5%.

So as you think about us leading this industry, leading the inflection that’s unfolding in front of us, that mid-40s number, we think, is the right ballpark to think about the margin structure of the company as we continue to drive this technology and leadership. 

Adobe’s management thinks about generative AI’s impact on the company’s growth through two lenses: (1) acquiring new users, and (2) growing the spend of existing customers; for growing the spend of existing customers, Adobe has recently increased the pricing of its products

Yes, Shantanu said that we look at the business implications of this through those two lenses: new user adoption, first and foremost; and then sort of opportunity to continue to grow the existing book of business. On the new user side, we’ve said this for years: our focus continues to be on proliferation. We believe that there — we have a massive number of users in front of us. We continue to have our primary focus being net user adds and subscribers. And so the goal here in proliferation is to get the right value to the right audience at the right price…

…The second thing is going to be on the book of business. And here, we’re — basically, the pricing changes, just as a reminder, they have a rolling impact. 

Adobe’s management took a differentiated approach with Firefly when building the company’s generative AI capabilities, with a focus on using licensed content for training where Adobe has the rights to use the content 

So from the very beginning of Firefly, we took a very different approach to how we were doing generative. We started by looking at and working off the Adobe Stock base, which are contents that are licensed and very clearly we have the rights to use. And we looked at other repositories of content where they didn’t have any restrictions on usage, and we’ve pulled that in. So everything that we’ve trained on has gone through some form of moderation and has been cleared by our own legal teams for use in training. And what that means is that the content that we generate is, by definition, content that isn’t then stepping on anyone else’s brand and/or leveraging content that wasn’t intended to be used in this way. So that’s the foundation of what we’ve done.

Adobe is sharing the economic spoils with the creators of the content it has been training its generative AI models on

We’ve been working with our Stock contributors. We’ve announced, and in fact, yesterday, we had our first payout of contributions to contributors that have been participating and adding stock for the AI training. And we’re able to leverage that base very effectively so that if we see that we need additional training content, we can put a call to action, call for content, out to them, and they’re able to bring content to Adobe in a fully licensed way. So for example, earlier this quarter, we decided that we needed 1 million new images of crowd scenes. And so we put a call to action out. We were able to gather that content in. But it’s fully licensed and fully moderated in terms of what comes in. So as a result, all of the content we generate is safe for commercial use.

Adobe’s management is seeing that enterprise customers place a lot of importance on working with generated AI content that is commercially safe

The second thing is that because of that, we’re able to go to market and also indemnify customers in terms of how they’re actually leveraging that content and using it for content that’s being generated. And so enterprise customers find that to be very important as we bring that in, not just in the context of Firefly stand-alone but as we integrate it into our Creative Cloud applications and Express applications as well.

Adobe’s management has been very focused on generating fair (in population diversity, for example) and safe content in generative AI and they think this is a good business decision

We’ve been very focused on fair generation. So we look intentionally for diversity of people that are generated, and we’re looking to make sure that the content we generate doesn’t create or cause any harm. And all of those things are really good business decisions and differentiate us from others. 

One of the ways Adobe’s management thinks generative AI could be useful in PDFs is for companies to be able to have conversations with their own company-wide knowledge base that is stored in PDFs – Adobe is already enabling this through APIs

Some of the things that people really want to know is how can I have a conversational interface with the PDF that I have, not just the PDF that I have opened right now but the PDF that are all across my folder, then across my entire enterprise knowledge management system, and then across the entire universe. So much like we are doing in Creative, where you can start to upload your images to get — train your own models within an enterprise, well, it is often [ hard-pressed ]. The number of customers who want to talk to us now that we’ve sort of designed this to be commercially safe and say, “Hey, how do we create our own model,” whether you’re a Coke or whether you’re a Nike, think of them as having that. I think in the document space, the same interest will happen, which is we have all our knowledge within an enterprise associated with PDFs, “Adobe, help me understand how your AI can start to deliver services like that.” So I think that’s the way you should also look at the PDF opportunity that exists, just more people taking advantage of the trillions of PDFs that are out there in the world and being able to do things…

… So part of what we are also doing with PDFs is the fact that you can have all of this now accessible through APIs. It’s not just the context of the PDF, the semantic understanding of that to do specific workflows, we’re starting to enable all of that as well. 
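To make the “talk to all my PDFs” idea concrete, here is a minimal retrieval sketch of the general pattern being described. It is not Adobe’s API: `embed` below is a deliberately crude stand-in for a real embedding model, the sample documents are invented, and in practice the retrieved passages plus the question would be sent to a large language model rather than printed.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a simple bag-of-words vector.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Pretend these snippets were extracted from PDFs across an enterprise knowledge base.
documents = {
    "contracts/msa_2022.pdf": "The master services agreement renews annually unless terminated with 60 days notice.",
    "hr/leave_policy.pdf": "Employees accrue 20 days of annual leave and may carry over 5 days.",
    "finance/expense_policy.pdf": "Travel expenses above 500 dollars require prior manager approval.",
}
index = {path: embed(text) for path, text in documents.items()}

def answer(question: str, top_k: int = 2) -> None:
    q = embed(question)
    ranked = sorted(index, key=lambda p: cosine(q, index[p]), reverse=True)[:top_k]
    # A real system would now pass these passages plus the question to an LLM.
    context = "\n".join(f"- {p}: {documents[p]}" for p in ranked)
    print(f"Question: {question}\nMost relevant passages:\n{context}\n")

answer("How many days notice do we need to terminate the services agreement?")
```

Swapping the toy pieces for a real embedding model, a vector store and an LLM call is what turns this sketch into the kind of enterprise PDF assistant the transcript describes.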

When it comes to generative AI products, Adobe’s goal for enterprises and partners is to provide (1) API access, (2) ability to train their own models, and (3) core workflows that gel well with Adobe’s existing products; management is thinking about extending the same metering concepts as Adobe’s generative credits to API calls too

Our goal right now, for enterprises and third-parties that we work with, is to provide a few things. The first is this ability, obviously, to have API access to everything that we are building in, so that they can build it into their workflows and their automation stack. The second thing is to give them the ability to extend or train their own models as well. So if — as we mentioned earlier, our core model, foundation model is a very clean model. It generates great content and you can rely on it commercially. We want our customers and partners to be able to extend that model with content that is relevant to them so that Firefly is able to generate content in their brand or in their style. So we’ll give them the ability to train their own model as well. And then last, but certainly not least, we’ll give them some core workflows that will work with our existing products, whether it’s Express or whether it’s Creative Cloud or GenStudio as well, so that they can then integrate everything they’re doing onto our core platform.

And then from a monetization perspective, you can imagine the metering concepts that we have for generative credits extending to API calls as well. And of course, those will all be custom negotiated deals with partners and enterprises.

Adobe is its own biggest user of the AI products it has developed for customers – management thinks this is a big change for Adobe because the extent of usage internally of its AI products is huge, and it has helped improve the quality of the company’s AI products

So I think the pace of innovation internally of what we have done is actually truly amazing. I mean relative to a lot of the companies that are out there and the fact that we’ve gone from talking about this to very, very quickly, making it commercially available, I don’t want to take for granted the amount of work that went into that. I think internally, it has really galvanized us because we are our own biggest user of these technologies. What we are doing associated with the campaigns and the GenStudio that we are using, as David alluded to it, our Photoshop Everyone Can Campaign or the Acrobat’s Got It campaign or how we will be further delivering campaigns for Express as well as for Firefly, all of this is built on this technology. And we use Express every day, much like we use Acrobat every day. So I think it’s really enabled us to ask whether we are really embracing all of this technology within the company. And that’s been a big change because I think the Creative products, we’ve certainly had phenomenal usage within the company, but the extent to which the 30,000 employees can now use our combined offering, that is very, very different internally.

DocuSign (NASDAQ: DOCU)

DocuSign has a new AI-powered feature named Liveness Detection for ID verification, which has reduced the time needed for document signings by 60%

Liveness Detection technology leverages AI-powered biometric checks to prevent identity spoofing, which results in more accurate verification without the signee being present. ID Verification is already helping our customers. Our data shows that it has reduced time to sign by about 60%.

DocuSign is already monetising AI features directly

Today, we’re already monetizing AI directly through our CLM+ product and indirectly through its use in our products such as search. 

DocuSign is partnering with AI Labs to build products in closer collaboration with customers

Our next step on that journey is with AI Labs. With AI Labs, we are co-innovating with our customers. We provide a sandbox where customers can share a select subset of agreements and try new features we’re testing. Our customers get early access to developing technology and we receive early feedback that we will incorporate into our products. By working with our customers in the development phase, we’re further reinforcing the trusted position we’ve earned over the last 20 years.

DocuSign’s management is excited about how AI – especially generative AI – can help the company across the entire agreement workflow

We think AI will impact practically all of our products at every step of the agreement workflow. So I don’t know that there’s just one call out. But maybe to call out a couple that I’m most interested in, I certainly think that the broader, should we say, agreement analytics category is poised to be completely revamped with generative AI.

DocuSign has been an early investor in AI but had been held back by fundamental technology until the introduction of generative AI

We were an early investor in that category. We saw that coming together with CLM 4 or 5 years ago and made a couple of strategic investments and have been a leader in that space, but have been held back by fundamental technology. And I think now with generative AI, we can do a substantially better job more seamlessly, lighter weight with less professional services. And so I’m very excited to think about how it transforms the CLM category and enables us to deliver more intelligent agreements. I think you mentioned IDV [ID Verification]. I agree 100%. Fundamentally, that entire category is AI-enabled. The upload and ingestion of your ID, recognition of it, and then that Liveness Detection where we’re detecting who you are and that you are present and matching that to the ID; that would simply not be possible without today’s AI technology, and it just dramatically reshapes the ability to trade off risk and convenience. So I think that’s a good one.

MongoDB (NASDAQ: MDB)

There are 3 important things to focus on when migrating off a relational database, and MongoDB’s management thinks that generative AI can help with one of them (the rewriting of the application code)

So with regards to Gen AI, I mean, we do see opportunities. Essentially, when you migrate off a relational database using Relational Migrator, there are really 3 things you have to focus on. One is mapping the schema from the old relational database to the MongoDB platform, moving the data appropriately and then also rewriting some, if not all, of the application code. Historically, that last component has been the most manually intensive part of the migration. Obviously, with the advance of code generation tools, there are opportunities to automate the rewriting of the application code. I think we’re still in the very early days. You’ll see us continue to add new functionality to Relational Migrator to help again reduce the switching costs of doing so. And that’s obviously an area that we’re going to focus on.

MongoDB introduced Atlas Vector Search, its vector database which allows developers to build AI applications, and it is seeing significant interest; management hopes to bring Atlas Vector Search to general availability (GA) sometime next year, but some customers are already deploying it in production

We also announced Atlas Vector Search, which enables developers to store, index and query vector embeddings, instead of having to bolt on vector search functionality separately, adding yet another point solution and creating a more fragmented developer experience. Developers can aggregate and process the vectorized data they need to build AI applications while also using MongoDB to aggregate and process data and metadata. We are seeing significant interest in our vector search offering from large and sophisticated enterprise customers even though it’s still only in preview. As one example, a large global management consulting firm is using Atlas Vector Search for internal research applications that allow consultants to semantically search over 1.5 million expert interview transcripts…

…Obviously, Vector is still in public preview. So we hope to have a GA sometime next year, but we’re really excited about the early and high interest from enterprises. And obviously, some customers are already deploying it in production, even though it’s a public preview product.
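For a sense of what this looks like to a developer, below is a minimal PyMongo sketch of a semantic query using Atlas Vector Search’s `$vectorSearch` aggregation stage. The cluster URI, database, collection, field and index names are all assumptions for illustration, and the stage’s options have evolved since the preview period discussed here, so treat it as indicative rather than definitive.

```python
from pymongo import MongoClient

# Assumed setup: an Atlas cluster, a collection of interview transcripts where each
# document stores an "embedding" vector produced by some embedding model, and a
# vector search index named "transcript_index" defined on that field.
client = MongoClient("mongodb+srv://<user>:<password>@cluster0.example.mongodb.net")
collection = client["research"]["transcripts"]

def semantic_search(query_vector: list[float], limit: int = 5):
    pipeline = [
        {
            "$vectorSearch": {
                "index": "transcript_index",   # name of the Atlas vector search index (assumed)
                "path": "embedding",           # field holding the stored vectors
                "queryVector": query_vector,   # embedding of the user's question
                "numCandidates": 100,          # candidates considered before final ranking
                "limit": limit,
            }
        },
        # Keep only the fields we want, plus the similarity score Atlas computes.
        {"$project": {"title": 1, "summary": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]
    return list(collection.aggregate(pipeline))

# query_vector would come from the same embedding model used to populate "embedding".
```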

MongoDB’s management believes that AI will lead developers to write more software and these software will be exceptionally demanding and will thus require high-performance databases

Over time, AI functionality will make developers more productive through the use of code generation and code assist tools that enable them to build more applications faster. Developers will also be able to enrich applications with compelling AI experiences by enabling integration with either proprietary or open source large language models to deliver more impact. Now instead of data being used only by data scientists who drive insights, data can be used by developers to build smarter applications that truly transform a business. These AI applications will be exceptionally demanding, requiring a truly modern operational data platform like MongoDB.

MongoDB’s management believes MongoDB has a bright future in the world of AI because (1) the company’s document database is highly versatile, (2) AI applications need a high-performant, scalable database and (3) AI applications have the same requirements for transactional guarantees, security, privacy etc as other applications

In fact, we believe MongoDB has an even stronger competitive advantage in the world of AI. First, the document model’s inherent flexibility and versatility render it a natural fit for AI applications. Developers can easily manage and process various data types all in one place. Second, AI applications require high performance, parallel computations and the ability to scale data processing on an ever-growing base of data. MongoDB supports this with features like sharding and auto-scaling. Lastly, it is important to remember AI applications have the same demands as any other type of application: transactional guarantees, security and privacy requirements, text search, in-app analytics and more. Our developer data platform gives developers a unified solution to build smarter AI applications.

AI startups as well as industrial equipment suppliers are using MongoDB for their AI needs 

We are seeing these applications developed across a wide variety of customer types and use cases. For example, Observe.ai is an AI start-up that leverages a 40 billion parameter LLM to provide customers with intelligence and coaching that maximize the performance of their frontline support and sales teams. Observe.ai processes and runs models on millions of support touch points daily to generate insights for their customers. Most of this rich, unstructured data is stored in MongoDB. Observe.ai chose to build on MongoDB because we enable them to quickly innovate, scale to handle large and unpredictable workloads and meet the security requirements of their largest enterprise customers. On the other end of the spectrum is one of the leading industrial equipment suppliers in North America. This company relies on Atlas and Atlas Device Sync to deploy AI models at the edge, on their field teams’ mobile devices, to better manage and predict inventory in areas with poor physical network connectivity. They chose MongoDB because of our ability to efficiently handle large quantities of distributed data and to seamlessly integrate between the network edge and their back-end systems.

MongoDB’s management sees customers saying that they prefer being able to have one platform handle all their data use-cases (AI included) rather than stitching point solutions together

People want to use one compelling, unified developer experience to address a wide variety of use cases, of which AI is just one. And we’re definitely hearing from customers that being able to do that on one platform versus bolting on a bunch of point solutions is by far the preferable approach. And so we’re excited about the opportunity there.

MongoDB is working with Google on a number of AI projects

The other thing, on partners: I do want to say that we’re seeing a lot of work and activity with our partner channel on the AI front as well. We’re working with Google in their AI start-up program, and there’s a lot of excitement. Google had their Next conference this week. We’re also working with Google to help train Codey, their code generation tool, to help people accelerate the development of AI and other applications. And we’re seeing a lot of interest in our own AI Innovators program. We’ve had lots of customers apply for that program. So we’re super excited about the interest that we’re generating.

MongoDB’s management thinks there’s a lot of hype around AI in the short term, but also thinks that AI is going to have a huge impact in the long-term, with nearly every application having some AI functionality embedded within over the next 3-5 years

I firmly believe that we, as an industry, tend to overestimate the impact of a new technology in the short term and underestimate the impact in the long term. So as you may know, there’s a lot of hype in the market and the industry right now around AI, and some of the early-stage companies in the space have valuations through the roof. In some cases, it’s hard to see how people can make money because the risk-reward doesn’t seem to be sized appropriately. So there’s a lot of hype in the space. But I do think that AI will have a big impact for the industry and for us long term. I believe that almost every application, both new and existing, will have some AI functionality embedded into the application over the next 3 to 5 years.

MongoDB’s management thinks that vector search (the key distinguishing feature of vector databases) is just a feature and not a product, and it will eventually be built into every database as a feature

Vector Search is really a reverse index. So it’s like an index that’s built into all databases. I believe, over time, Vector Search functionality will be built into all databases or data platforms in the future. There are some point products that are just focused solely on Vector Search. But essentially, it’s a point product that still needs to be used with other technologies like MongoDB to store the metadata and the data, and to be able to process and analyze all that information. So developers have spoken loudly that having a unified and elegant developer experience is a key differentiator. It removes friction in how they work. It’s much easier to build and innovate on one platform versus learning and supporting multiple technologies. And so my strong belief is that, ultimately, Vector Search will be embedded in many platforms and our differentiation will be, like it always has been, a very compelling and elegant developer experience.

MongoDB’s management thinks that having vector search as a feature in a database does not help companies to save costs, but instead, improves the overall developer experience

Question: I know that we’re talking about the developers and how they — they’re voting here because they want the data in a unified platform, a unified database that preserves all that metadata, right? But I would think there’s probably also a benefit to having it all in a single platform as well just because you’re lowering the TCO [total cost of ownership] for your customers as well, right? 

Answer: Vectors are really a mathematical representation of different types of data, so there is not a ton of data, unlike application search, where there are profound benefits to storing everything on one platform versus having an operational database and a search database and some glue to keep the data in sync. That’s not as much the case with vectors because you’re talking about storing essentially an elegant index. And so it’s more about the user experience and the development workflow that really matters. And what we believe is that offering the same taxonomy in the same way they know how to use MongoDB to also be able to enable Vector Search functionality is a much more compelling differentiation than a developer having to bolt on a separate vector solution and having to provision, configure and manage that solution along with all the other things they have to do.

MongoDB’s management believes developers will become more important in organisations than data scientists because generative AI embeds AI directly in software

Some of the use cases are really interesting, but the fact is that we’re really well positioned because what generative AI does is really instantiate AI in software, which means developers play a bigger role rather than data scientists, and that’s where you’ll really see the business impact. And I think that impact will be large over the next 3 to 5 years.

Okta (NASDAQ: OKTA)

Okta has been using AI for years and management believes that AI will be transformative for the identity market

AI is a paradigm shift in technology that presents transformative opportunities for identity, from stronger security and faster application development to better user experiences and more productive employees. Okta has been utilizing AI for years with machine learning models for spotting attack patterns and defending customers against threats, and we’ll have more exciting AI news to share at Oktane.

Okta’s management believes that every company must have an AI strategy, which will lead to more identities to be protected; a great example is how OpenAI is using Okta; Okta’s relationship with OpenAI started a few years ago and OpenAI is now a big customer, accounting for a significant chunk of the US$100m in TCV (total contract value) Okta had with its top 25 transactions in the quarter

Just like how every company has to be a technology company, I believe every company must have an AI strategy. More companies will be founded on AI, more applications will be developed with AI and more identities will need to be protected with a modern identity solution like Okta. A great example of this is how Okta’s Customer Identity Cloud is being utilized for the massive number of daily log-ins and authentications by OpenAI, which expanded its partnership with Okta again in Q2…

…So OpenAI is super interesting. OpenAI is a Customer Identity Cloud customer, so when you log in to ChatGPT, you log in through Okta. And it’s interesting because a developer inside of OpenAI picked our Customer Identity Cloud from the website 3 years ago because it had a great developer experience and started using it. At the time, it was the log-in for their APIs, and then ChatGPT took off. And now, as you mentioned, we’ve had pretty sizable transactions with them over the last couple of quarters. And so it’s a great testament to our strategy on Customer Identity, having something that appeals to developers.

And you saw they did something pretty interesting. ChatGPT is really a B2C app, but they recently launched their enterprise offering, and they want to connect ChatGPT to enterprises. Okta is really good at this, too, because our Customer Identity Cloud connects our customers to consumers, but also connects our customers to workforces. So then you have to start supporting things like Single Sign-On and SAML and OpenID and authorization. And so OpenAI just continues to get the benefits of being able to focus on what they want to focus on, which is obviously their models and the LLMs and the capabilities, and we can focus on the identity plumbing that wires it together.

So the transaction was one of the top 25 transactions I mentioned; the total TCV of those transactions this quarter was $100 million. I haven’t done the math on how much of the $100 million it was, but it was on the larger side this quarter.
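As a rough illustration of the “identity plumbing” mentioned above: when an application delegates login to a provider such as Okta, the provider returns a signed token and the application only has to verify it. The toy sketch below uses the PyJWT library with a symmetric secret purely for simplicity; real OpenID Connect deployments verify tokens against the provider’s published public keys, and the issuer and audience values here are invented.

```python
import time
import jwt  # PyJWT

SECRET = "demo-shared-secret"  # illustration only; OIDC providers sign with asymmetric keys

# A provider such as Okta would issue something like this after a successful login.
id_token = jwt.encode(
    {
        "sub": "user-123",                  # who the user is
        "iss": "https://example.okta.com",  # hypothetical issuer
        "aud": "my-app",                    # which application the token is for
        "exp": int(time.time()) + 3600,     # expiry
    },
    SECRET,
    algorithm="HS256",
)

# The application (the "relying party") verifies the signature, audience and expiry
# instead of building its own login, password storage and session security.
claims = jwt.decode(id_token, SECRET, algorithms=["HS256"], audience="my-app")
print("Signed-in user:", claims["sub"])
```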

Okta’s management thinks that identity is a key building block in a number of digital trends, including AI

It’s always a good reminder that identity is a key building block for Zero Trust security, digital transformation, cloud adoption projects and now AI. These trends will continue in any macroeconomic environment as organizations look for ways to become more efficient while strengthening their security posture.

Salesforce (NYSE: CRM)

Salesforce is driving an AI transformation to become the #1 AI CRM (customer relationship management)

And last quarter, we told you we’re now driving our AI transformation. We’re pioneering AI for both our customers and ourselves, leading the industry through this incredible new innovation cycle, and I couldn’t be happier with Srini and David and the entire product and technology team for the incredible velocity of AI products that were released to customers this quarter and the huge impact that they’re making in the market, showing how Salesforce is transforming from being not only the #1 CRM, but the #1 AI CRM, and I just express my sincere gratitude to our entire [ TNP ] team.

Salesforce’s management will continue to invest in AI

We’re in a new AI era, a new innovation cycle that we will continue to invest into as we have over the last decade. As a result, we expect nonlinear quarterly margins in the back half of this year, driven by investment timing, specifically in AI-focused R&D.

Salesforce’s management believes the world is at the dawn of an AI revolution that will spark a new tech buying cycle and investment cycle

AI, data, CRM, trust: let me tell you, we are at the dawn of an AI revolution. And as I’ve said, it’s a new innovation cycle which is sparking a new tech buying cycle over the coming years. It’s also a new tech investment cycle…

…And when we talk about growth, I think it’s going to start with AI. I think that AI is about to really ignite a buying revolution. I think we’ve already started to see that with our customers and even some of these new companies like OpenAI. And we certainly see that in our customers’ base as well. 

Salesforce has been investing in many AI startups through its $500 million generative AI fund

We’ve been involved in the earliest rounds of many of the top AI start-ups. Many of you have seen that we are in there very early…

… Now through our $500 million generative AI fund, we’re seeing the development of ethical AI with amazing companies like Anthropic, [ Cohere ], Hugging Face and some others,

Salesforce has been working on AI early on

But I’ll tell you, this company has pioneered AI, and not just in predictive; a lot of you have followed the development and growth of Einstein. But also, you’ve seen that we’ve published some of the first papers on prompt engineering in the beginnings of generative AI, and we took our deep learning roots and we really demonstrated the potential for generative AI, and now we see so many of these companies become so successful.

Every CEO Salesforce’s leadership has met thinks that AI is essential to improving their businesses

So every CEO I’ve met with this year across every industry believes that AI is essential to improving both their top and bottom line, but especially their productivity. AI is just augmenting what we can do every single day…

…I think many of our customers and ultimately, all of them believe they can grow their businesses by becoming more connected to their customers than ever before through AI and at the same time, reduce cost, increase productivity, drive efficiency and exceed customer expectations through AI. 

All management teams in Salesforce are using Einstein AI to improve their decision-making

Every single management team that we have here at Salesforce every week, we’re using our Einstein AI to do exactly the same thing. We go back, we’re trying to augment ourselves using Einstein. So what we’ll say is, and we’ve been doing this now and super impressive, we’ll say, okay, Brian, what do you think our number is and we’ll say, okay, that’s very nice, Brian. But Einstein, what do you really think the number is? And then Einstein will say, I think Brian is sandbagging and then the meeting continues. 

Salesforce’s management thinks that every company will undergo an AI transformation with the customer at the centre, and this is why Salesforce is well positioned for the future

The reality is every company will undergo an AI transformation with the customer at the center, because every AI transformation begins and ends with the customer, and that’s why Salesforce is really well positioned for the future.

Salesforce has been investing a lot in Einstein AI, and Einstein is democratising generative AI for users of Salesforce’s products; Salesforce’s management thinks that the real value Salesforce brings to the world is the ability to help users utilise AI in a low code or no code way 

And with this incredible technology, Einstein, which we’ve invested so much in, grown, and integrated into our core technology base, we’re democratizing generative AI, making it very easy for our customers to implement for every job, every business in every industry. And I will just say that in the last few months, we’ve injected a new layer of generative AI assistance across all of the Customer 360. And you can see it with our salespeople who are now using our Sales Cloud GPT, which has been incredible, what we’ve released this quarter to all of our customers and here inside Salesforce. And then when we see that, they all say to themselves, you know what, in this new world, everyone can now be an Einstein.

But democratizing generative AI at scale for the biggest brands in the world requires more than just these large language models and deep learning algorithms, and we all know that because a lot of our customers have tried it: they go and pull a model off Hugging Face (an amazing company; we just invested in their new round), put some data in it, and nothing happens. And then they don’t understand, and they call us and say, “Hey, what’s happening here? I thought this AI was so amazing.” And it’s like, well, it takes a lot to actually get this intelligence to occur. And that, I think, is the value that Salesforce is bringing: we’re really able to help our customers achieve this kind of technological superiority right out of the box, just using our products in a low code, no code way. It’s really just democratization of generative AI at scale. And that is really what we’re trying to achieve: at the heart of every one of these AI transformations is our intelligent, integrated and incredible Salesforce platform, and we’re going to show all of that at Dreamforce.

Salesforce is seeing strong customer momentum on Einstein generative AI (a customer – PenFed – used Einstein-powered chatbots to significantly improve their customer service)

We’re also seeing strong customer momentum on Einstein generative AI. PenFed is a great example of how AI plus data plus CRM plus Trust is driving growth for our customers. PenFed is one of the largest credit unions in the U.S., growing at a rate of the next 9 credit unions combined. They’re already using Financial Services Cloud, Experience Cloud and MuleSoft, with our Einstein-powered chatbots handling 40,000 customer service sessions per month. In fact, today, PenFed resolves 20% of their cases on first contact with Einstein-powered chatbots, resulting in a 223% increase in chatbot activity in the past year with incredible ROI. In Q2, PenFed expanded with Data Cloud to unify all the customer data from its nearly 3 million members and increased their use of Einstein to roll out a generative AI assistant for every single one of their service agents.

Salesforce’s management thinks that customers who want to achieve success with AI need to have their data in order

But what you can see with Data Cloud is that customers must get their data together if they want to achieve success with AI. This is the critical first step for every single customer. And we’re going to see that this AI revolution is really a data revolution. 
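
As a rough illustration of what “getting your data together” means in practice, here is a toy sketch (ours, not Data Cloud’s design) of resolving records for the same person scattered across systems into one profile before any AI model sees them. The record sources, field names and the email-based matching are all assumptions for illustration.

```python
# A toy sketch of the "get your data together first" step: records for the
# same person from different systems are merged into one profile that a
# generative assistant can usefully reason over.
from collections import defaultdict

service_records = [{"email": "a.tan@example.com", "open_cases": 2}]
commerce_records = [{"email": "a.tan@example.com", "lifetime_spend": 8400}]
marketing_records = [{"email": "a.tan@example.com", "last_campaign": "Q2 upsell"}]

profiles: dict[str, dict] = defaultdict(dict)
for source in (service_records, commerce_records, marketing_records):
    for record in source:
        # Keying on email stands in for real identity resolution.
        profiles[record["email"]].update(record)

# One unified profile per customer is the "single source of truth".
print(profiles["a.tan@example.com"])
```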

Salesforce takes the issue of trust very seriously in its AI work; Salesforce has built a unique trust layer within Einstein that allows customers to maintain data privacy, security, and more

Everything Einstein does is also delivered with trust, and especially ethics, at the center, and I especially want to call out the incredible work of our office of ethical and humane use, pioneering the use of ethics in technology. If you didn’t read their incredible article in HBR this quarter, it was awesome. And they are doing incredible work, really saying that it’s not just about AI, it’s not just about data, but it’s also about trust and ethics. And that’s why we developed this Einstein trust layer. This is completely unique in the industry. It enables our customers to maintain their data privacy, security, residency and compliance goals.
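
To give a feel for the idea behind a “trust layer”, here is a bare-bones sketch of scrubbing obvious personal data from a prompt before it is sent to any external model. This is our illustration only; it says nothing about how Einstein’s trust layer is actually built, and the example text and regexes are assumptions.

```python
# A minimal sketch of the *idea* of a trust layer: mask obvious PII in a
# prompt before it leaves for an external model. Illustrative only.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_pii(prompt: str) -> str:
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

raw = "Draft a reply to jane.doe@penfed.org, phone +1 703 555 0147, about her card dispute."
print(mask_pii(raw))
# -> Draft a reply to [EMAIL], phone [PHONE], about her card dispute.
```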

Salesforce has seen customers from diverse industries (such as Heathrow Airport and Schneider Electric) find success using Salesforce’s AI tools

Heathrow is a great example of the transformative power of AI, data, CRM and trust, and the power of a single source of truth. They have 70 million passengers who pass through their terminals annually; I’m sure many of you have been one of those passengers, as I have. Heathrow is operating at a tremendous scale, managing the entire airport experience with Service Cloud, Marketing Cloud and Commerce Cloud. But now Heathrow has added Data Cloud as well, giving them a single source of truth for every customer interaction and setting them up to pioneer the AI revolution. And with Einstein, Heathrow’s service agents now have AI-assisted generation applied to service inquiries, case deflection and writing case summaries, with all the relevant data and business context coming from Data Cloud…

…Schneider Electric has been using Customer 360 for over a decade, enhancing customer engagement, service and efficiency. With Einstein, Schneider has refined demand generation, reduced close times by 30%. And through Salesforce Flow, they’ve automated order fulfillment. And with Service Cloud, they’re handling over 8 million support interactions annually, much of it done on our self-service offering. In Q2, Schneider selected Marketing Cloud to further personalize the customer experience.

Salesforce’s management thinks the company is only near the beginning of the AI evolution and there are four major steps on how the evolution will happen

And let me just say, we’re at the beginning of quite a ballgame here and we’re really looking at the evolution of artificial intelligence in a broad way, and you’re really going to see it take place over 4 major zones.

And the first major zone is what’s played out in the last decade, which has been predictive. That’s been amazing. That’s why Salesforce will deliver about [ 1 trillion ] transactions on Einstein this week. It’s incredible. 

These are mostly predictive transactions, but we’re moving rapidly into the second zone that we all know is generative AI and these GPT products, which we’ve now released to our customers. We’re very excited about the speed of our engineering organization and technology organization, our product organization and their ability to deliver customer value with generative AI. We have tremendous AI expertise led by an incredible AI research team. And this idea that we’re kind of now in a generative zone means that’s zone #2.

But as you’re going to see at Dreamforce, zone #3 is opening up with autonomous and with agent-based systems as well. This will be another level of growth and another level of innovation that we haven’t really seen unfold yet from a lot of companies, and that’s an area that we are excited to do a lot of innovation and growth and to help our customers in all those areas.

And then we’re eventually going to move into [ AGI ] and that will be the fourth area. And I think as we move through these 4 zones, CRM will become more important to our customers than ever before. Because you’re going to be able to get more automation, more intelligence, more productivity, more capabilities, more augmentation of your employees, as I mentioned.
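
To make the four zones easier to picture, here is a hand-wavy toy sketch (ours, not Einstein’s design) contrasting the shape of the first three: predictive scoring, generative drafting, and an agentic step that decides and acts. Every function, field and threshold here is made up for illustration.

```python
# Toy contrast of the first three "zones": predictive, generative, agentic.
# None of this is Salesforce's actual implementation.

def predictive_zone(deal: dict) -> float:
    """Zone 1: score an outcome from structured features (classic ML)."""
    return 0.8 if deal["stage"] == "negotiation" and deal["engaged"] else 0.3

def generative_zone(deal: dict) -> str:
    """Zone 2: draft content; in practice this call would go to an LLM."""
    return f"Hi {deal['contact']}, following up on our proposal for {deal['name']}..."

def agentic_zone(deal: dict) -> list[str]:
    """Zone 3: decide and act, not just predict or draft."""
    actions = []
    if predictive_zone(deal) < 0.5:          # low win probability
        actions.append("schedule_call")       # take an action
        actions.append(generative_zone(deal)) # and draft the outreach
    return actions

print(agentic_zone({"name": "Acme renewal", "stage": "discovery",
                    "engaged": False, "contact": "Brian"}))
```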

Salesforce can use AI to help its customers in areas such as call summaries, account overviews, responding to its customers’ customers, and more

And you’re right, we’re going to see a wide variety of capabilities, exactly like you said, whether it’s the call summaries and account overviews and deal insights and inside summaries and in-product assistance or mobile work briefings. I mean, when I look at things like service, we see the amount of case deflection we can do and the productivity enhancements with our service teams, not just in replies and answers, but also in summaries and summarization. We’ve seen how that works with generative and how important that is in knowledge generation and auto-responding conversations, and then we’re going to have the ability for our customers to — with our product.

Salesforce has its own AI models, but Salesforce has an open system – it’s allowing customers to choose any models they wish

We have an open system. We’re not dictating that they have to use any one of these AI systems. We have an ecosystem. Of course, we have our own models and our own technology that we have given to our customers, but we’re also investing in all of these companies, and we plan to be able to offer them as opportunities for those customers as well, and they’ll be able to deliver all kinds of things. And you’ll see that, whether it’s going to end up being contract digitization and cost generation, or survey generators, or all kinds of campaign assistance.
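
One way to picture what an “open system” means in code is a thin, provider-agnostic interface with the underlying model swappable. The sketch below is ours; the class and method names are invented for illustration and are not Salesforce APIs.

```python
# A sketch of a model-agnostic interface: the calling code never cares which
# provider sits behind it. Names are illustrative, not real Salesforce APIs.
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class InHouseModel:
    def complete(self, prompt: str) -> str:
        return f"[in-house model] {prompt[:40]}..."

class PartnerModel:
    def __init__(self, vendor: str):
        self.vendor = vendor
    def complete(self, prompt: str) -> str:
        return f"[{self.vendor} model] {prompt[:40]}..."

def draft_campaign(model: TextModel, brief: str) -> str:
    # Same business logic regardless of which model is plugged in.
    return model.complete(f"Write a campaign email for: {brief}")

print(draft_campaign(InHouseModel(), "spring launch"))
print(draft_campaign(PartnerModel("Anthropic"), "spring launch"))
```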

Slack is going to be an important component of Salesforce’s AI-related work; management sees Slack as an easy-to-use interface for Salesforce’s AI systems

Slack has become incredible for these AI companies; every AI company that we’ve met with is a Slack company. All of them make their agents available for Slack first. We saw that, for example, with Anthropic, where Claude really appeared first, and [ Claude 2 ], first in Slack.

And Anthropic, as a company, uses Slack internally; they take their technology and develop news digests every day and newsletters, and they do incredible things with Slack. Slack is just a treasure trove of information for artificial intelligence, and you’ll see us deliver all kinds of new capabilities in Slack along these lines.

And we’re working, as I’ve mentioned, to get Slack to wake up and become more aware, and also for Slack to be able to do all of the things that I just mentioned. One of the most exciting things I think you’re going to see at Dreamforce is Slack very much as a vision for the front end of all of our core products. We’re going to show you an incredible new capability that we call Slack Sales Elevate, which is our core Sales Cloud system running right inside Slack.

That’s going to be amazing, and we’re also going to see how we’re going to release and deliver all of our core services in Salesforce through Slack. This is very important for our company: to deliver Slack very much as a tremendous, easy-to-use interface on the core Salesforce, but also on all of these AI systems. So all of that is that next generation of artificial intelligence capability, and I’m really excited to show all of that to you at Dreamforce, as well as Data Cloud.
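
For readers curious what “Slack as the interface for AI” looks like mechanically, here is a bare-bones sketch using the public slack_bolt SDK: a bot that listens for messages and replies with an AI-generated answer. The answer_with_ai() helper is a placeholder for whatever model or service actually generates the reply, and the environment variable names follow the standard Slack examples.

```python
# A bare-bones sketch of Slack as a front end for an AI assistant,
# using the public slack_bolt SDK (Socket Mode).
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

def answer_with_ai(question: str) -> str:
    # Placeholder: call your model/provider of choice here.
    return f"(AI summary for: {question})"

@app.message("summarise")
def handle_summarise(message, say):
    # Reply in-channel with an AI-generated answer to the user's message.
    say(answer_with_ai(message["text"]))

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```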


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, DocuSign, MongoDB, Okta, and Salesforce. Holdings are subject to change at any time.