What We’re Reading (Week Ending 17 April 2022)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 17 April 2022:

1. RWH004: Intelligent Investing w/ Jason Zweig – William Green and Jason Zweig

Jason Zweig (00:06:24):

I think the other story about my dad that really sticks in my memory, William, is in 1981, when my dad was dying of cancer, I was home for a visit and the phone rang, and a voice said, “Is this the Zweig residence?” Very polite, formal sounding man. And I said, “Yes, can I help you?” And he said, “Is Irving there?” And I said, “Yes, but he’s not really able to come to the phone, can I take a message?” And as I recall the man’s name he said, “Well, could you tell him that Glen Irwin is on the phone?” And I knew everything about my parents’ business and a lot about their life history, I had never heard of this man. And I went and I told him. At that point it was very difficult for my dad to move around the house because his lung cancer had spread to his legs, but he looked at me and then a light came on in his eyes and he said, “Oh, I’ll speak to him.” And he, with a great deal of difficulty, came to the phone.

Jason Zweig (00:07:28):

If you’ve ever listened to a stunning conversation that you can only hear one half of, it always sticks with you. And my dad took the phone and he said, “Glen,” and after a long pause my dad said, “Yes, I remember.” And the person at the other end started telling my dad a story, and my dad kept nodding and saying, “Yes, I remember, I remember,” and I saw something I had never seen: I saw my father cry. And I couldn’t hear almost anything of what Mr. Irwin was telling him, but they talked for about 10 minutes. And at the end my dad said, “Thank you very much, I hope so,” from which I immediately inferred, and I think correctly, that Mr. Irwin had said to my dad, “I hope I will get to see you while I still can.”

Jason Zweig (00:08:22):

And when he hung up I said to my dad, “Who was that?” And my dad proceeded to tell me the other half of the story, which is that sometime in the late 1930s, my dad was a student at Union College in Schenectady, New York, and he was walking to class one morning, and he was walking behind a student, and my dad noticed he was black. And at that time he was either the only black student, or one of maybe three black students, or a handful of black students there, and my dad had never seen him before. And they were both walking along, minding their own business, and suddenly from behind a few trees a bunch of white guys jumped the black student and started kicking him and beating him up. And my dad immediately dropped his books, or whatever he was carrying, and jumped in and fought back and took Glen Irwin’s side, even though he didn’t know who this kid was, but it was obvious to him who was right and who was wrong.

Jason Zweig (00:09:26):

And momentarily the campus security people came along and broke up the fight, and they all got dragged to the office of the president of the university, whose name was Dixon Ryan Fox, who was a very famous scholar. And of course the white kids who had jumped Glen Irwin all blamed him, and they said, “We were walking along, minding our own business, and this N-word guy attacked us, so we had to fight back, and then this kid came along and made even more trouble, and that’s what happened.” And so Fox turned to my dad, and Glen Irwin, and said, “What’s your side of the story?” And Glen Irwin was so scared he couldn’t speak, and my dad said, “Well, President Fox, maybe you remember me from when I was admitted to Union College,” because my dad had gotten a rejection letter when he had initially applied that said, “You’re qualified for admission but the Jewish quota is filled,” because in the 1930s most elite educational institutions in this country had a policy that they would only admit so many Jews, and the Jewish quota had been filled.

Jason Zweig (00:10:37):

And so my dad immediately got in his family’s wagon, because in those days they didn’t have cars, and rode to Schenectady, which was probably about 25 miles away, 30 miles away, and he waited outside President Fox’s office all day long until his secretary said that he could go in. And he was admitted, and he said to the president of the college, “You sent me this letter, and it said the Jewish quota has been filled. Well, as you know, President Fox, the winds of war are gathering in Europe, and young American men may be called into military service. Should I tell the US Army that the Jewish quota has been filled when I’m drafted?” So he’s telling this story, and President Fox says, “I remember you young man, why don’t you tell me what really happened?” And so what happened in the end was the thugs who attacked Glen Irwin were expelled. Glen Irwin went on and, if I remember right, he became something like a chemical engineer, and became a senior executive at a major company in the US. And what to me was so striking about this story is that my dad had never told any of us about this. My mom had never heard the story, in fact, the day it happened my mom didn’t even hear about it, because all this happened between me and my dad, and that, I think, is really the definition of quiet courage, when you do something that noble and you never even talk about it. And he completely transformed this man’s life, and obviously Mr. Irwin was calling because somebody had told him, “Irving Zweig is very sick,” and they hadn’t spoken in over 40 years…

…William Green (00:31:20):

And I wondered if you could talk about the element of luck versus skill. Clearly these guys have to have skill. I remember people telling me that they had been in investment meetings with Peter Lynch at Fidelity, and they would say, “Look, I came out of the same meeting. I heard the same information from the same companies and he made more money than I did again and again.” So there was clearly something he had. And yet there is an amount of luck that I think we can’t deny. Can you unpack that a little for us?

Jason Zweig (00:31:47):

One way I like to think about it is that there’s a skill to being lucky. And I know you’ve heard me tell this story before, William, and technically it has nothing to do with investment management. But people often ask me how I got to edit Graham’s book, The Intelligent Investor. And they expect me to say, “Oh, the publisher did a beauty contest and brought in 10 different writers and had each one write a sample chapter.” Or, “They interviewed people,” or whatever. And it’s like, “No, that’s not what happened at all.”

Jason Zweig (00:32:19):

What happened is this. So I had read a book and then interviewed the author, a book called The Luck Factor by a British psychologist named Richard Wiseman. And he had done a sort of big nationwide survey of people’s attitudes toward luck. And when all the surveys came back, he and his team were going through them. And there was one that really jumped out at him, which was, and I’m massively paraphrasing here, I’m going to get all the details wrong, but the essence of it is correct.

Jason Zweig (00:32:49):

This woman had said, “My husband died. Two of my kids have cancer. I lost my job. I got it back, but I’m a very lucky person.” And he said, “I really need to interview this woman.” So they brought her in and he said, “You described all these terrible things that happened to you and you say you’re lucky. Why do you say that?” And she proceeds to tell him this story. And she says that after her husband died and her kids got sick, she felt very depressed, as anybody would. And she was really struggling. And then she decided that she needed a rule. And the rule she came up with was whenever she’s about to go into a room full of people, she thinks of a color.

Jason Zweig (00:33:37):

Then she goes into the room and she walks up to the first person who’s wearing anything of that color and says, “Hello, my name is,” whatever her name was. And so she looks at Professor Wiseman and he looks at her and he says, “Well, what does that have to do with luck?” And she says, “I always have a date on Saturday night.” So I had just read this and heard the story from him. And there was a huge party at Time Inc., where, I think, you and I were both working at the time. And hundreds of journalists were there. I forget what the occasion was.

Jason Zweig (00:34:10):

And I was talking, as usual, with my closest friends and not really socializing with the group. But before I had walked in the room, I had said to myself, and I’m not sure which color it was, but I’m going to say blue. I had said blue. And so I looked across the room and there was somebody I knew wearing blue. And I said to my friends, “Excuse me, I really have to go talk to her.” And it was our mutual friend, Nina.

William Green (00:34:39):

This is Nina Munk who’s a wonderful writer.

Jason Zweig (00:34:43):

Yep. And so I lost her in the crowd. And I hadn’t talked to her in like three years or four years or something. And I was like, “Ah, the heck with it. Forget it.” And then I was like, “No, I have to talk to her because she’s wearing,” whatever color it was, blue. And I found her because I was looking for the color, and we had a wonderful talk about nothing in particular and life went on.

Jason Zweig (00:35:06):

And I went back to work the next day, et cetera, et cetera. But it turns out a couple days later, her book publisher takes her out to lunch to congratulate her for finishing her wonderful book on the merger, the takeover of Time Warner by AOL.

William Green (00:35:21):

Fools Rush In.

Jason Zweig (00:35:22):

Fools Rush In. And her publisher says to her, “Oh-

William Green (00:35:25):

And we were working for those fools.

Jason Zweig (00:35:26):

That’s correct. And her publisher says to her, “Oh, Nina, you could help me with one thing. We have this book by this guy who’s dead, Benjamin Graham, I think his name is. And it still sells, but it’s old and we need to update it. Who do you think would be good for that?” And she said my name. Now, she insists to this day that she would’ve said my name anyway, but I’m not so sure about that. I think she might have said, “Well, I don’t know. There’s like five different people you could try. One of them is Jason Zweig.”

Jason Zweig (00:35:58):

But instead, because I just so happened to run over to her because she was wearing the right color, she said my name. And that’s why they hired me. And so the thing is, that was despite the fact that I was trying to outwork everybody else in financial journalism, despite the fact that I had all these great contacts, despite everything I threw into my job. Why did I get this honor of a lifetime? Because Nina Munk happened to be wearing a dress whose color I had thought of because I had read a book.

Jason Zweig (00:36:34):

So skill is hugely important and it matters, but much of life, maybe most of life, is shaped by just these weird moments of random chance. And the more professional you are, and the more intellectual effort is involved in what you do, the more vehemently you will deny the importance of luck, but it affects everyone in every field. And it’s hugely important in asset management too.

2. Alexandr Wang – A Primer on AI – Patrick O’Shaughnessy and Alexandr Wang

[00:14:29] Patrick: I couldn’t agree more on that point, maddening that we don’t become just the perfect beacon for all the most talented people. The interesting analogy that I’ve heard before, just to wrap our minds around the sorts of things or tasks or functions or whatever that constantly improving AI/ML models can accomplish. One model that was funny and interesting was: anything that an intern could do for you, you might be able to scale up through one of these models. It’s complicated enough that a person’s on it now, but it’s simple enough that you’d give it to an intern; it’s sort of repetitive. I always kind of like that conception. What’s your way of thinking about how to communicate to your audience, other businesses using your tooling, and just people in general, what categories of things AI can do well, and maybe what categories of things we’re excited about but might be a very long time until AI can do well?

[00:15:19] Alexandr: I think this is one of the general misconceptions about AI and machine learning, which I think causes a great deal of FUD. Which is that the intuitive belief is that the things that are easy for humans to do are going to be the things that are easy for AI and machine learning to do, which is absolutely not the case; the things that are easy for algorithms are relatively orthogonal, frankly, to the things that are easy for humans to do. One simple example here is I think that it’s going to be a very, very long time before we have home robots that can do things like fold your laundry and do your dishes, but a much shorter time span (I think this is already the case today) where you can have artificial intelligence systems that are world class copywriters and can write better rhetoric, better words than most people ever could. There’s probably a few frameworks I would assign to this. I think in a broad general sense, one way to think about the potential impact, or lower bound potential impact, of artificial intelligence is kind of as you mentioned, which is the ability to scale repetitive human tasks. So take repetitive human tasks and, instead of going from zero to one, go from one to N with human work. And I think that this is a generally amazing thing to happen because I think that humans for the most part don’t enjoy repetitive tasks or generally find those relatively unpleasant and find it much more exciting to be creative and to constantly be creating.

This ability to scale human tasks from one to N is going to be this incredible, not only economic good or economic enabler for the world, but also going to be a significant enabler for humans to be more leveraged, more happy, more creative, et cetera. I think that’s one way to sort of contextualize the broad impact that AI can have. And there’s a bunch of other nuances, which I’m sure we’ll get to. If you think about what tasks humans are good at versus what tasks algorithms are good at, generally that more or less boils down to data availability, which is that where there are large pools of digital data an algorithm can learn from, and those pools of digital data either have been collected in the past or are easy to collect in the future, those are going to be the problems which algorithms can do effectively and can learn to do effectively. And then areas where digital data does not exist and is expensive to collect, those are going to be the last things to be automated. So a great example is if you look at GPT-3 and these large language models, the real secret behind it is that it leverages two decades of Reddit data, which is two decades of humans using the internet and basically typing language into the internet in various forms for decades and decades and decades. And that is a pool of digital data that it used to be able to do these incredible things in writing long-form text.

Then if you think about this parallel that I mentioned around home robots, there’s so little actual captured data of, let’s say, someone folding a shirt, or somebody folding a towel, or going around and doing chores, that the ability to actually collect and produce the level of digital data necessary to produce algorithms that can understand and actually perform these tasks is an incredibly, incredibly, incredibly hard road. This extends, by the way, to things that are really unintuitive. So for example, DeepMind and OpenAI very recently released algorithms, some of which are very good at … DeepMind released an algorithm that’s very good at competitive programming, OpenAI released an algorithm that can prove very difficult math problems or math theorems. And these are both things which are very, very challenging for humans to do. Very, very premium skill sets as far as humans go. But there are incredible pools of digital data, as well as abilities to verify or simulate the outcomes here, which allow these algorithms to perform actually incredibly, incredibly well. There’s this very interesting process by which artificial intelligence will slowly automate or meaningfully change what human jobs that are primarily digital in context will look like. And then a lot of the physical work will, I think, be generally untouched for a very long time…

[00:20:18] Patrick: Maybe it makes sense to help people understand the process of creating one of these models in the first place. I think through the discrete steps; let’s say the outcome is a model that makes a useful prediction. Ultimately, this is all predictions in a sense; that’s what’s being generated by the models in the first place. I don’t know where to start, whether it’s with raw data or annotation of data, and we’re starting to get into what Scale now provides for companies. But how do you think about explaining the discrete or the important stages of building one of these models in the first place? I think just understanding that architecture will let us dig into each piece a little bit more.

[00:20:51] Alexandr: Again, everything starts with the data. I often will analogize the data for these algorithms as the ingredients that you would make a dish with, or the ingredients that you would make something that you’d eat with. It is incredibly, incredibly important. We often say this thing, which is: data is the new code. If you compare traditional software versus AI software, in traditional software the lifeblood is really the code. That’s the thing that informs the system what to do. In artificial intelligence and machine learning, the lifeblood is really the data. That is certainly one major change. That’s really important. The life cycle for most of these algorithms is a few fold. So first is this process of collecting large amounts of data. By collecting, I mean it could be data that is already sitting there. There’s a lot of software processes that already collect a bunch of data. There’s a lot of cameras in the world that already collect a bunch of data, but you need to get the raw data in the first place. Then it goes through this process of annotation, which is the conversion of these large pools of unstructured data into structured data that algorithms can actually learn from. This could be, for example, in imagery or video from a self-driving car, marking where the cars and pedestrians and signs and road markings and bicyclists and whatnot are, so that an algorithm can actually learn from those things. It could be, for example, in large snippets of text, actually summarizing that text so that the algorithm can now understand and learn what it means to summarize text. So whatever that translation is from unstructured data to a structured format that these algorithms can learn from. Then it goes through a training process.

So these algorithms basically look through these reams and reams of data, learn patterns, and slowly train themselves, so to speak, to be able to do whatever task is necessary on top of the data. And then you launch one of these algorithms into production and you run it on real world data, and it’s constantly producing, as you mentioned, these predictions. The very important piece is, this is not a sort of one-way process; this is actually a loop. If you look at almost every algorithm that has launched out there in production, it is not a case of you build the algorithm and then you’re done, because these algorithms are generally very brittle, and unless you’re constantly updating them and maintaining them, they will eventually do things that you don’t want them to do, or they’ll eventually perform poorly. There’s this critical process by which you are constantly replenishing them. You’re constantly going and recollecting new data, annotating it, training the algorithm, launching that new algorithm into production, and you constantly undergo this process to create very high quality algorithms.
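The loop described here — collect, annotate, train, deploy, monitor, repeat — can be sketched in a few lines. This is a minimal illustration only; every function name and metric below is a hypothetical stand-in, not any real Scale AI API:

```python
# Minimal sketch of the data-centric ML loop: collect -> annotate -> train
# -> deploy/evaluate -> repeat. All names here are hypothetical stand-ins.

def collect_raw_data():
    # In practice: pull new imagery, text, or sensor logs from production.
    return ["raw_sample_1", "raw_sample_2"]

def annotate(raw):
    # Convert unstructured data into labeled examples an algorithm can learn from.
    return [(sample, f"label_for_{sample}") for sample in raw]

def train(model, labeled):
    # Retrain / fine-tune on the freshly labeled data.
    model["seen"] += len(labeled)
    return model

def evaluate(model):
    # Stand-in quality proxy; real systems track accuracy drift in production.
    return min(1.0, model["seen"] / 10)

model = {"seen": 0}
for iteration in range(3):  # in production this loop never really ends
    raw = collect_raw_data()
    labeled = annotate(raw)
    model = train(model, labeled)
    print(f"iteration {iteration}: quality proxy = {evaluate(model):.1f}")
```

The point of the sketch is the shape, not the internals: the brittleness Alexandr mentions is why the `for` loop wraps the whole pipeline rather than ending after a single training pass.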

[00:23:19] Patrick: I want to make sure that this interesting point you made about data being the new code really hits home for people, and maybe even put that in a business context. So if the IP or the moat of a software company is this code base that takes a very long time to develop, has all sorts of dimensions to it. Maybe it’s microservices, maybe it’s some code monolith; it’s unquestionably an incredibly valuable asset. It’s digital, but it’s an incredibly valuable asset to the company. And you’re talking about, I think, a transition to something different, where maybe, I don’t know, maybe Google’s data repository or something, is this unbelievable advantage that they have because no one else has access to all of their data. Is that kind of what you mean? That ultimately maybe for something like Google, their data is worth a lot more than their code base. And that that would become a trend that we see sort of across industries?

[00:24:07] Alexandr: If you look at the highest performing algorithms across a variety of different domains, image recognition and speech recognition and summarizing texts and answering questions about texts, so these very different cognitive tasks, and look under the hood, they actually all use the exact same code base. That’s been this very meaningful shift that’s happened over the past few years in artificial intelligence. We’re at this point where the code has become effectively the same and more or less a commodity, so to speak, when it comes to artificial intelligence and machine learning. The thing that enables the differentiation is really the data and the data sets that are used to power these algorithms. To your point, if you think about … One of the ways that we talk about this in a business context is if you think about what is your strategic asset? In general in business, your strategic assets are the things that allow you to differentiate yourself against your competition. In a world where 99.99% of the software in the world is sort of traditional software, and then only 0.01% is AI software, then you care the most about your code. Your code is what will differentiate your product versus your competitor’s product, or your processes versus your competitors’ processes, et cetera. But then as more and more of the software in the world is written, infused with AI, using AI, or over time as the interfaces shift to AI, interfaces like an Alexa-like interface for example, as that shift happens, as you go from 99.99:0.01 to 90:10 or 80:20, or even 50:50 over time, the vector of differentiation totally shifts to data and the data sets that you have access to. And so what that means is that your strategic differentiator, to your point, as a firm is going to be primarily based off of what are my existing data assets.

And then what is the engine by which I’m constantly producing new, insightful, differentiated data to power these core algorithms that are actually powering my business. And these algorithms at the core that will power the future of business are, I think, relatively common across firms. I think there’s definitely algorithms around automating business processes that are going to result in significantly more profitable firms over time. There’s going to be algorithms that are based around customer recommendations and the customer life cycle, which is a lot of the algorithms that we’ve seen to date. Imagine the TikTok recommendation algorithm, but for every economic interaction or every economic transaction in your life, constantly identifying the perfect next thing that you may want to transact with. That is going to exist across every firm; every industry is basically going to have to build its version of that. And that’s going to result in significantly more efficient trade. The long-term impacts of that you could think of as a general reduction in marketing expenses, or sales and marketing expenses, because the algorithm just does a better job at knowing what the user wants to do next, without having to do all this marketing and all this very active sales. There are a lot of very real changes, I think, to the physics of what the best businesses will look like in, let’s say, a decade or two decades or three decades, that come from artificial intelligence. If you think about what will allow me to do these things better than someone else, it’s the quality, efficacy and volume of the data that is used to power these algorithms…

[00:41:11] Patrick: If we zoom out and go to the more market side of things, and I put my investor hat on and think about what drives enterprise value, value creation, the things that investors ultimately care about when they’re putting money into a business: they want to get a lot more money out. The world of software has obviously been center stage for seven to 10 years now, because these businesses have tended to be very scalable, fairly high margin, and incredibly fast growing. And the word that you never want to hear as an investor in the growth world is deceleration, where maybe they’re reaching saturation points and software is no longer a new thing; it’s a fairly mature thing. How do you think about this concept you mentioned of an S curve, where maybe for software we’re approaching the diminishing part of that S curve? Where is AI on that same curve, and how might these two things intersect to form lots of new enterprise value in the future if software becomes overly saturated?

[00:42:04] Alexandr: One thing to note, thinking about software for a moment: the sort of alchemy or the magic of software is that, A, you’re able to collect very large scale data sets in a very coordinated way; B, you’re able to build simple workflow tooling on top of these data sets, think about your traditional CRM, or frankly the majority of SaaS tooling is workflows on top of these data sets that enable business value; and then C is basically infinite scalability of a lot of these systems. These are some of the technological primitives that have enabled SaaS broadly speaking, or software in general, to produce a lot of value for most enterprises. But these primitives, or these forms of alchemy, have some cap; that accounts for the saturation of software that you’re mentioning. Well, then if you think about AI technology and you use this mental model that I mentioned before, the fundamental promise of AI technology is that you can take repetitive tasks that people are doing and go from one to N with those repetitive tasks, so you can automate the Nth repetitive task rather than relying on humans for that. Well, if you look at the majority of Fortune 500 businesses, or the majority of the largest enterprises in the world, there are an incredible number of parts of their business where they spend enormous amounts of money on large teams of people to do repetitive tasks.

The alchemy that is possible there is not only the automation of meaningful parts of that work, but also the ability to go even further than even the best trained humans could in many of those tasks. The potential economic value, or the TAM so to speak, of AI and machine learning is just absolutely astronomical. I think it is at minimum 10X, probably 100X, the total business value that has been generated by SaaS systems or software historically. If you think about it, you have this one S curve of the saturation of software. And then there’s this very, very early S curve that is developing right now around the productization and productionization of large scale AI systems, let’s say in the enterprise, or let’s say across businesses. And the real question is, okay, what’s the pacing of that S-curve versus the pacing of the saturation and deceleration of the current software S-curve? And I’m an optimist: in not too long, we’re going to have a massive proliferation of AI use cases within the enterprise that are going to be way more impactful than the use cases of software in the past. And the way you’ll see that is the business ROIs generated from high quality AI systems are going to be 10X more than the business value generated by, let’s say, deploying a CRM or deploying an ERP system.

3. Why Market Timing Is Near Impossible – Peridot Capital Management

Let’s assume for a moment that you, unlike most everyone else on the planet, have an uncanny ability to forecast when S&P 500 company profits are going to decline within the economic cycle. You surmise that the market should go down when profits are falling, so you will use this knowledge to simply lose less money during market downturns than the average investor.

The long-term data would support this strategy. Since 1960, the S&P 500 index has posted a calendar year decline 12 times (about 19% of the time). Similarly, S&P 500 company profits have posted calendar year declines 13 times during that period (21% of the time). This matches up with the often-repeated statistic that the market goes up four years out of every five (and thus you should always be invested). But what if you could predict that 5th year? Surely that would work.

Here’s the kicker: while the S&P 500 index fell in value during 12 of those years and corporate profits fell during 13 of those years, there were only 4 times when they both fell during the same year. So, on average, even if you knew for a fact which years would see earnings declines, the stock market still rose 70% of the time.

So the stock market goes up 80% of the time in general, and even in years when corporate profits are falling it goes up 70% of the time. And so I ask you (and every client I discuss this with): how on earth can anyone expect to know when to be out of the market?
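The conditional-probability point can be checked in a few lines of arithmetic, using the article's own counts (the 62-year span, 1960 through 2021, is an assumption on my part):

```python
# Quick check of the article's figures (assumed span: 1960-2021, 62 calendar years).
years = 62
market_down = 12   # calendar years the S&P 500 index fell
profits_down = 13  # calendar years S&P 500 profits fell
both_down = 4      # years in which both fell together

# Unconditional odds the market rises in any given year
p_up = 1 - market_down / years
print(f"Market up in any year: {p_up:.0%}")  # ~81%

# Odds the market rises *given* that profits fell that year
p_up_given_down = (profits_down - both_down) / profits_down
print(f"Market up despite a profit decline: {p_up_given_down:.0%}")  # ~69%
```

Both numbers round to the article's "80%" and "70%" figures, which is the whole argument: even perfect foresight about earnings declines barely changes the odds.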

4. There Is No Playbook – Christoph Gisiger and Gavin Baker

Mr. Baker, you are one of the very few tech investors who lived through the dotcom crash in the year 2000 firsthand and remained active in the sector afterwards. As a battle-hardened industry veteran, how do you assess today’s market environment compared to back then?

Here’s the big thing: A lot of non-profitable tech companies with under $100 billion market capitalization just experienced a crash in valuations similar to the one we saw in the year 2000. But from a fundamental perspective, I don’t think the bursting of the dotcom bubble has many parallels to what’s happening today. At that time, after the bubble burst, the fundamentals of every tech company imploded; they missed their earnings numbers by thirty, forty or fifty percent. Many had significant year-over-year revenue declines, and then their stocks went down more.

And how do things look today?

I do not believe that the fundamentals are going to crash in a similar way. In the year 2000, nobody knew which business models were going to work on the internet. The buildout in telecom equipment, data centers and software was not based on a consumption basis. It was built in anticipation of demand that took much longer than expected to materialize. In fact, we added so much telecom capacity that it took 15 years to absorb the amount of fiber and optical components we put in the ground. Every bank, every retailer and almost every other company was in a huge hurry to go online. They spent all this money to put up a website, but then they were like: «Wow! Why did I do that?» Today, you don’t see that degree of overbuild or excess on the supply side, because it’s all sold on a consumption basis. I promise, if the big cloud hyperscalers stopped spending on CapEx, they would run out of capacity in twelve to eighteen months. It’s a very different environment.

What does this mean for tech stocks?

It creates opportunities because today’s unprofitable tech companies are so much better than the ones back then. Many of them are Software as a Service companies where we know that the business model works. They have immense control over their P&L. You have seen some of them take their free cash flow margins up 80-90% in two quarters. They can be profitable whenever they want. They’re making a conscious trade-off between growth and profitability, and when they tilt towards more profitability, they don’t stop growing, they just grow slower. So it’s wild to me that you’ve had a move comparable to the year 2000 crash in non-profitable tech companies. Their forward multiples have compressed at least as much if not more, but they are great businesses, they’re not missing their numbers.

However, many of these companies carried notably high valuations. As interest rates have risen, their shares have come under pressure.

If you’re unprofitable, you’re essentially a long-duration asset. Hence, it’s natural that you take some pain as interest rates go up. But I think you’ve taken all the pain in the terminal valuation now. I don’t see much more multiple compression. Thoma Bravo, a private equity firm, just took out a software company at 12x forward sales. Today, you can buy a lot of software companies at roughly half that multiple, and they are growing faster with better fundamentals than the asset acquired by Thoma Bravo. So if you’re a software company trading at 6x sales, and an inferior company just got bought out at 12x sales by a very knowledgeable private equity buyer, I think that’s enough of a discount. That’s why I don’t see much more multiple compression. What will drive performance is growth and the relative operational performance of these businesses, and they should do reasonably well in an inflationary environment…

How does this affect the outlook for the tech sector?

For their research, a lot of people are going back to look at the 70s. That’s a great exercise for energy, materials or restaurant companies where the business models are stable. But it may lead you to terrible conclusions for companies and industries where the business models have drastically changed. That’s why looking at the 70s to understand how tech will do today is absurd. Today’s tech companies are totally different, their business model is completely different. They have much higher ROICs, they are less capital intensive, have much higher margins, more pricing power and more gross profit per employee than tech companies in the 1970s. There is no precedent, there is no playbook for these business models in a high inflation environment. In America and Europe, you have never seen how inflation impacts the business models of different software companies. You haven’t seen how it impacts different internet business models. So from first principles thinking, there is an exciting opportunity to reach very differentiated conclusions and first principles thinking suggests these business models should do well fundamentally in a high inflation environment…

In today’s world, cyberattacks are also occurring more and more often. Does this speak in favor of cybersecurity companies?

During the first twenty years of my career, I was always very negative on cybersecurity because it was one of the very rare industries where scale was a massive disadvantage rather than an advantage. Before the rise of artificial intelligence, human beings were writing software, it was a very manual process. And, as a cybersecurity company got bigger, hackers would start to optimize more and more for hacking that particular cybersecurity company’s software. As a result, the performance of that company would go down and it would lose customers. AI changed all of that because if the AI learns from the attacks, then you get better with scale. So that’s another area I’m excited for. Antonio Gracias, a fantastic thinker, has this great phrase «pro-entropic». To me, cybersecurity companies are pro-entropic. They benefit from rising chaos in the world.

5. Deep Roots – Morgan Housel

Forecasting, “If this happens, then that will happen,” rarely works, because this event gives rise to another trend, which incentivizes a different behavior, which sparks a new industry, which lobbies against this, which can cancel that, and so on endlessly.

To see how powerful these chain reactions can be, look at history, where it’s easy to skip the question, “And why is that?”

Take the question, “Why are student loans so high?”

Well, in part because millions of people ran to college when job prospects were dim in the late 2000s.

Why were job prospects dim?

Well, there was a financial crisis in 2008.

Why?

Well, there was a housing bubble.

Why?

Well, interest rates were slashed in the early 2000s.

Why?

Well, 19 hijackers crashed planes on 9/11 that spooked the Fed into action to prevent a recession.

Why? Well …

You can keep asking, why? forever. And when you do, you get these crazy connections, like a terrorist attack leading to student debt a decade later.

Every current event has parents, grandparents, great grandparents, siblings, and cousins. Ignoring that family tree can muddy your understanding of events, giving a false impression of why things happened, how long they might last, and under what circumstances they might happen again. Viewing events in isolation, without an appreciation for their deep roots, helps explain everything from why forecasting is hard to why politics is nasty.

Japan’s economy has been stagnant for 30 years because its demographics are terrible. Its demographics are terrible because it has a cultural preference for small families. That preference began in the late 1940s when, after losing its empire, its people nearly starved and froze to death each winter when the nation couldn’t support its existing population.

It was almost the opposite in America. The end of wartime production in 1945 scared policymakers, who feared a recession. So they did everything they could to make it easier for consumers to spend money, which boosted the economy, which inflated consumers’ social expectations, which led to a household debt boom that culminated with the 2008 crash.

No one looking at the last decade of economic performance blames Harry Truman. But you can draw a straight line from those decisions to what’s happening today.

6. “Ignoring the Possibility of Progress Is a Sure Method of Destroying Ourselves” – Rafaela von Bredow, Johann Grolle, and David Deutsch

DER SPIEGEL: Professor Deutsch, you believe that mankind, after billions and billions of years of absolute monotony in the universe, will now reshape it to its liking, that a new cosmological era is coming. Are you serious?

Deutsch: I am not the first to propose this idea. The Italian geologist Antonio Stoppani wrote in the 19th century that he had no hesitation in declaring man to be a new power in the universe, equivalent to the power of gravitation.

DER SPIEGEL: And fly to distant planets? Tap energy from black holes? Conquer entire galaxies?

Deutsch: I am not saying that we will necessarily do all this. I am only saying that, in principle, there is nothing to stop us. Only the laws of physics could prevent us. And we do not know a law of physics that forbids us, for example, from traveling to distant stars.

DER SPIEGEL: Theoretically, the colonization of the galaxy may be possible. But how would this work practically?

Deutsch: Human brains, assisted by our computers, can create the necessary knowledge for this – even though we do not yet know how.

DER SPIEGEL: Your late colleague Stephen Hawking did not have such high hopes for Homo sapiens. He thought we were “just a chemical scum on a moderate-sized planet, orbiting around a very average star in the outer suburb of one among a hundred billion galaxies.” Was Hawking wrong?

Deutsch: Well, it’s literally true. Just as it is true in a sense that the war in Ukraine was caused by atoms. It’s factually true, but it doesn’t explain anything. What we need to understand the world and our role in it are explanations, not empty statements.

DER SPIEGEL: Even among your fellow researchers, it might be hard to find many who grant us humans such a godlike role in the universe as you do.

Deutsch: Science is currently in a deplorable state. I’m reluctant to diss my colleagues, but, unfortunately, there’s a sort of cult of the expert. Accordingly, many researchers remain narrowly focused on their particular field, and even within that they are focused on creating usefulness rather than finding explanations. This is a terrible mistake.

DER SPIEGEL: What is so terrible about useful science?

Deutsch: All usefulness, every prediction, comes from understanding. However, if you no longer strive for fundamental explanations, but believe that it is sufficient to generate something useful, then you will merely move incrementally from one decimal place to the next, and even then, only in areas that are already well studied. This tendency has dramatically slowed down progress.

DER SPIEGEL: There are photos of a black hole, we can genetically modify people and develop a vaccine against a new pathogen within months. All this is not progress?

Deutsch: Yes, it is, but it is going slower than it could.

DER SPIEGEL: Biologist Richard Dawkins believes that this is perhaps because our brains are insufficient to comprehend the increasingly complex world. After all, it evolved to deal with problems on the African savanna. Now, however, we have to deal with stars, quantum and nuclear reactions.

Deutsch: Dawkins overlooks the fact that there is basically only one kind of computer. Whether it’s your laptop, or a supercomputer for modeling the climate, any computer can run the same computations. And our brain is nothing more than a universal computer. Its hardware can run any program, and we can use extra memory in our computers if necessary; therefore, it can run any explanation. There is no such thing as a computer that’s suitable for understanding the savanna, but not the sky. We couldn’t build one if we tried. It violates the laws of physics.

7. DALL•E 2 – Sam Altman

1) This is another example of what I think is going to be a new computer interface trend: you say what you want in natural language or with contextual clues, and the computer does it. We offer this for code and now image generation; both of these will get a lot better. But the same trend will happen in new ways until eventually it works for complex tasks—we can imagine an “AI office worker” that takes requests in natural language like a human does…

…3) Copilot is a tool that helps coders be more productive, but is still very far from being able to create a full program. DALL•E 2 is a tool that will help artists and illustrators be more creative, but it can also create a “complete work”. This may be an early example of the impact of AI on labor markets. Although I firmly believe AI will create lots of new jobs, and make many existing jobs much better by doing the boring bits well, I think it’s important to be honest that it’s increasingly going to make some jobs not very relevant (like technology frequently does).

4) It’s a reminder that predictions about AI are very difficult to make. A decade ago, the conventional wisdom was that AI would first impact physical labor, and then cognitive labor, and then maybe someday it could do creative work. It now looks like it’s going to go in the opposite order.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We have no vested interest in any company mentioned. Holdings are subject to change at any time.