What We’re Reading (Week Ending 16 July 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been consistently sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the readership of The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 16 July 2023:

1. Inside Google’s big AI shuffle — and how it plans to stay competitive, with Google DeepMind CEO Demis Hassabis – Nilay Patel and Demis Hassabis

From the outside, the timeline looks like this: everyone’s been working on this for ages, we’ve all been talking about it for ages. It is a topic of conversation for a bunch of nerdy journalists like me, a bunch of researchers, we talk about it in the corner at Google events.

Then ChatGPT is released, not even as a product. I don’t even think Sam [Altman] would call it a great product when it was released, but it was just released, and people could use it. And everyone freaked out, and Microsoft releases Bing based on ChatGPT, and the world goes upside down, and Google reacts by merging DeepMind and Google Brain. That’s what it looks like from the outside. Is that what it felt like from the inside?

That timeline is correct, but it’s not these direct consequences; it’s more indirect in a sense. So, Google and Alphabet have always run like this. They let many flowers bloom, and I think that’s always been the way, even from the beginning, that Larry [Page] and Sergey [Brin] set up Google. And it served them very well, and it’s allowed them to organically create incredible things and become the amazing company that it is today. On the research side, I think it’s very compatible with doing research, which is another reason we chose Google as our partners back in 2014. I felt they really understood what fundamental, blue-sky, ambitious research was, and they were going to facilitate and enable us to be super ambitious with our research. And you’ve seen the results of that, right?

By any measure, whether AlphaGo, AlphaFold, or the more than 20 Nature and Science papers and so on, all the normal metrics one would use show we were able to deliver really amazing cutting-edge research. But in a way, what ChatGPT and the large models and the public reaction to that confirmed is that AI has entered a new era. And by the way, it was a little bit surprising for all of us at the coalface, including OpenAI, how viral that went because — us and some other startups like Anthropic and OpenAI — we all had these large language models. They were roughly the same capabilities.

And so, it was surprising, not so much what the technology was, because we all understood that, but the public’s appetite for it and obviously the buzz that generated. And I think that’s indicative of something we’ve all been feeling for the last, I would say, two, three years, which is that these systems are reaching a level of maturity and sophistication now where they can really come out of the research phase and the lab and go into powering incredible next-generation products and experiences, and also breakthroughs, things like AlphaFold directly being useful for biologists. And so, to me, this is just indicative of a new phase that AI is in of being practically useful to people in their everyday lives and actually being able to solve really hard real-world problems that really matter, not just the curiosities or fun, like games.

When you recognize that shift, then I think that necessitates a change in your approach as to how you’re approaching the research and how much focus you’re having on products and those kinds of things. And I think that’s what we all came to the realization of, which was: now was the time to streamline our AI efforts and focus them more. And the obvious conclusion of that was to do the merger…

It feels like the ChatGPT moment that led to this AI explosion this year was really rooted in the AI being able to do something that regular people could do. I want you to write me an email, I want you to write me a screenplay, and maybe the output of the LLM is a C+, but it’s still something I can do. People can see it. I want you to fill out the rest of this photo. That’s something people can imagine doing. Maybe they don’t have the skills to do it, but they can imagine doing it. All the previous AI demos that we have gotten, even yours, AlphaFold, you’re like, this is going to model all the proteins in the world.

But I can’t do that; a computer should do that. Even a microbiologist might think, “That is great. I’m very excited that a computer can do that because I’m just looking at how much time it would take us, and there’s no way we could ever do it.” “I want to beat the world champion at Go. I can’t do that. It’s like, fine. A computer can do that.” 

There’s this turn where the computer is starting to do things I can do, and they’re not even necessarily the most complicated tasks. Read this webpage and deliver a summary of it to me. But that’s the thing that unlocked everyone’s brain. And I’m wondering why you think the industry didn’t see that turn coming because we’ve been very focused on these very difficult things that people couldn’t do, and it seems like what got everyone is when the computer started doing things people do all the time.

I think that analysis is correct. I think that is why the large language models have really entered the public consciousness: it’s something the average person, the “Joe Public,” can actually understand and interact with. And, of course, language is core to human intelligence and our everyday lives. I think that does explain why chatbots specifically have gone viral in the way they have. Even though I would say things like AlphaFold (and of course I’d be biased in saying this) have actually had the most unequivocally beneficial effects of any AI on the world so far. If you talk to any biologist you’ll see it: a million biologists, researchers and medical researchers, have used AlphaFold now; I think that’s nearly every biologist in the world. Every Big Pharma company is using it to advance their drug discovery programs. I’ve had dozens of Nobel Prize-winner-level biologists and chemists talk to me about how they’re using AlphaFold.

So a certain set of all the world’s scientists, let’s say, they all know AlphaFold, and it’s affected and massively accelerated their important research work. But of course, the average person in the street doesn’t even know what proteins are, or why those things are important for things like drug discovery. Whereas obviously, for a chatbot, everyone can understand: this is incredible. And it’s very visceral to get it to write you a poem or something that everybody can understand and process and measure against what they do or are able to do…

…There are so many decisions I make every day, it’s hard to come up with one now. But I tend to try and plan out, and scenario-plan, many, many years in advance. So, the way I try to approach things is, I have an end goal. I’m quite good at imagining things, so that’s a different skill, visualizing or imagining what a perfect end state would look like, whether that’s organizational or product-based or research-based. And then, I work back from the end point and figure out what all the steps would be required, and in what order, to make that outcome as likely as possible.

So that’s a little bit chess-like, right? In the sense of you have some plan that you would like to get to checkmate your opponent, but you’re many moves away from that. So what are the incremental things one must do to improve your position in order to increase the likelihood of that final outcome? And I found that extremely useful to do that search process from the end goal back to the current state that you find yourself in.
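That end-goal-backwards method has a simple algorithmic analogue in backward search over a dependency graph. The sketch below is a minimal illustration under our own assumptions (the goals and prerequisite steps are invented), not anything DeepMind has described using:

```python
# Minimal sketch of planning backward from an end goal: given each goal's
# prerequisites (invented for illustration), emit the steps in a workable
# order, prerequisites first (a depth-first topological sort).

PREREQS = {
    "ship product": ["build prototype", "hire team"],
    "build prototype": ["choose research direction"],
    "hire team": [],
    "choose research direction": [],
}

def plan(goal, ordered=None, seen=None):
    """Work back from `goal`, listing prerequisites before dependents."""
    if ordered is None:
        ordered, seen = [], set()
    if goal in seen:
        return ordered
    seen.add(goal)
    for step in PREREQS.get(goal, []):
        plan(step, ordered, seen)
    ordered.append(goal)
    return ordered

print(plan("ship product"))
# ['choose research direction', 'build prototype', 'hire team', 'ship product']
```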

Let’s put that next to some products. You said there’s a lot of DeepMind technology and a lot of Google products. The ones that we can all look at are Bard and then your Search Generative Experience. There’s AI in Google Photos and all this stuff, but focused on the LLM moment, it’s Bard and the Search Generative Experience. Those can’t be the end state. They’re not finished. Gemini is coming, and we’ll probably improve both of those, and all that will happen. When you think about the end state of those products, what do you see?

The AI systems around Google are also not just in the consumer-facing things but also under the hood, in ways you may not realize. So even, for example, one of the things we applied our AI systems to very early on was the cooling systems in Google’s data centers, enormous data centers, actually reducing the energy the cooling systems use by nearly 30 percent, which is obviously huge if you multiply that by all of the data centers and computers they have there. So there are actually a lot of things under the hood where AI is being used to improve the efficiency of those systems all the time. But you’re right, the current products are not the end state; they’re actually just waypoints. And in the case of chatbots and those kinds of systems, ultimately, they will become these incredible universal personal assistants that you use multiple times during the day for really useful and helpful things across your daily life.

From what books to read, to recommendations on maybe live events and things like that, to booking your travel, to planning trips for you, to assisting you in your everyday work. And I think we’re still far away from that with the current chatbots, and I think we know what’s missing: things like planning and reasoning and memory, and we are working really hard on those things. And I think what you’ll see in maybe a couple of years’ time is that today’s chatbots will look trivial by comparison to what’s coming in the next few years.

My background is as a person who’s reported on computers. I think of computers as somewhat modular systems. You look at a phone — it’s got a screen, it’s got a chip, it’s got a cell antenna, whatever. Should I look at AI systems that way — there’s an LLM, which is a very convincing human language interface, and behind it might be AlphaFold that’s actually doing the protein folding? Is that how you’re thinking about stitching these things together, or is it a different evolutionary pathway?

Actually, there’s a whole branch of research going into what’s called tool use. This is the idea that these large language models or large multimodal models, they’re expert at language, of course, and maybe a few other capabilities, like math and possibly coding. But when you ask them to do something specialized, like fold a protein or play a game of chess or something like this, then actually what they end up doing is calling a tool, which could be another AI system, that then provides the solution or the answer to that particular problem. And then that’s transmitted back to the user via language or pictorially through the central large language model system. So it may be actually invisible to the user because, to the user, it just looks like one big AI system that has many capabilities, but under the hood, it could be that actually the AI system is broken down into smaller ones that have specializations.

And I actually think that probably is going to be the next era. The next generation of systems will use those kinds of capabilities. And then you can think of the central system as almost a switch statement that you effectively prompt with language, and it routes your query, or your question, or whatever it is you’re asking, to the right tool to solve that question for you or provide the solution for you, and then transmits that back in a very understandable way, through the best interface really: natural language.
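As a minimal sketch of that switch-statement picture: the tool names and the keyword dispatch below are our own inventions for illustration; a real system would let the central model itself decide which tool to call and would rephrase the tool’s output for the user.

```python
# Toy "central switch statement": route a query to a specialist tool,
# falling back to the general model. All tools here are stand-ins.

def fold_protein(query):
    return f"[protein tool] structure prediction for: {query}"

def play_chess(query):
    return f"[chess engine] best move for: {query}"

def answer_directly(query):
    return f"[central LLM] answer to: {query}"

TOOLS = {"protein": fold_protein, "chess": play_chess}  # specialty -> tool

def route(query):
    """Dispatch on crude keywords; a real router would use the LLM itself."""
    for keyword, tool in TOOLS.items():
        if keyword in query.lower():
            return tool(query)  # tool output would be rephrased by the LLM
    return answer_directly(query)

print(route("Fold this protein sequence: MKTAYIAKQR"))
print(route("What is the capital of France?"))
```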

Does that process get you closer to an AGI, or does that get you to some maximum state and you got to do something else?

I think that is on the critical path to AGI, and that’s another reason, by the way, I’m very excited about this new role and actually doing more products and things, because I actually think the product roadmap from here and the research roadmap from here toward something like AGI or human-level AI are very complementary. The kinds of capabilities one would need to push in order to build those kinds of products that are useful in your everyday life, like a universal assistant, require pushing on some of these capabilities, like planning and memory and reasoning, that I think are vital for us to get to AGI. So I actually think there’s a really neat feedback loop now between products and research where they can effectively help each other…

You’ve signed a letter from the Center for AI Safety — OpenAI’s Sam Altman and others have also signed this letter — that warns against the risk from AI. And yet, you’re pushing on, Google’s in the market, you’ve got to win, you’ve described yourself as competitive. There’s a tension there: needing to win in the market with products and “Oh boy, please regulate us because raw capitalism will drive us off the cliff with AI if we don’t stop it in some way.” How do you balance that risk?

It is a tension. It’s a creative tension. What we like to say at Google is we want to be bold and responsible, and that’s exactly what we’re trying to do and live out and role-model. So the bold part is being brave and optimistic about the benefits, the amazing benefits, incredible benefits, AI can bring to the world and to help humanity with our biggest challenges, whether that’s disease or climate or sustainability. AI has a huge part to play in helping our scientists and medical experts solve those problems. And we’re working hard on all those areas. And AlphaFold, again, I’d point to as a poster child for that, for what we want to do there. So that’s the bold part. And then, the responsible bit is to make sure we do that as thoughtfully as possible, with as much foresight as possible ahead of time.

Try and anticipate ahead of time what the issues might be if one were successful, not in hindsight, as perhaps happened with social media, for example, which is this incredible growth story. Obviously, it’s done a lot of good in the world, but then it turns out 15 years later we realize there are some unintended consequences as well to those types of systems. And I would like to chart a different path with AI. It’s such a profound and important and powerful technology; I think we have to do that with something as potentially transformative as AI. And it doesn’t mean no mistakes will be made. It’s very new, and with anything new, you can’t predict everything ahead of time, but I think we can try and do the best job we can.

And that’s what signing that letter was for: just to point out that, while I don’t think it’s likely and I don’t know the timescales, in the limit what these systems can do and might be able to do as we get closer to AGI is something we should consider too. We are nowhere near that now. So this is not a question of today’s technologies or even the next few years’, but at some point, and given the technology’s accelerating very fast, we will need to think about those questions, and we don’t want to be thinking about them on the eve of them happening. We need to use the time now, the next five, 10, whatever it is, years, to do the research and to do the analysis and to engage with various stakeholders, civil society, academia, government, to figure out, as this stuff is developing very rapidly, what the best way is of making sure we maximize the benefits and minimize any risks.

And that includes mostly, at this stage, doing more research into these areas, like coming up with better evaluations and benchmarks to rigorously test the capabilities of these frontier systems.

You talked about tool usage for AI models, you ask an LLM to do something, it goes off and asks AlphaFold to fold the protein for you. Combining systems like that, integrating systems like that, historically that’s where emergent behaviors appear, things you couldn’t have predicted start happening. Are you worried about that? There’s not a rigorous way to test that.

Right, exactly. I think that’s exactly the sort of thing we should be researching and thinking about ahead of time: as tool use becomes more sophisticated and you can combine different AI systems together in different ways, there is scope for emergent behavior. Of course, that emergent behavior may be very desirable and be extremely useful, but it could also potentially be harmful in the wrong hands and in the hands of bad actors, whether that’s individuals or even nation-states…

There’s the concept of model collapse: that we’re going to train LLMs on LLM-generated data, and that’s going to go in a circle. When you talk about cross-referencing facts, I think about Google — Google going out on the web and trying to cross-reference a bunch of stuff, but maybe all that stuff has been generated by LLMs that were hallucinating in 2023. How do you guard against that?

We are working on some pretty cool solutions to that. I think the answer, and this is an answer to deepfakes as well, is to do some encrypted watermarking, sophisticated watermarking, that can’t be removed easily or at all, and it’s probably built into the generative models themselves, so it’s part of the generative process. We hope to release that, and maybe provide it to third parties as well as a generic solution. But I think the industry and the field need those types of solutions, where we can mark generated media, be that images, audio, perhaps even text, with some Kitemark that says to the user and future AI systems that these were AI-generated. And I think that’s a very, very pressing need right now for near-term issues with AI like deepfakes and disinformation and so on. But I actually think a solution is on the horizon now.
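To see the watermarking idea at toy scale, here is a sketch that hides a bit pattern in the least significant bits of an image’s pixels. This is only an illustration of embedding an invisible mark; a trivial LSB scheme like this is easy to strip, unlike the robust, built-into-the-model watermarking Hassabis describes:

```python
import numpy as np

# Toy watermark: hide a bit pattern in pixels' least significant bits.
# Illustration only; real generative-media watermarks are far more robust.

def embed(image, bits):
    flat = image.flatten()               # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def extract(image, n):
    return image.flatten()[:n] & 1       # read the LSBs back

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

watermarked = embed(img, mark)
assert (extract(watermarked, mark.size) == mark).all()
print("mark recovered:", extract(watermarked, mark.size))
```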

2. A stock market gift right under your nose – Chin Hui Leong

In my book, the best returns come from owning stocks for the long term. For example, I have owned shares of Apple, Amazon, Booking Holdings, and Intuitive Surgical since 2010. On average, these shares have grown by almost 17 times their original value, turning each dollar invested into nearly US$17 over the past 13 years. The key ingredient here is time. But the trick is knowing what shares to hold.
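For context, the article’s figures (roughly 17 times over 13 years) imply a compound annual growth rate of about 24 per cent, using the standard formula:

$$\text{CAGR} = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{1/n} - 1 = 17^{1/13} - 1 \approx 0.244 \approx 24\% \text{ per year}$$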

Ideally, the business behind the stock should exhibit the ability to grow in both good times and bad. When businesses are able to deliver huge increases in earnings over time, your odds of a good outcome increase. Here is your big hint: if companies can perform during a tough economy, it stands to reason that they will do as well or better when economic conditions improve. And if they outperform, it is a great recipe for long-term investment returns…

Booking Holdings, which owns popular travel sites such as Booking.com and Agoda, reported revenue and profit growth of over 65 per cent and nearly 149 per cent, respectively, between 2007 and 2009, at the worst of the GFC. Post-GFC, the company outperformed. From 2009 to today, Booking Holdings’ revenue and net profit soared by almost eight-fold and nine-fold, respectively. The shares I bought are up by more than 900 per cent, closely mirroring its profit increase and demonstrating that stock returns followed growth over 13 years.

Likewise, Apple’s iPhone was criticised for being too expensive back in 2007. Yet its sales from 2007 to 2009 (the GFC period) show that the smartphone is far from a discretionary purchase. In fact, the iPhone drove Apple’s revenue and earnings per share up 52 per cent and 60 per cent, respectively, during this tumultuous period. Today, revenue is more than 10 times the 2009 level, and EPS is over 26 times higher. The shares, which I have owned since 2010, are up 21 times, another marker that returns follow actual growth…

… A key reason why I chose this quartet of stocks in 2010 is their strong performance during the difficult GFC period. Today, you have similar conditions. Last year, business growth stalled due to issues ranging from unfavourable exchange rates to supply chain disruptions and rising interest rates. But behind these troubles, you are being gifted real-world data on a select group of businesses that thrived despite the circumstances…

…Said another way, you do not have to guess which companies will do well in bad times; you can sieve through the available data and see for yourself. At the end of this process, you should have a list of potential stocks to buy. This list, I submit, should comprise a superior set of companies with which to start your research. Instead of looking for a needle in a haystack, you will be able to dramatically narrow down your search right off the bat. As far as gifts from the stock market go, that is hard to beat.

3. An Interview with Marc Andreessen about AI and How You Change the World – Ben Thompson and Marc Andreessen

I did want to ask one quick question about that article, Software is Eating the World. The focus of that seemed to be that we’re not in a bubble, which obviously in 2011 turned out to be very true. I wrote an article in 2015 saying we’re not in a bubble. That also turned out to be very true. By 2021, 2022, okay, maybe, but you missed a lot of upside in the meantime, to say the least!

However, there’s one bit in that article where you talk about Borders giving Amazon its e-commerce business, and then you talk about how Amazon is actually a software company. That was certainly true at the time, but I think you can make the case — and I have — that Amazon.com in particular is increasingly a logistics company that is very much rooted in the real world, with a moat that costs billions of dollars to build, a real-world moat you can’t really compete with: they can compete anyone out of business in the long run by dropping prices and covering their marginal costs. Now, that doesn’t defeat your point, since all of that is enabled by software and their dominant position came from software, but do you think there is a bit where a physical moat still means more, or is Amazon just an exception to every rule?

MA: You can flip that on its head, and you can basically observe that the legacy car companies basically make that same argument that you’re making as to why they’ll inevitably crush Tesla. Car company CEOs have made this argument to me directly for many years, which is, “Oh, you Californians, it’s nice and cute that you’re doing all this stuff with software, but you don’t understand the car industry is about the real world. It’s about atoms and it’s about steel and it’s about glass and rubber and it’s about cars that have to last for 200,000 miles and have to function in the snow.” They usually point out, “You guys test your electric self-driving cars in the California weather, wait till you have a car on the road in Detroit. It’s just a matter of time before you software people come to the realization that you’re describing for Amazon, which is this is a real world business and the software is nice, but it’s just a part of it and this real world stuff is what really matters.”

There’s some truth to that. Look, the global auto industry in totality still sells a lot more cars than Tesla. Absolutely everything you’re saying about Amazon logistics is correct, but I would still maintain that over the long run the opposite is still true, and I would describe it as follows. Amazon, notwithstanding all of their logistics expertise and so on, they’re still the best software company. Apple, notwithstanding all of their manufacturing prowess and industrial design and all the rest of it, they’re still the best, or one of the two best, mobile software companies. Then of course Tesla, we’re sitting here today, and Tesla I think today is still worth more than the rest of the global auto industry combined in terms of market cap, and I think the broad public investor base is looking forward and saying, “Okay, the best software company is in fact going to win.” Then of course you drive the different cars and you’re like, “Okay, obviously the Tesla is just a fundamentally different experience as a consequence of quite literally being now a self-driving car run by software.”

I would still hold to the strong form of what I said in that essay, which is, in the long run, the best software companies win. And then it’s just really hard. Part of the problem is, it’s really hard to compete against great software with mediocre software, because there comes a time when it really matters and the fundamental form and shape of the thing that you’re dealing with changes. You know this: are you going to use the video recorder app on your smartphone, which is software, or are you going to use an old-fashioned camcorder that in theory comes with a 600-page instruction manual and has 50 buttons on it? At some point the software wins, and I would still maintain that that is what will happen in many markets…

What is the case for AI as you see it?

MA: Well, this is part of why I know there’s hysterical panic going on, because basically the people who are freaking out about AI never even bothered to stop and basically try to make the positive case, and just immediately assumed that everything is going to be negative.

The positive case on AI is very straightforward, which is, number one, AI is just a technical development. It has the potential to grow the economy and do all the things that technology does to improve the world, but very specifically, the thing about AI is that it is intelligence. The thing about intelligence, and we know this from the history of humanity, is that intelligence is a lever on the rest of the world, a very fundamental way to make a lot of things better at the same time.

We know that because, in human affairs, across thousands of studies over a hundred years, increases in human intelligence make basically all life outcomes better for people. So people who are smarter are able to function better in life: they’re able to have higher educational attainment, they’re able to have better career success, they have better physical health. By the way, they’re also more able to deal with conflict, they’re less prone to violence, they’re actually less bigoted, they also have more successful children, those children go on to become more successful, those children are healthier. So intelligence is basically this universal mechanism to be able to deal with the complex world, to be able to assimilate information, and then be able to solve problems.

Up until now, our ability as human beings to engage in the world and apply intelligence to solve problems has been, of course, limited to the faculties that we have, with these kinds of partial augmentations, like in the form of calculating machines. But fundamentally, we’ve been trying to work through issues with our own kind of inherent intelligence. AI brings with it the very big opportunity, which I think is already starting to play out, to basically say, “Okay, now we can have human intelligence compounded, augmented with machine intelligence.” Then effectively, we can do a forklift upgrade and effectively make everybody smarter.

If I’m right about that and that’s how this is going to play out, then this is the most important technological advance with the most positive benefits, basically, of anything we’ve done probably since, I don’t know, something like fire, this could be the really big one…

But if it’s so smart and so capable, then why isn’t it different this time? Why should it be dismissed as another sort of hysterical reaction to say that there’s this entity coming along? I mean, back in the day, maybe the chimps had an argument about, “Look, it’s okay if these humans evolve and they’re smarter than us”. Now they’re stuck in zoos or whatever it might be. I mean, why would not a similar case be made for AI?

MA: Well, because it’s not another animal, and it’s not another form of human being; it’s a machine. This is what’s remarkable about it: it’s machine intelligence. The significance of that, basically, is that in your chimp analogy, or basically human beings reacting to other human beings, or over time in the past when two different groups of humans would interact and then declare war on each other, what you were dealing with in each case was evolved living species.

That evolved part there is really important because what is the mechanism by which evolution happens, right? It’s conflict. So survival of the fittest, natural selection: the whole point of evolution is to kind of bake off different organisms against each other, originally one-cell organisms, and then two-cell organisms, and then ultimately animals, and then ultimately people. The way that evolution happens is basically a big fight and then, at least in theory, the stronger of the organisms survives.

At a very deep genetic level, all of us are wired for combat. We’re wired for conflict; we’re wired for, let’s say, if not a high level of physical violence, then at least a high level of verbal violence and social and cultural conflict. But machine intelligence is not evolved. The term you might apply is intelligent design, right?

(laughing) Took me a second on that one.

MA: You remember that from your childhood? As do I. Machine intelligence is built and it’s built by human beings, it’s built to be a tool, it’s built the way that we build tools, it’s built in the form of code, it’s built in the form of math, it’s built in the form of software that runs on chips. In that respect, it’s a software application like any other. So it doesn’t have the four billion years of conflict driven evolution behind it, it has what we design into it.

That’s where I part ways from, again, the doomers, where from my perspective, the doomers kind of impute that it’s going to behave as if it had come up through four billion years of violent evolution when it hasn’t, like we have built it. Now, it can be used to do bad things and we can talk about that. But it, itself, does not have inherent in it the drive for kind of conquest and domination that living beings do.

What about the accidental bad things, the so-called paperclip problem?

MA: Yeah, so the paperclip problem is a very interesting one because it contains what I think is sort of a logical fallacy right at the core of this whole argument, which is that for the paperclip argument to work, you need what the doomers call orthogonality.

So for the paperclip argument to work, you have to believe two things at the same time. You have to believe that you have a super intelligent AI that is so intelligent, and creative, and flexible, and devious, a genius-level, super-genius-level conceptual thinker, that it’s able to basically evade all controls that you would ever want to put on it. It’s able to circumvent all security measures, it’s able to build itself its own energy sources, it’s able to manufacture itself its own chips, it’s able to hide itself from attack, it’s able to manipulate human beings into doing what it wants to do; it has all of these superpowers. Whenever you challenge the doomers on the paperclip thing, they always come up with a reason why the super intelligent AI is going to be so smart that it’s going to be able to circumvent any limitations you put on it.

But you also have to believe that it’s so stupid that all it wants to do is make paperclips, right? There’s just a massive gap there, because if it’s smart enough to turn the entire world, including atoms and the human body into paperclips, then it’s not going to be so stupid as to decide that’s the only thing that matters in all of existence. So this is what they call the orthogonality argument, because the sleight of hand they try to do is they try to say, well, it’s going to be super genius in these certain ways, but it’s going to be just totally dumb in this other way. That those are orthogonal concepts somehow.

Is it fair to say that yours is an orthogonal argument though? Where it’s going to be super intelligent, even more intelligent than humans in one way, but it’s not going to have any will or drive because it hasn’t evolved to have it. Could this be an orthogonality face-off in some regards?

MA: Well, I would just say I think their orthogonality theory is a little bit like the theory of false consciousness and Marxism. It’s just like you have to believe that this thing is not going to be operating according to any of the ways that you would expect normal people or things to behave.

Let me give you another thing. So a sort of thing they’ll say, again, that’s part of orthogonality, is they’ll say, “Well, it won’t be doing moral reasoning, it’ll be executing its plan for world conquest, but it will be incapable of doing moral reasoning because it’ll just have the simple-minded goal.” Well, you can actually disprove that today, by going to any LLM of any level of sophistication and doing moral reasoning with it. Sitting here, right now, today, you can have moral arguments with GPT, and with Bard, and with Bing, and with every other LLM out there. Actually, they are really good at moral reasoning, they are very good at arguing through different moral scenarios, they’re very good at actually having this exact discussion that we’re having…

...Again, just cards on the table, I mostly agree with you, so I’m putting up a little bit of a defense here, but I recognize it’s probably not the best one in the world. But I see there being a few candidates for being skeptical of the AI doomers.

First, you’ve kind of really jumped on the fact that you think the existential risk doesn’t exist. Is that the primary driver of your skepticism, and some would say dismissal, of this case? Or is it also things like another possibility, that AI is inevitable, it’s going to happen regardless, so let’s just go forward? Or is there sort of a third one, which is that any reasonable approach, even if there were risks — look at COVID — is not doable? We can’t actually manage to find a middle path that is reasonable and adjust accordingly; it’s either one way or the other. Given that, and your general skepticism, that’s the way it has to go.

Are all three of those working in your argument here, or is it really just you don’t buy it at all?

MA: So I think the underlying thing is actually a little bit more subtle, which is: I’m an engineer. So, for better or for worse, I was trained as an engineer. Then I was also trained in science in the way that engineers are trained in science, so I never worked as a scientist, but I was trained in the scientific method as engineers are. I take engineering very seriously, and I take science very seriously, and I take the scientific method very seriously. So when it comes time to engage in questions about what a technology is going to do, I start by going straight to the engineering, which is like, “Okay, what is it that we’re dealing with here?”

The thing is, what we’re dealing with here is something that you’re completely capable of understanding. What it is, is math and code. You can buy many textbooks that will explain the math and code to you; they’re all being updated right now to incorporate the transformer algorithm, and there are books already out on the market. You can download many how-to guides on how to do this stuff. It’s lots of matrix multiplication, there’s lots of linear algebra involved, there are various algorithms; these are machines, and you can understand it as a machine.
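To make “lots of matrix multiplication” concrete, here is a toy single-head version of the attention operation at the core of the transformer algorithm he mentions, with random stand-in weights (an illustration only, not any production implementation):

```python
import numpy as np

# Toy single-head self-attention: the "matrix multiplication and linear
# algebra" at the heart of a transformer. Random weights, illustration only.

rng = np.random.default_rng(0)
seq_len, d = 4, 8                       # 4 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d))       # stand-in token embeddings

W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v     # three linear projections

scores = Q @ K.T / np.sqrt(d)           # scaled dot-product similarities
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row

output = weights @ V                    # each token mixes the others' values
print(output.shape)                     # (4, 8)
```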

Then there are these flights of fancy that people launch off of, where they make extrapolations, in some cases literally billions of years into the future. I read this book Superintelligence, which is the one that is kind of the catechism urtext for the AI doomers. [Nick Bostrom] goes from these very general descriptions of possible forms of future intelligence to extrapolations of literally what’s going to happen billions of years in the future. These seem like fine thought experiments, and this seems like a fine way to write science fiction, but I don’t see anything in it resembling engineering.

Then also the other thing really striking is there’s an absence of science. So what do we know about science? We know that science involves at its core the proposing of a hypothesis and then a way to test the hypothesis such that you can falsify it if it’s not true. You’ll notice that in all these books and all these materials, as far as I’ve been able to find, there are no testable hypotheses, there are no falsifiable hypotheses, there are not even metrics to be able to evaluate how you’re doing against your hypothesis. You just have basically these incredible extrapolations.

So I read this stuff and I’m like, “Okay, fine, this isn’t engineering.” They seem very uninterested in the details of how any of this stuff works. This isn’t science; there are no hypotheses, so it reads to me as pure speculation. Speculation is fun, but we should not make decisions in the real world just based on speculation.

What’s the testable hypothesis that supports your position? What would you put forward that, if something were shown to be true, then that would change your view of the matter?

MA: Yeah, I mean, we have these systems today. Are they seizing control of their computers and declaring themselves emperor of earth?

I mean, I did have quite the encounter with Sydney.

MA: (laughing) How’s it going? Yeah, well, there you go. Right? Well, so look, there is a meme I really like on this. I’ll commit the sin of trying to explain a meme, but it’s the eldritch horror from outer space.

I put a version of that in my article about Sydney.

MA: The kicker is that the evil shoggoth, the AI-doomsaying thing, is mystified why the human being isn’t afraid of it. Then the human being’s response is, “Write this email”.

So again, this is the thing — what do we do? What do we do when we’re engineers and scientists? We build the thing, and we test the thing, and we figure out ways to test the thing, we figure out do we like how the thing is working or not? We figure out along the way what are the risks, then we figure out the containment methods for the risk.

This is what we’ve done with every technology in human history. The cumulative effect of this is the world we live in today, which is materially an incredibly advanced world as compared to the world that our ancestors lived in.

Had we applied the precautionary principle or any of the current trendy epistemic methods to evaluating the introduction of prior technologies ranging from fire and the wheel all the way to gunpowder and microchips, we would not be living in the world we’re living in today. We’d be living in a much worse world, and child mortality would be through the roof and we’d all be working these god awful physical labor jobs and we’d be like, “Wow, is this the best we can do?” I think our species has actually an excellent track record at dealing with these things, and I think we should do what we do, we should build these things and then we should figure out the pros and cons…

Was crypto a mistake, and I mean both in terms of the technology, but also in terms of how closely a16z became tied to it reputationally? Is there a bit where you wish you had some of those reputation points right now for your AI arguments, where maybe that’s more important to human flourishing in the long run?

MA: Yeah, I don’t think so; that idea that there’s some trade-off there, I don’t think it works that way. This is a little bit like the topic of political capital in the political system. If you talk to politicians, there’s always this question of political capital, which is: do you gain political capital by basically conceding on things, or do you gain political capital by actually exercising political power? Right? Are you better off basically conserving political power, or actually just putting the throttle forward and being as forceful as you can?

I mean, look, I believe whatever political power we have, whatever influence we have is because we’re a hundred percent on the side of innovation. We’re a hundred percent on the side of startups, we’re a hundred percent on the side of entrepreneurs who are building new things. We take a very broad brush approach to that. We back entrepreneurs in many categories of technology, and we’re just a hundred percent on their side.

Then really critically, we’re a hundred percent on their side despite the waxing and waning of the moon. My experience with all of these technologies, including the Internet and computers and social media and AI and every other thing we can talk about, biotech, is that they all go through these waves. They all go through periods in which everybody is super excited and extrapolates everything to the moon, and they all go through periods where everybody’s super depressed and wants to write everything off. AI itself went through decades of recurring booms and winters. AI went through a big boom in the 1980s, and then crashed super hard in the late eighties, and was almost completely discredited by the time I got to college in ’89. There had been a really big surge of enthusiasm before that.

My view is, “We’re just going to put ourselves firmly on the side of the new ideas, firmly on the side of the innovations. We’re going to stick with them through the cycles.” If there’s a crypto winter, if there’s an AI winter, if there’s a biotech winter, whatever, it doesn’t really matter. By the way, it also maps to the fundamentals of how we think about what we do, which is that we are trying to back the entrepreneurs with the biggest ideas, building the biggest things; to the extent that we succeed in doing that, building big things takes a long time.

4. The private credit ‘golden moment’ – Robin Wigglesworth

By ‘private credit’ or ‘private debt’, we’re mostly (but not only) talking about direct loans between an investment fund and a corporate borrower, usually a small or mid-sized company.

These sometimes struggle to get traditional banks interested in their custom — for big banks it’s more attractive to lend to big blue-chip companies to which you can also sell M&A advice, derivatives, pension plan management, etc — but remain too small to tap the bond market, where you realistically need to raise at least $200mn in one gulp, and ideally over $500mn.

Private credit funds therefore often depict themselves as helping bread-and-butter ma-and-pa small businesses that mean ol’ banks are shunning. In reality, most of the lending is done to private equity-owned businesses, or as part of a distressed debt play. So it can arguably be better seen as a rival (or complement) to the leveraged loan and junk bond markets…

…As you can see from the fundraising bonanza, private credit has morphed from a cottage business mostly focused on distressed debt into a massive business over the past decade. And after starting out overwhelmingly American it is beginning to grow a little in Europe and Asia as well.

Morgan Stanley estimates the overall assets under management at about $1.5tn (of which about $500bn was money raised but not yet lent, aka ‘dry powder’ as the industry loves to call it).

That makes it bigger than both the US high yield and leveraged loan markets for the first time, says Cyprys…

…Why has it been growing? Well, for investors it is the promise of both smoother and stronger returns, in an era where even the high-yield bond market for a long time made a mockery of its moniker. Remember when some European junk-rated companies could borrow at negative rates? Happy days.

Direct loans are also more attractive when interest rates are rising, because they are floating rate, as opposed to the fixed rates that public market bonds pay. At the same time, since these are private, (mostly) untraded assets, their value doesn’t move around as much as leveraged loans or traditional bonds…
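In symbols (our notation, for illustration): a floating-rate loan’s coupon resets with a reference rate plus a fixed spread, while a conventional bond’s coupon stays put,

$$\text{coupon}_{\text{floating}}(t) = r_{\text{ref}}(t) + s, \qquad \text{coupon}_{\text{fixed}}(t) = c,$$

so when the reference rate rises, the lender’s income rises with it. The flip side, as the next paragraphs note, is that the borrower’s interest bill rises too.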

…In many respects the growth of private credit is a healthy development. It is arguably far better that an investment fund with long-term locked-up capital takes on the associated credit risk than a traditional deposit-taking commercial bank.

But as we wrote earlier this year, there are a lot of reasons to be wary of the current private credit boom. Things have basically gone a bit nuts as money has gushed in.

Using data on business development companies — publicly listed direct lenders, often managed by one of the private capital industry’s giants — Goldman has put some meat on one of our skeleton arguments: floating rate debt is great for investors, but only up to a point.

At some point the rising cost of the debt will crush the company, and we may be approaching that point.

UBS predicts that the default rate of private credit borrowers will spike to a peak of 9-10 per cent early next year as a result, before falling back to about 5-6 per cent as the Federal Reserve is forced into cutting rates.

Default rates like that might seem manageable. It’s hardly Creditpocalypse Now. But the problem is that, as Jeff Diehl and Bill Sacher of Adams Street — a US private capital firm — wrote in a recent report, loss avoidance is the name of the game in private credit:

Benign economic and credit conditions over the last decade have allowed many managers to avoid losses, leading to a narrow return dispersion . . . The benign climate has changed with higher rates, wider credit spreads and slowing revenue growth, all of which is likely to put pressure on many managers’ portfolios…

…And to be fair, as our colleague Mark Vandevelde wrote in a fab recent column, the broader danger isn’t really that there’s been silly lending going on. These are investors and asset managers that (mostly) know what they’re doing, in an area people know is risky. People will lose money, the world will keep turning etc.

The issue, as Mark writes, is that private credit firms are now big and extensive enough to plausibly become shock conduits between investors, borrowers, and the broader economy:

In short, the biggest risks inherent in the rise of private credit are the ones that critics most easily miss. They arise, not from the misbehaviour of anyone on Wall Street, but from replacing parts of an imperfect banking system with a novel mechanism whose inner workings we are only just discovering.

This may seem like vague hand-waving by journalists, but the reality is that the complex interlinkage of private credit, private equity and broader debt markets is opaque. As the Federal Reserve noted in its latest financial stability report:

Overall, the financial stability vulnerabilities posed by private credit funds appear limited. Most private credit funds use little leverage and have low redemption risks, making it unlikely that these funds would amplify market stress through asset sales. However, a deterioration in credit quality and investor risk appetite could limit the capacity of private credit funds to provide new financing to firms that rely on private credit. Moreover, despite new insights from Form PF, visibility into the private credit space remains limited. Comprehensive data are lacking on the forms and terms of the financing extended by private credit funds or on the characteristics of their borrowers and the default risk in private credit portfolios.

5. Debt: The First 5,000 Years – Johan Lunau

Economists claim that we started off with barter, moved to coinage, and only then discovered the infinite wonders of credit. Each iteration in this supposedly linear evolution is presented as a logical solution to a common problem.

  1. Whilst barter, the original system, did allow for the exchange of goods and services, it required a double coincidence of wants: I need to have something you want, and you need to have something I want. If there’s no match, there’s no exchange.
  2. It therefore made sense to store things that everybody wanted, making transactions much more flexible and frequent (commodities like dried cod, salt, sugar, etc.). But certain issues remained… what if the goods were perishable? And how could transactions far from home be made practical?
  3. Enter precious metals, which are durable, portable, and divisible into smaller units. As soon as central authorities began to stamp these metals, differences in their characteristics (weight, purity) were extinguished, and they became the official currencies of specific national economies or trade regions.
  4. Banks and credit followed thereafter, as the final step.

However, Graeber’s main argument is that the above timeline, as intuitive as it is, is wrong. Specifically, he posits that we actually started off with credit, then transitioned to coinage, and that societies resort to barter only when an economy or central authority collapses (as with the fall of the Soviet Union). Moreover, he writes that this progression was chaotic rather than linear; there were constant rise-and-fall cycles of credit and coinage. It’s obvious that this account is much, much harder to teach at universities, lacking the elegant simplicity of the version that is commonly presented in textbooks.

In fact, to the frustration of economists, it appears there is no historical evidence for a barter system ever having existed at all, except among obscure peoples like the Nambikwara of Brazil and the Gunwinggu of Western Arnhem Land in Australia. And even then, it takes place between strangers of different tribes in what to us are bizarre ceremonies.

However, there is evidence for widespread debt transactions as far back as 3,500 BC in Mesopotamia, in what is now modern Iraq. Merchants would use credit to trade, and people would run up tabs at their local alehouses. We know this because Sumerians would often record financial dealings in cuneiform on clay tablets called bullae (successful decipherment of the script kicked off in the 1800s), which were dug up by archaeologists.

And whilst Sumeria did have a currency (the silver shekel), it was almost never used in transactions. Instead, it was a simple unit of account for bureaucrats. 1 shekel was divided into 60 minas, each of which was equal to 1 portion of barley, on the principle that temple labourers worked 30 days a month and received 2 rations of barley each day. Though debts were often recorded in shekels, they could be paid off in any other form, such as barley, livestock, and furniture. Since Sumeria is the earliest society about which we know anything, this discovery alone should have resulted in a revision of the history of money. It obviously didn’t…
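The unit arithmetic implicit in that principle, rendering the excerpt’s figures:

$$2\ \text{rations/day} \times 30\ \text{days} = 60\ \text{portions} = 60\ \text{minas} = 1\ \text{shekel}$$

That is, one shekel corresponded to a temple labourer’s monthly barley wage.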

…As stated, Graeber wrote that history is marked by flip-flop cycles of credit and coinage. But the question is, why? Likely because of cycles of war and peace.

“While credit systems tend to dominate in periods of relative social peace, or across networks of trust (…), in periods characterised by widespread war and plunder, they tend to be replaced by precious metal”.

The reason for this is twofold. Unlike credit, gold and silver can be stolen through plunder, and in transactions they demand no trust, except in the characteristics of the precious metal itself. And soldiers, who are constantly travelling, with a fair probability of death, are the definition of an extremely bad credit risk. Who would lend to them? Armies typically created entire marketplaces around themselves.

“For much of human history, then, an ingot of gold or silver, stamped or not, has served the same role as the contemporary drug dealer’s suitcase of unmarked bills: an object without a history, valuable because one knows it will be accepted in exchange for other goods just about anywhere, no questions asked.”


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, Intuitive Surgical, Microsoft, and Tesla. Holdings are subject to change at any time.