What We’re Reading (Week Ending 31 March 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We regularly share a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 31 March 2024:

1. Gold-Medalist Coders Build an AI That Can Do Their Job for Them – Ashlee Vance

Take the case of Cognition AI Inc.

You almost certainly have not heard of this startup, in part because it’s been trying to keep itself secret and in part because it didn’t even officially exist as a corporation until two months ago. And yet this very, very young company, whose 10-person staff has been splitting time between Airbnbs in Silicon Valley and home offices in New York, has raised $21 million from Peter Thiel’s venture capital firm Founders Fund and other brand-name investors, including former Twitter executive Elad Gil. They’re betting on Cognition AI’s team and its main invention, which is called Devin.

Devin is a software development assistant in the vein of Copilot, which was built by GitHub, Microsoft and OpenAI, but, like, a next-level software development assistant. Instead of just offering coding suggestions and autocompleting some tasks, Devin can take on and finish an entire software project on its own. To put it to work, you give it a job—“Create a website that maps all the Italian restaurants in Sydney,” say—and the software performs a search to find the restaurants, gets their addresses and contact information, then builds and publishes a site displaying the information. As it works, Devin shows all the tasks it’s performing and finds and fixes bugs on its own as it tests the code being written.
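
To make that workflow concrete, here’s a minimal sketch of the plan-act-observe loop that agents of this kind are generally understood to run. It’s our toy illustration in Python; the article doesn’t describe Devin’s internals, and every class and function name below is hypothetical.

```python
# Toy sketch of a plan-act-observe agent loop, the general pattern behind
# coding agents like Devin. This is our illustration only; nothing below
# reflects Cognition AI's actual implementation.
from dataclasses import dataclass, field


@dataclass
class Step:
    action: str


@dataclass
class ToyAgent:
    goal: str
    log: list = field(default_factory=list)

    def plan(self) -> list:
        # A real agent would ask an LLM to draft this plan from the goal.
        return [Step("search for restaurant data"), Step("scrape addresses"),
                Step("build and test the site"), Step("publish")]

    def act(self, step: Step) -> bool:
        # A real agent would run shell/browser/editor commands and inspect
        # the output; here we just record the step and report success.
        self.log.append(step.action)
        return True

    def run(self) -> list:
        for step in self.plan():
            if not self.act(step):
                # Self-correction hook: a real agent would revise the plan
                # and retry when a step fails (e.g. a test doesn't pass).
                self.log.append(f"retrying: {step.action}")
        return self.log


print(ToyAgent("Map all the Italian restaurants in Sydney").run())
```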

The founders of Cognition AI are Scott Wu, its chief executive officer; Steven Hao, the chief technology officer; and Walden Yan, the chief product officer…

…Wu, 27, is the brother of Neal Wu, who also works at Cognition AI. These two men are world-renowned for their coding prowess: The Wu brothers have been competing in, and often winning, international coding competitions since they were teenagers…

…Sport-coding—yes, it’s a real thing—requires people to solve puzzles and program with speed and accuracy. Along the way, it trains contestants to approach problems in novel ways. Cognition AI is full of sport-coders. Its staff has won a total of 10 gold medals at the top international competition, and Scott Wu says this background gives his startup an edge in the AI wars…

…One of the big claims Cognition AI is making with Devin is that the company has hit on a breakthrough in a computer’s ability to reason. Reasoning in AI-speak means that a system can go beyond predicting the next word in a sentence or the next snippet in a line of code, toward something more akin to thinking and rationalizing its way around problems. The argument in AI Land is that reasoning is the next big thing that will advance the industry, and lots of startups are making various boasts about their ability to do this type of work.

Devin does appear to be well ahead of the other coding assistants in many respects. You can give it jobs to do with natural language commands, and it will set off and accomplish them. As Devin works, it tells you about its plan and then displays the commands and code it’s using. If something doesn’t look quite right, you can give the AI a prompt to go fix the issue, and Devin will incorporate the feedback midstream. Most current AI systems have trouble staying coherent and on task during these types of long jobs, but Devin keeps going through hundreds and even thousands of tasks without going off track.

In my tests with the software, Devin could build a website from scratch in 5 to 10 minutes, and it managed to re-create a web-based version of Pong in about the same amount of time. I had to prompt it a couple of times to improve the physics of the ball movement in the game and to make some cosmetic changes on its websites, all of which Devin accomplished just fine and with a polite attitude…

…Exactly how Cognition AI made this breakthrough, and in so short a time, is something of a mystery, at least to outsiders. Wu declines to say much about the technology’s underpinnings other than that his team found unique ways to combine large language models (LLMs) such as OpenAI’s GPT-4 with reinforcement learning techniques. “It’s obviously something that people in this space have thought about for a long time,” he says. “It’s very dependent on the models and the approach and getting things to align just right.”

2. Geopolitics in the C-Suite – Jami Miscik, Peter Orszag, and Theodore Bunzel

But even though national security and foreign policy occasionally intruded on corporate America during that time, until very recently, few executives concerned themselves with geopolitics. In the post–Cold War world, with globalization on the march, the idea that national interests might be at odds with open markets and expanding trade came to seem alien to American executives.

But the changes that have roiled the geopolitical landscape in recent years have made an impression in C-suites around the United States. In a recent poll of 500 institutional investors, geopolitics ranked as the top risk to the global economy and markets in 2024…

…As governments lean on economic restrictions and industrial policies to achieve geopolitical ends, corporations have increasingly become both the objects and instruments of foreign policy…

…The centrality of economic competition to today’s foreign policy problems represents a qualitative break from the past. During the Cold War, for example, the United States and the Soviet Union hardly interacted economically: trade between them peaked at a paltry $4.5 billion in 1979; in recent years, the United States and China have generally traded that much every week or two, adjusting for inflation. In the post–Cold War era, U.S. foreign policy was focused on opening markets and reducing international economic barriers rather than erecting them. Era-defining crises such as the 9/11 attacks did little to change the relationship between U.S. policymakers and American corporations; if anything, the “war on terror” further solidified the idea that foreign policy was primarily concerned with security and military issues, not economics.
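
The “every week or two” claim is easy to sanity-check with back-of-envelope arithmetic. In the sketch below, the ~4.2x inflation factor and the ~$690 billion of recent annual US-China goods trade are rough figures we’re supplying, not numbers from the article.

```python
# Back-of-envelope check of the US-Soviet vs. US-China trade comparison.
# The ~4.2x CPI factor (1979 -> today) and ~$690B of recent annual US-China
# goods trade are our rough assumptions, not figures from the article.
soviet_peak_1979 = 4.5e9            # peak US-Soviet trade, in 1979 dollars
cpi_factor = 4.2                    # approximate 1979 -> today inflation multiplier
us_china_annual = 690e9             # approximate recent US-China goods trade

adjusted = soviet_peak_1979 * cpi_factor      # ~$19B in today's dollars
weekly = us_china_annual / 52                 # ~$13B of US-China trade per week
print(f"Soviet peak, today's dollars: ${adjusted / 1e9:.0f}B")
print(f"US-China trade per week:      ${weekly / 1e9:.0f}B")
print(f"Weeks to match the Soviet peak: {adjusted / weekly:.1f}")  # ~1.4
```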

But in the background, global economic integration was transforming the playing field. In 1980, trade accounted for just 37 percent of global GDP. Today, that figure is 74 percent, and economies have become intertwined to a degree never seen in the twentieth century. Globalization is not new, of course; it has been a centuries-long process. What is new, however, is the emergence of great-power rivalry in a highly interconnected world. Military power still matters, but economic and technological competition have become the main battlefield of global politics. Under the so-called Washington consensus that dominated policymaking for decades, the question of where a semiconductor manufacturer would build its next factory or whether German auto companies would decide to throttle their investments in China would have seemed relatively unimportant to policymakers. Now, such questions are at the center of almost every major foreign policy debate.

Greater economic integration has also created a complex web of links between geopolitical rivals that policymakers now seek to leverage for strategic ends. This is especially true when it comes to financial and technological networks, where Washington holds a privileged position…

…But as great-power tensions have increased, so has the number of sectors caught in the fray of what Farrell and Newman call “weaponized interdependence.” Consider, for example, the way that G-7 countries have taken advantage of Russian dependence on shipping insurers based in the West, an industry that most foreign policymakers had probably never thought about before Russia’s 2022 invasion of Ukraine. To try to cap the price of Russian oil exports, the G-7 prevented these companies from insuring Russian crude oil cargoes unless they had been sold at a maximum of $60 per barrel.

Western powers are not the only ones playing this game. In 2010, after a Chinese fishing trawler and Japanese Coast Guard patrol boats collided in disputed waters, setting off a diplomatic row between Beijing and Tokyo, China banned exports to Japan of the rare-earth minerals that are critical components of batteries and electronics, thus raising costs and creating shortages for Japanese manufacturers of everything from hybrid cars to wind turbines…

…More recently, a number of American consulting firms have been caught in the middle of the complex U.S.-Saudi relationship, with Congress demanding details about their contracts with Saudi Arabia that Riyadh has forbidden them to provide.

All these dynamics are being turbocharged by an intensifying competition between the United States and China, the two countries with the largest and most globally intertwined economies. Both aim to dominate the twenty-first-century economy, which means gaining the upper hand in computing technologies, biotechnology, and clean energy. And the foreign policies of both countries are now driven by a shared desire to shape their economies in ways that reduce their vulnerability and increase their leverage. China calls this “self-reliance.” Washington calls it “de-risking.” For the United States, what it looks like in practice is expanded export controls on advanced semiconductors and manufacturing equipment, enhanced government screening of investments by U.S. companies in foreign markets, and major subsidies for industries such as electric vehicles and microchips, primarily through the Inflation Reduction Act and the CHIPS Act. In this brave new world, the secretary of commerce is as important to foreign policy as the secretaries of state and defense.

Washington is hardly alone in taking such steps. State-sponsored drives for greater self-reliance have taken hold in nearly every major economy, particularly after the supply-chain disruptions of the COVID-19 pandemic. The number of countries introducing or expanding investment screening, for example, jumped from three between 1995 and 2005 to 54 between 2020 and 2022. Meanwhile, a wave of industrial policies has increased trade barriers in an attempt to induce companies to reshore their supply chains. At the same time, the understanding of what matters to national security has also expanded, as countries seek to advance or protect everything from software and microchips to pharmaceuticals and foodstuffs.

Many of the complications of this new era are rooted in the difference between the way the public and private sectors view time horizons. Policymakers set bright lines with immediate operational implications—for example, suddenly forbidding companies from exporting or importing certain goods from certain countries. But companies need to make long-term investment decisions. Should a company set up another plant in China if there is market demand and doing so is currently allowed by law? Should a pharmaceutical company set up advanced R & D centers in mainland China or purchase a Chinese biotech firm, given the long-run trajectory of relations between Beijing and the West? Should a consumer electronics firm purchase Chinese-made chips if they are the most cost-efficient option? Answering these questions requires executives to forecast the outcomes of highly volatile political debates and policymaking choices over which they have little control. And yet whatever decisions they make have a significant effect on whether, for example, the United States can effectively “de-risk” its economic relationship with China.

The example of semiconductors is instructive. Washington is seeking to reshore semiconductor manufacturing, but the success of its flagship industrial policy, the CHIPS Act, depends only in part on how the Commerce Department distributes the legislation’s $39 billion in subsidies over the next five years. A much more important factor is whether the Taiwanese chip manufacturer TSMC will risk setting up facilities in the United States despite high costs and a relative scarcity of human capital, and whether Apple decides to buy slightly more expensive chips made by U.S. fabricators instead of less expensive ones produced in Asia. And the CHIPS Act is only one input in those decisions.

3. Get Smart: Chasing Nvidia? Don’t succumb to FOMO – Chin Hui Leong

Are you feeling left out because you missed Nvidia’s (NASDAQ: NVDA) massive stock rise?

Well, we have good news and bad news.

Let’s start with the bad news: that tightening in your chest you are feeling right now is the fear of missing out — better known by its initials, “FOMO”.

And ladies and gentlemen, FOMO is real.

It’s that sneaky emotion which spurs you to buy a stock based on a feeling rather than proper research…

…But hang on, amid the hype — there’s good news too.

If you recognise that you are feeling FOMO, then congratulations — you have just taken the first step in recognising what you have to deal with: your runaway emotions.

The next step is to keep your emotions in check.

On the other side of FOMO is its cousin FOJI — or the fear of joining in.

Like FOMO, FOJI is also a strong emotion.

That’s especially true for some investors who are bearing scars from 2022 when US stocks took a beating amid a punishing bear market.

These scars can give rise to another fear — FOJI — which is paralysing for investors.

The fear of looking stupid if you buy today only to watch the stock fall the very next day…

…Whether it is FOMO or FOJI, you won’t invest well if feelings dictate your actions.

Recognising the presence of both emotions is key…

…Beyond FOMO and FOJI, there’s JOMO or the joy of missing out.

Don’t feel down if you decide to give Nvidia a pass.

As Peter Lynch once said — you can’t kiss all the frogs to find out which will turn into a prince.

Unconvinced?

In Lynch’s book “One Up on Wall Street”, he wrote down the names of 65 stocks which returned at least 10 times their original price (he calls them 10-baggers).

Except that the fund that he ran owned NONE of them.

Before you start rolling your eyes, consider this point: Peter Lynch achieved a stunning 29% per year in annualized returns over 13 years, outpacing the benchmark S&P 500 index (INDEXSP: .INX) by more than two times…
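
Compounding the two rates shows just how wide that gap is. Lynch’s 29% a year is from the article; the roughly 15% a year we plug in for the S&P 500 over the same 1977–1990 stretch is our approximation.

```python
# Compounding Lynch's record vs. the index. The 29% figure is from the
# article; the ~15% a year for the S&P 500 over 1977-1990 is our rough
# assumption for illustration.
years = 13
lynch_multiple = 1.29 ** years      # ~27x the starting capital
index_multiple = 1.15 ** years      # ~6x under the assumed index return
print(f"Lynch: {lynch_multiple:.1f}x, S&P 500: {index_multiple:.1f}x")
```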

…By sharing the list of missed winners, Lynch had a salient point to make: you do not need to be in every 10-bagger to deliver enviable returns.

4. How the richest woman in the world—mocked as a ‘miser’ in the press—helped bail out New York City during the panic of 1907 – Will Daniel

Hetty Green is remembered as the “world’s greatest miser” and the “Witch of Wall Street,” but these days, Green would likely be seen as an eccentric investing icon. After all, while she became famous for her frugal nature and gruff exterior, Green pioneered value investing strategies that have made billionaires out of many of today’s leading investors. And when the chips were down, when people really needed help, the whaling heiress turned independent investor, business tycoon, and world’s wealthiest woman often used her fortune to save the day…

…Over a three-week period after the panic began on Oct. 22, 1907, the New York Stock Exchange plummeted nearly 50% from its 1906 peak. And a year later, in 1908, Gross National Product (GNP), a measure akin to today’s Gross Domestic Product (GDP), cratered 12%. The problems for the banking system were so severe during the Knickerbocker Crisis that they spurred the establishment of the Federal Reserve System…

…As the situation deteriorated, John Pierpont Morgan, the American financier who founded what is now JPMorgan Chase, was eventually forced to call together a group of Wall Street’s best and brightest at the Morgan Library to help decide how to prop up the ailing economy and stock market. Hetty Green was the only woman who was invited to attend that meeting during the height of the panic…

…“I saw this situation coming,” she said, noting that there were undeniable signs of stress. “Some of the solidest men of the Street came to me and wanted to unload all sorts of things, from palatial residences to automobiles.”

Green said that she then gave The New York Central Railroad company a “big loan” after they came knocking, and that made her “sit up and do some thinking.” She decided to begin gathering as much cash as possible, understanding that a panic could be on the way…

…Green described how men came to New York from all over the country to ask for loans during the panic of 1907. But despite being labeled a “miser” throughout her life, she didn’t take advantage of the situation.

“Those to whom I loaned money got it at 6%. I might just as easily have secured 40%,” she explained…

…Usury, or charging excessive interest for a loan, was against Green’s moral code, which was born of her Quaker roots…

…Green would go on to lend the government of New York City $1.1 million at the peak of the 1907 panic, which is equivalent to roughly $33 million in today’s dollars…

…“On more than one occasion, when New York was running low on money, she would lend money to the city,” explained Charles Slack, the author of Green’s biography, Hetty: The Genius and Madness of America’s First Female Tycoon. “And she always did so at reasonable rates. She didn’t gouge or hold the city over a barrel.”

5. Transcript for Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI | Lex Fridman Podcast #416 – Lex Fridman and Yann Lecun

Lex Fridman (00:50:40) I would love to sort of linger on your skepticism around autoregressive LLMs. So one way I would like to test that skepticism is everything you say makes a lot of sense, but if I apply everything you said today and in general to, I don’t know, 10 years ago, maybe a little bit less, no, let’s say three years ago, I wouldn’t be able to predict the success of LLMs. So does it make sense to you that autoregressive LLMs are able to be so damn good?

Yann LeCun (00:51:20) Yes.

Lex Fridman (00:51:21) Can you explain your intuition? Because if I were to take your wisdom and intuition at face value, I would say there’s no way autoregressive LLMs, one token at a time, would be able to do the kind of things they’re doing.

Yann LeCun (00:51:36) No, there’s one thing that autoregressive LLMs, or LLMs in general, not just the autoregressive ones but including the BERT-style bidirectional ones, are exploiting, and it’s self-supervised learning, and I’ve been a very, very strong advocate of self-supervised learning for many years. So those things are an incredibly impressive demonstration that self-supervised learning actually works. The idea didn’t start with BERT, but BERT was a really good demonstration of it.

(00:52:09) So the idea is that you take a piece of text, you corrupt it, and then you train some gigantic neural net to reconstruct the parts that are missing. That has produced an enormous amount of benefits. It allowed us to create systems that understand language, systems that can translate hundreds of languages in any direction, systems that are multilingual, so it’s a single system that can be trained to understand hundreds of languages and translate in any direction, and produce summaries and then answer questions and produce text.
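
To make the “corrupt it, then reconstruct it” recipe concrete, here’s a minimal PyTorch sketch of a BERT-style masked-token objective, with tiny made-up sizes. It’s our illustration, not anyone’s production code.

```python
# Toy sketch of the self-supervised objective LeCun describes: corrupt a
# piece of text (mask some tokens), then train a network to reconstruct
# the missing parts. Tiny made-up sizes; illustrative only.
import torch
import torch.nn as nn

vocab, dim, mask_id = 100, 32, 0
encoder = nn.Sequential(
    nn.Embedding(vocab, dim),
    nn.TransformerEncoder(
        nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True),
        num_layers=2))
to_vocab = nn.Linear(dim, vocab)

tokens = torch.randint(1, vocab, (8, 16))   # a batch of 8 sequences of 16 tokens
corrupted = tokens.clone()
mask = torch.rand(tokens.shape) < 0.15      # hide ~15% of the tokens
corrupted[mask] = mask_id

logits = to_vocab(encoder(corrupted))       # predict a word at every position
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])  # score masked slots only
loss.backward()                             # one self-supervised training step
print(float(loss))
```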

(00:52:51) And then there’s a special case of it, which is the autoregressive trick, where you constrain the system to not elaborate a representation of the text from looking at the entire text, but only to predict a word from the words that come before. And you do this by constraining the architecture of the network, and that’s what you can build an autoregressive LLM from.
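
The “constraining the architecture” LeCun mentions is typically a causal attention mask. Here’s a minimal sketch, with toy sizes of our choosing:

```python
# Toy sketch of the autoregressive constraint: a causal mask keeps each
# position from attending to later positions, so a word can only be
# predicted from the words before it. Illustrative sizes only.
import torch
import torch.nn as nn

seq_len = 6
# True entries mark the future positions each token is NOT allowed to see.
causal_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

attn = nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)
x = torch.randn(1, seq_len, 16)
out, _ = attn(x, x, x, attn_mask=causal_mask)   # future tokens are masked out
# Training then pairs each position's output with the next token as target,
# which is the "predict the next word" objective of GPT-style models.
print(out.shape)
```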

(00:53:15) So there was a surprise many years ago with what’s called decoder-only LLMs, so systems of this type that are just trying to produce words from the previous ones, and the fact that when you scale them up, they tend to really understand more about language. When you train them on lots of data, you make them really big. That was a surprise, and that surprise occurred quite a while back, with work from Google, Meta, OpenAI, et cetera, going back to the GPT kind of work, generative pre-trained transformers.

Lex Fridman (00:53:56) You mean like GPT-2? There’s a certain place where you start to realize scaling might actually keep giving us an emergent benefit.

Yann LeCun (00:54:06) Yeah, I mean there was work from various places, but if you want to place it in the GPT timeline, that would be around GPT-2, yeah.

Lex Fridman (00:54:19) Well, because you said it so charismatically and you said so many words, but self-supervised learning, yes. But again, the same intuition you’re applying says that autoregressive LLMs cannot have a deep understanding of the world. If we just apply that same intuition, does it make sense to you that they’re able to form enough of a representation of the world to be damn convincing, essentially passing the original Turing test with flying colors?

Yann LeCun (00:54:50) Well, we’re fooled by their fluency, right? We just assume that if a system is fluent in manipulating language, then it has all the characteristics of human intelligence, but that impression is false. We’re really fooled by it.

Lex Fridman (00:55:06) What do you think Alan Turing would say, without understanding anything, just hanging out with it?

Yann LeCun (00:55:11) Alan Turing would decide that the Turing test is a really bad test, okay? The AI community decided many years ago that the Turing test was a really bad test of intelligence.

Lex Fridman (00:55:22) What would Hans Moravec say about the large language models?

Yann LeCun (00:55:26) Hans Moravec would say that the Moravec paradox still applies. Okay, we can pass-

Lex Fridman (00:55:32) You don’t think he would be really impressed?

Yann LeCun (00:55:34) No, of course everybody would be impressed. But it’s not a question of being impressed or not, it’s the question of knowing the limits of what those systems can do. Again, they are impressive. They can do a lot of useful things. There’s a whole industry that is being built around them. They’re going to make progress, but there are a lot of things they cannot do, and we have to realize what they cannot do and then figure out how we get there. And I’m seeing this from basically 10 years of research on the idea of self-supervised learning, actually that’s going back more than 10 years, but the idea of self-supervised learning: basically capturing the internal structure of a set of inputs without training the system for any particular task, learning representations.

(00:56:26) The conference I co-founded 14 years ago is called the International Conference on Learning Representations. That’s the entire issue that deep learning is dealing with, and it’s been my obsession for almost 40 years now. So learning representations is really the thing. For the longest time, we could only do this with supervised learning, and then we started working on what we used to call unsupervised learning, and revived the idea of unsupervised learning in the early 2000s with [inaudible 00:56:58] and Jeff Hinton. Then we discovered that supervised learning actually works pretty well if you can collect enough data. And so the whole idea of unsupervised, self-supervised learning kind of took a backseat for a bit, and then I tried to revive it in a big way starting in 2014, basically when we started FAIR, really pushing for finding new methods to do self-supervised learning both for text and for images and for video and audio.

(00:57:29) And some of that work has been incredibly successful. I mean, the reason why we have multilingual translation systems, or things to do content moderation on Meta, for example on Facebook, that are multilingual, that understand whether a piece of text is hate speech or not or something, is due to that progress using self-supervised learning for NLP, combining this with transformer architectures and blah, blah, blah.

(00:57:53) But that’s the big success of self-supervised learning. We had similar success in speech recognition, a system called wav2vec, which is also a joint embedding architecture, by the way, trained with contrastive learning. And that system can produce speech recognition systems that are multilingual with mostly unlabeled data and only need a few minutes of labeled data to actually do speech recognition, that’s amazing. We have systems now, based on the combination of those ideas, that can do real-time translation of hundreds of languages into each other, speech to speech.
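
For readers curious what “trained with contrastive learning” means mechanically, here’s a minimal InfoNCE-style sketch. The random data and sizes are our toy assumptions, not wav2vec’s actual code.

```python
# Minimal sketch of a contrastive (InfoNCE-style) objective: pull each
# embedding toward its matching target and away from the other samples in
# the batch. Random toy data; not wav2vec's actual implementation.
import torch
import torch.nn.functional as F

batch, dim = 8, 32
z_a = F.normalize(torch.randn(batch, dim), dim=1)   # e.g. masked-audio embeddings
z_b = F.normalize(torch.randn(batch, dim), dim=1)   # embeddings of the true targets

logits = z_a @ z_b.T / 0.1              # pairwise similarities, temperature 0.1
labels = torch.arange(batch)            # the positive for row i is column i
loss = F.cross_entropy(logits, labels)  # low when each row picks its own target
print(float(loss))
```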

Lex Fridman (00:58:28) Speech to speech, even including, which is fascinating, languages that don’t have written forms.

Yann LeCun (00:58:34) That’s right.

Lex Fridman (00:58:34) Just spoken only.

Yann LeCun (00:58:35) That’s right. We don’t go through text, it goes directly from speech to speech using an internal representation of speech units that are discrete, but it’s called Textless NLP. We used to call it this way. But yeah, so I mean incredible success there. And then for 10 years, we tried to apply this idea to learning representations of images by training a system to predict videos, learning intuitive physics by training a system to predict what’s going to happen in the video.

(00:59:02) And we tried and tried and failed and failed, with generative models, with models that predict pixels. We could not get them to learn good representations of images. We could not get them to learn good representations of videos. And we tried many times, we published lots of papers on it, where they kind of sort of work, but not really great. They started working when we abandoned the idea of predicting every pixel and basically just did the joint embedding and predicted in representation space; that works. So there’s ample evidence that we’re not going to be able to learn good representations of the real world using generative models. So I’m telling people, everybody’s talking about generative AI. If you’re really interested in human-level AI, abandon the idea of generative AI…
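
Here’s a loose sketch of the contrast LeCun is drawing: encode both views and predict the second view’s representation from the first, with no pixel reconstruction anywhere. Toy shapes throughout; this is our illustration of the joint-embedding idea, not Meta’s code.

```python
# Loose sketch of predicting in representation space rather than pixel
# space: encode two views (say, a frame and the next frame) and train a
# predictor to map one embedding to the other. Toy shapes; illustrative only.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(16 * 16, 64),
                        nn.ReLU(), nn.Linear(64, 32))
predictor = nn.Linear(32, 32)

frame_now = torch.randn(8, 16, 16)      # current view
frame_next = torch.randn(8, 16, 16)     # future view

s_now = encoder(frame_now)
with torch.no_grad():                   # don't backprop through the target branch
    s_next = encoder(frame_next)

loss = nn.functional.mse_loss(predictor(s_now), s_next)  # error in embedding space
loss.backward()                         # no pixel reconstruction anywhere
print(float(loss))
```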

…Yann LeCun (01:35:29) I actually made that comment on just about every social network I can, and I’ve made that point multiple times in various forums. Here’s my point of view on this. People can complain that AI systems are biased, and they generally are biased by the distribution of the training data that they’ve been trained on, which reflects biases in society, and that is potentially offensive to some people, or potentially not. And some techniques to de-bias them become offensive to some people because of historical incorrectness and things like that.

(01:36:23) And so you can ask two questions. The first question is, is it possible to produce an AI system that is not biased? And the answer is, absolutely not. And it’s not because of technological challenges, although there are technological challenges to that; it’s because bias is in the eye of the beholder. Different people may have different ideas about what constitutes bias. For a lot of things, there are facts that are indisputable, but there are a lot of opinions, or things that can be expressed in different ways. And so you cannot have an unbiased system, that’s just an impossibility.

(01:37:08) And so what’s the answer to this? And the answer is the same answer that we found in liberal democracy about the press: the press needs to be free and diverse. We have free speech for a good reason: because we don’t want all of our information to come from a unique source, because that’s opposite to the whole idea of democracy and progressive ideas and even science. In science, people have to argue for different opinions, and science makes progress when people disagree and they come up with an answer and a consensus forms, and it’s true in all democracies around the world.

(01:37:58) There is a future, which is already happening, where every single one of our interactions with the digital world will be mediated by AI systems, AI assistants. We’re going to have smart glasses, you can already buy them from Meta, the Ray-Ban Meta, where you can talk to them, and they are connected with an LLM, and you can get answers on any question you have. Or you can be looking at a monument and there is a camera in the glasses; you can ask it, what can you tell me about this building or this monument? You can be looking at a menu in a foreign language, and it will translate it for you, or we can do real-time translation if we speak different languages. So a lot of our interactions with the digital world are going to be mediated by those systems in the near future.

(01:38:53) Increasingly, the search engines that we’re going to use are not going to be search engines, they’re going to be dialogue systems that we just ask a question, and they will answer and then point you to perhaps the appropriate references for it. But here is the thing, we cannot afford those systems to come from a handful of companies on the west coast of the US, because those systems will constitute the repository of all human knowledge, and we cannot have that be controlled by a small number of people. It has to be diverse, for the same reason the press has to be diverse, so how do we get a diverse set of AI assistants? It’s very expensive and difficult to train a base model, a base LLM at the moment; in the future it might be something different, but at the moment, that’s an LLM. So only a few companies can do this properly.

(01:39:50) And if some of those top systems are open source, anybody can use them, anybody can fine-tune them. If we put in place some system that allows any group of people, whether they are individual citizens, groups of citizens, government organizations, NGOs, companies, whatever, to take those open source AI systems and fine-tune them for their own purpose on their own data, then we’re going to have a very large diversity of different AI systems that are specialized for all of those things.
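
As one concrete route (our choice of tooling; LeCun doesn’t prescribe any), a group could fine-tune an open model on its own text with the Hugging Face stack. The model name and data file below are placeholders:

```python
# Hedged sketch of fine-tuning an open-source LLM on your own data using
# the Hugging Face stack, one common route among several. The model name
# and the data file are placeholders; Llama-2 weights are gated.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

name = "meta-llama/Llama-2-7b-hf"          # placeholder open model
tok = AutoTokenizer.from_pretrained(name)
tok.pad_token = tok.eos_token              # Llama tokenizers ship no pad token
model = AutoModelForCausalLM.from_pretrained(name)

data = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
data = data.map(lambda rows: tok(rows["text"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=data,
    # mlm=False makes the collator set labels for next-token prediction.
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False))
trainer.train()
```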

(01:40:35) I tell you, I talked to the French government quite a bit, and the French government will not accept that the digital diet of all their citizens be controlled by three companies on the west coast of the US. That’s just not acceptable, it’s a danger to democracy regardless of how well-intentioned those companies are, and it’s also a danger to local culture, to values, to language. I was talking with the founder of Infosys in India, he’s funding a project to fine-tune Llama 2, the open source model produced by Meta, so that Llama 2 speaks all 22 official languages in India; it is very important for people in India. I was talking to a former colleague of mine, Moustapha Cisse, who used to be a scientist at FAIR and then moved back to Africa, created a research lab for Google in Africa, and now has a new startup Co-Kera.

(01:41:37) And what he’s trying to do is basically have LLMs that speak the local languages in Senegal so that people can have access to medical information, because they don’t have access to doctors; there’s a very small number of doctors per capita in Senegal. You can’t have any of this unless you have open source platforms, so with open source platforms, you can have AI systems that are not only diverse in terms of political opinions or things of that-

Yann LeCun (01:42:00) … AI systems that are not only diverse in terms of political opinions or things of that type, but in terms of language, culture, value systems, political opinions, technical abilities in various domains, and you can have an industry, an ecosystem of companies that fine tune those open source systems for vertical applications in industry. I don’t know, a publisher has thousands of books and they want to build a system that allows a customer to just ask a question about the content of any of their books, you need to train on their proprietary data. You have a company, we have one within Meta, it’s called Metamate, and it’s basically an LLM that can answer any question about internal stuff about the company, very useful.

(01:42:53) A lot of companies want this. A lot of companies want this not just for their employees, but also for their customers, to take care of their customers. So the only way you’re going to have an AI industry, the only way you’re going to have AI systems that are not uniquely biased, is if you have open source platforms on top of which any group can build specialized systems. So the inevitable direction of history is that the vast majority of AI systems will be built on top of open source platforms…

…Lex Fridman (02:04:21) You often say that AGI is not coming soon, meaning not this year, not the next few years, potentially farther away. What’s your basic intuition behind that?

Yann LeCun (02:04:35) So first of all, it’s not going to be an event. The idea somehow, which is popularized by science fiction and Hollywood, that somehow somebody is going to discover the secret to AGI or human-level AI or AMI, whatever you want to call it, and then turn on a machine and then we have AGI, that’s just not going to happen. It’s not going to be an event. It’s going to be gradual progress. Are we going to have systems that can learn from video how the world works and learn good representations? Yeah. Before we get them to the scale and performance that we observe in humans, it’s going to take quite a while. It’s not going to happen in one day. Are we going to get systems that can have large amounts of associative memory so they can remember stuff? Yeah, but same, it’s not going to happen tomorrow. There are some basic techniques that need to be developed. We have a lot of them, but to get this to work together with a full system is another story.

(02:05:37) Are we going to have systems that can reason and plan, perhaps along the lines of the objective-driven AI architectures that I described before? Yeah, but before we get this to work properly, it’s going to take a while. Before we get all those things to work together, and then on top of this, have systems that can learn hierarchical planning, hierarchical representations, systems that can be configured for a lot of different situations at hand, the way the human brain can, all of this is going to take at least a decade and probably much more, because there are a lot of problems that we’re not seeing right now, that we have not encountered, so we don’t know if there is an easy solution within this framework. So it’s not just around the corner. I’ve been hearing people for the last 12, 15 years claiming that AGI is just around the corner and being systematically wrong. I knew they were wrong when they were saying it. I called their bullshit…

…Lex Fridman (02:08:48) So you push back against what are called AI doomers a lot. Can you explain their perspective and why you think they’re wrong?

Yann LeCun (02:08:59) Okay, so AI doomers imagine all kinds of catastrophe scenarios of how AI could escape our control and basically kill us all, and that relies on a whole bunch of assumptions that are mostly false. So the first assumption is that the emergence of superintelligence is going to be an event, that at some point we’re going to figure out the secret and we’ll turn on a machine that is superintelligent, and because we’d never done it before, it’s going to take over the world and kill us all. That is false. It’s not going to be an event. We’re going to have systems that are as smart as a cat, that have all the characteristics of human-level intelligence, but their level of intelligence would be like a cat or a parrot maybe or something. Then we’re going to work our way up to make those things more intelligent. As we make them more intelligent, we’re also going to put some guardrails in them and learn how to put some guardrails so they behave properly.

(02:10:03) It’s not going to be one effort; it’s going to be lots of different people doing this, and some of them are going to succeed at making intelligent systems that are controllable and safe and have the right guardrails. If some others go rogue, then we can use the good ones to go against the rogue ones. So it’s going to be my smart AI police against your rogue AI. So it’s not going to be like we’re going to be exposed to a single rogue AI that’s going to kill us all. That’s just not happening. Now, there is another fallacy, which is the idea that because a system is intelligent, it necessarily wants to take over. There are several arguments that make people scared of this, which I think are completely false as well.

(02:10:48) So one of them is that in nature, it seems that the more intelligent species end up dominating the others, and even extinguishing the others, sometimes by design, sometimes just by mistake. So there is a line of thinking by which you say, “Well, if AI systems are more intelligent than us, surely they’re going to eliminate us, if not by design, simply because they don’t care about us,” and that’s just preposterous for a number of reasons. The first reason is they’re not going to be a species. They’re not going to be a species that competes with us. They’re not going to have the desire to dominate, because the desire to dominate is something that has to be hardwired into an intelligent system. It is hardwired in humans. It is hardwired in baboons, in chimpanzees, in wolves, not in orangutans. This desire to dominate, or submit, or attain status in other ways is specific to social species. Non-social species like orangutans don’t have it, and they are as smart as we are, almost, right?

Lex Fridman (02:12:09) To you, there’s not significant incentive for humans to encode that into the AI systems, and to the degree they do, there’ll be other AIs that punish them for it, or out-compete them over it.

Yann LeCun (02:12:23) Well, there’s all kinds of incentive to make AI systems submissive to humans.

Lex Fridman (02:12:26) Right.

Yann LeCun (02:12:27) Right? This is the way we’re going to build them. So then people say, “Oh, but look at LLMs. LLMs are not controllable,” and they’re right. LLMs are not controllable. But objective-driven AI, so systems that derive their answers by optimization of an objective, means they have to optimize this objective, and that objective can include guardrails. One guardrail is, obey humans. Another guardrail is, don’t obey humans if it’s hurting other humans within limits.
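
Here’s a toy sketch of what an objective that “can include guardrails” might look like. The cost functions and weights are stand-ins of ours; LeCun gives no formulas.

```python
# Toy sketch of objective-driven behavior with guardrails: the system picks
# its output by minimizing a task cost plus weighted guardrail penalties.
# All functions and weights below are our stand-ins; LeCun gives no formulas.

def total_cost(action, task_cost, guardrails, weight=10.0):
    # A violated guardrail adds a large penalty that dominates the objective.
    return task_cost(action) + weight * sum(g(action) for g in guardrails)

task = lambda a: (a - 3.0) ** 2                    # task: get close to 3
stay_in_bounds = lambda a: max(0.0, abs(a) - 2.0)  # guardrail: keep |a| <= 2

candidates = [x / 10 for x in range(-50, 51)]
best = min(candidates, key=lambda a: total_cost(a, task, [stay_in_bounds]))
print(best)  # ~2.0: the guardrail overrides the unconstrained optimum at 3.0
```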

Lex Fridman (02:12:57) Right. I’ve heard that before somewhere, I don’t remember

Yann LeCun (02:12:59) Yes, maybe in a book.

Lex Fridman (02:13:01) Yeah, but speaking of that book, could there be unintended consequences also from all of this?

Yann LeCun (02:13:09) No, of course. So this is not a simple problem. Designing those guardrails so that the system behaves properly is not going to be a simple issue for which there is a silver bullet, for which you have a mathematical proof that the system can be safe. It’s going to be a very progressive, iterative design process where we put those guardrails in such a way that the system behaves properly. Sometimes they’re going to do something that was unexpected because the guardrail wasn’t right, and we’ll correct them so that they do it right. The idea somehow that we can’t get it slightly wrong, because if we get it slightly wrong, we’ll die, is ridiculous. We’re just going to go progressively. The analogy I’ve used many times is turbojet design. How did we figure out how to make turbojets so unbelievably reliable?

(02:14:07) Those are incredibly complex pieces of hardware that run at really high temperatures for 20 hours at a time sometimes, and we can fly halfway around the world on a two-engine jetliner at near the speed of sound. Like how incredible is this? It’s just unbelievable. Did we do this because we invented a general principle of how to make turbojets safe? No, it took decades to fine tune the design of those systems so that they were safe. Is there a separate group within General Electric or Snecma or whatever that is specialized in turbojet safety? No. The design is all about safety, because a better turbojet is also a safer turbojet, so a more reliable one. It’s the same for AI. Do you need specific provisions to make AI safe? No, you need to make better AI systems, and they will be safe because they are designed to be more useful and more controllable…

…Lex Fridman (02:28:45) Well, it’ll be at the very least, absurdly comedic. Okay. So since we talked about the physical reality, I’d love to ask your vision of the future with robots in this physical reality. So many of the kinds of intelligence that you’ve been speaking about would empower robots to be more effective collaborators with us humans. And since Tesla’s Optimus team has been showing us some progress on humanoid robots, I think it really reinvigorated the whole industry that I think Boston Dynamics has been leading for a very, very long time. So now there’s all kinds of companies: Figure AI, obviously Boston Dynamics.

Yann LeCun (02:29:30) Unitree.

Lex Fridman (02:29:30) Unitree, but there’s a lot of them.

Yann LeCun (02:29:33) There’s a few of them.

Lex Fridman (02:29:33) It’s great. It’s great. I love it. So do you think there’ll be millions of humanoid robots walking around soon?

Yann LeCun (02:29:44) Not soon, but it’s going to happen. The next decade I think is going to be really interesting in robots. The emergence of the robotics industry has been in waiting for 10, 20 years without really emerging, other than for pre-programmed behavior and stuff like that. And the main issue is, again, the Moravec paradox: how do we get those systems to understand how the world works and plan actions? And so we can do it for really specialized tasks. And the way Boston Dynamics goes about it is basically with a lot of handcrafted dynamical models and careful planning in advance, which is very classical robotics with a lot of innovation, a little bit of perception, but it’s still not, they can’t build a domestic robot.

(02:30:41) We’re still some distance away from completely autonomous level five driving, and we’re certainly very far away from having level five autonomous driving by a system that can train itself by driving 20 hours like any 17-year-old. So until we have, again, world models, systems that can train themselves to understand how the world works, we’re not going to have significant progress in robotics. So a lot of the people working on robotic hardware at the moment are betting or banking on the fact that AI is going to make sufficient progress towards that…

…Yann LeCun (02:38:29) I love that question. We can make humanity smarter with AI. AI basically will amplify human intelligence. It’s as if every one of us will have a staff of smart AI assistants. They might be smarter than us. They’ll do our bidding, perhaps execute a task in ways that are much better than we could do ourselves, because they’d be smarter than us. And so it’s like everyone would be the boss of a staff of super smart virtual people. So we shouldn’t feel threatened by this any more than we should feel threatened by being the manager of a group of people, some of whom are more intelligent than us. I certainly have a lot of experience with this, of having people working with me who are smarter than me.

(02:39:35) That’s actually a wonderful thing. So having machines that are smarter than us, that assist us in all of our tasks, our daily lives, whether professional or personal, I think would be an absolutely wonderful thing. Because intelligence is the commodity that is most in demand. That’s really what I mean. All the mistakes that humanity makes are because of a lack of intelligence, really, or a lack of knowledge, which is related. So by making people smarter, we can only be better off. For the same reason that public education is a good thing and books are a good thing, and the internet is also a good thing, intrinsically, and even social networks are a good thing if you run them properly.

(02:40:21) It’s difficult, but you can. Because it helps the communication of information and knowledge and the transmission of knowledge. So AI is going to make humanity smarter. And the analogy I’ve been using is that perhaps an equivalent event in the history of humanity to what might be provided by the generalization of AI assistants is the invention of the printing press. It made everybody smarter, the fact that people could have access to books. Books were a lot cheaper than they were before, and so a lot more people had an incentive to learn to read, which wasn’t the case before.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple, Alphabet (parent of Google), Microsoft, Meta Platforms, and Tesla. Holdings are subject to change at any time.