What We’re Reading (Week Ending 28 May 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 28 May 2023:

1. Yuval Noah Harari argues that AI has hacked the operating system of human civilisation – Yuval Noah Harari

Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.

Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults…

…Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot Lamda, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?

Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.

At first, AI will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence…

…We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.

Won’t slowing down public deployments of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.

2. What Happens if the US Defaults on its Debt? – Nick Maggiulli

As U.S. Treasury Secretary Janet Yellen recently noted, unless Congress raises (or suspends) the debt limit, the U.S. government may run out of money as early as June 1.

With such a dire warning, many investors have begun to wonder: what happens if the US defaults on its debt? Though this scenario remains unlikely, it is important to understand the potential consequences of a default and how they could impact you…

…When it comes to the term ‘default’, there are two ways it has been broadly defined:

  • An actual default: This is the traditional meaning of the term and it occurs when a borrower fails to make a required principal or interest payment to a lender. In the case of the United States (or any other sovereign nation), a default occurs if the government is unable (or unwilling) to make payments on its debt (e.g. if the U.S. failed to make payments on its Treasury bonds). Default in these cases can either be partial (failing to pay back some of the debt) or full (failing to pay back all of the debt). However, this isn’t the only kind of default that can occur.
  • A technical default: Unlike an actual (or traditional) default when a government fails to make payments on its bonds, a technical default occurs if the government fails to pay for its other obligations even if its bond payments were made on time. For example, the U.S. Treasury could decide to prioritize Treasury bondholders and pay them in full before paying out whatever was left to Social Security recipients and government employees. While this would avoid a default in the traditional sense of the term, it could still negatively impact millions of Americans who rely on income from the U.S. government to pay their bills…

…As we navigate the political and economic complexities of raising the debt ceiling in the coming weeks, it’s important to understand what could happen if the U.S. defaults on its debt. The consequences of such an event would have a major impact not only in the U.S., but across the globe. And while we can’t predict the exact outcomes, below are some possible scenarios that could unfold based on economic studies, expert opinions, and historical precedent:

  • Global financial turmoil: Given the reliance of the global financial system on U.S. Treasury bonds and U.S. dollars, a default could lead to a loss of confidence in the U.S. government and a global market panic. The most visible impact of this would be declining asset prices and a disruption in international trade. The duration of such a panic would be determined by the severity of the U.S. default and how quickly the U.S. could restore confidence in financial markets.
  • Possible recession: Two economists modeled the potential impact of a U.S. default on employment and the results weren’t great. They argued that a technical default (where the federal government fails to make payments for some of its responsibilities) would raise unemployment from 3.4% to 7%, and an actual default (where the federal government fails to make payments to U.S. bondholders) would raise unemployment from 3.4% to above 12%. Such a quick rise in unemployment could lead to reduced consumer spending and a recession.
  • Rising interest rates: When the U.S. Treasury failed to make payments on $122 million in Treasury bonds in 1979, short-term interest rates jumped 0.6 percentage points. This was true despite the fact that the failure to make payments was a clerical error on the part of the Treasury and not an actual default (since all the bondholders were eventually paid back with interest). If the U.S. were to actually default, the cost of borrowing would rise sharply for individuals and businesses, ultimately slowing economic growth.
  • Depreciating value of the dollar: A U.S. default could reduce confidence in the U.S. dollar and push many nations to seek out more reliable alternatives. This would reduce the demand for the dollar, decrease its value, and increase the cost of imports in the U.S., leading to higher inflation.
  • Lower credit rating: If the U.S. were to default, credit rating agencies would downgrade the U.S.’s credit rating, which would make future borrowing more expensive for the U.S. government. Standard & Poor’s downgraded the U.S.’s credit rating for the first time ever in 2011 even though a default never occurred. Imagine what would happen if one did.
  • Impaired government functions: An actual default (and even a technical default) could force the government to delay payments to Social Security recipients, employees, and others who rely on their services. This could disrupt the lives of millions of Americans and severely impact economic growth. The White House released a report in October 2021 that outlined the potential consequences of such a default and how it could impact various sectors of the economy.
  • Political fallout: If your job was to get Donald Trump re-elected in 2024, there are few things that would help more than a U.S. default in 2023. Regardless of political beliefs, many Americans will hold the current party in power (Democrats) ultimately responsible in the event of a default. This would influence future elections and public policy for many years to come.

While these scenarios paint a sobering picture of what could happen if the U.S. were to default on its debt, it’s important to remember that no one knows the future. Don’t just take my word for it though. Consider what Warren Buffett said on the topic at the most recent Berkshire Hathaway shareholders meeting:

It’s very hard to see how you recover once…people lose faith in the currency…All kinds of things can happen then. And I can’t predict them and nobody else can predict them, but I do know they aren’t good.

3. Microsoft Bets That Fusion Power Is Closer Than Many Think – Jennifer Hiller

In a deal that is believed to be the first commercial agreement for fusion power, the tech giant has agreed to purchase electricity from startup Helion Energy within about five years.

Helion, which is backed by OpenAI founder Sam Altman, committed to start producing electricity through fusion by 2028 and target power generation for Microsoft of at least 50 megawatts after a year or pay financial penalties.

The commitment is a bold one given that neither Helion nor anyone else in the world has yet produced electricity from fusion.

“We wouldn’t enter into this agreement if we were not optimistic that engineering advances are gaining momentum,” said Microsoft President Brad Smith…

…“I had this belief that the two things that would matter most to making the future and raising the quality of life a lot were making intelligence and energy cheap and abundant, and that if we could do that, it would transform the world in a really positive way,” Mr. Altman said.

A number of prominent investors from Mr. Altman to Bill Gates have put money into fusion firms, which have raised more than $5 billion, according to the Washington, D.C.-based Fusion Industry Association.

The process of splitting atoms in nuclear-fission power plants provides nearly 20% of U.S. electricity. But nuclear fusion systems would generate electricity from the energy released when hydrogen atoms are combined to form helium.

The industry got a boost in December when the U.S. Energy Department announced a research breakthrough by scientists after a fusion reaction at the Lawrence Livermore National Laboratory produced more energy than was used to create it by firing lasers at a target.

To be a practical source of power, the entire facility would need to net produce rather than consume energy, and at a price that competes in the broader electricity market…

…David Kirtley, CEO at Helion, said that like a wind- or solar-power developer—the more typical energy firms involved in power purchase agreements—Helion would pay Microsoft financial penalties if it doesn’t deliver power on time. The companies declined to specify the amount.

“There’s some flexibility, but it is really important that there are significant financial penalties for Helion if we don’t deliver,” Mr. Kirtley said. “We think the physics of this is ready for us to signal the commercialization of fusion is ready.”

4. Some Things I Think – Morgan Housel

The fastest way to get rich is to go slow.

Many beliefs are held because there is a social and tribal benefit to holding them, not necessarily because they’re true.

Nothing is more blinding than success caused by luck, because when you succeed without effort it’s easy to think, “I must be naturally talented.”…

…The most valuable personal finance asset is not needing to impress anyone.

Most financial debates are people with different time horizons talking over each other…

…The hardest thing when studying history is that you know how the story ends, which makes it impossible to put yourself in people’s shoes and imagine what they were thinking or feeling in the past…

…Most beliefs are self-validating. Angry people look for problems and find them everywhere, happy people seek out smiles and find them everywhere, pessimists look for trouble and find it everywhere. Brains are good at filtering inputs to focus on what you want to believe…

…The market is rational but investors play different games and those games look irrational to people playing a different game.

A big problem with bubbles is the reflexive association between wealth and wisdom, so a bunch of crazy ideas are taken seriously because a temporarily rich person said it.

Logic doesn’t persuade people. Clarity, storytelling, and appealing to self-interest do…

…Happiness is the gap between expectations and reality, so the irony is that nothing is more pessimistic than someone full of optimism. They are bound to be disappointed…

…Nothing leads to success like unshakable faith in one big idea, and nothing sets the seeds of your downfall like an unshakable faith in one big idea…

…Economies run in cycles but people forecast in straight lines.

You are twice as gullible as you think you are – four times if you disagree with that statement.

Price is what you pay, value is whatever you want Excel to say…

…We underestimate the importance of control. Camping is fun, even when you’re cold. Being homeless is miserable, even when you’re warm…

…“If you only wished to be happy, this could be easily accomplished; but we wish to be happier than other people, and this is always difficult, for we believe others to be happier than they are.” – Montesquieu

With the right incentives, people can be led to believe and defend almost anything.

Good marketing wins in the short run and good products win in the long run…

…The most productive hour of your day often looks the laziest. Good ideas rarely come during meetings – they come while going for a walk, or sitting on the couch, or taking a shower…

…A good test when reading the news is to constantly ask, “Will I still care about this story in a year? Two years? Five years?”

A good bet in economics: the past wasn’t as good as you remember, the present isn’t as bad as you think, and the future will be better than you anticipate.

5. Layers of AI – Muji

AI is such a loose term, a magical word that simply means some type of mathematically driven black box. It is generally thought of as a compute engine that can do a task as well as or better than a human can, driven by a “brain” (AI engine) making decisions. Essentially, AI is a bunch of inner mathematical algorithms that interconnect & combine into one big algorithm (the overall AI model). These take an input, do logic (the black box), and send back an output.

At the highest level, AI has thus far been Artificial Narrow Intelligence (ANI), a weaker form of AI that is honed to complete a specific task. As seen over the past few months, we are quickly approaching Artificial General Intelligence (AGI), a stronger form of AI that can perform a wider range of tasks, and can think abstractly and adapt. AGI is the holy grail of many an AI researcher.

Today, AI takes a lot of forms, such as Machine Learning (learning from the past to predict the future), Computer Vision (identifying structure in video or imagery), Speech-to-Text/Text-to-Speech (converting audio to text and vice versa), Expert Systems (highly honed decision engines), and Robotics (controlling the real world)…

…It is worth having some caution with AI, but know that the hype is real, and the potential of these cutting-edge AI models is palpable. At a minimum, we are at the precipice of a new era in productivity boosts from virtual assistance and automation. But as these engines mature, combine, and integrate with others more, it suddenly feels that AGI is on our doorstep.

ML is the subset of AI that is trained on past historical data in order to make decisions or predict outcomes. In general, ML processes a lot of data upfront in a training process, analyzing it to determine patterns within it in order to derive future predictions. With the rise of better models, honed hardware (GPUs and specialized chips from hyperscalers), and continually improving scale & performance from the cloud hyperscalers, the potential of ML is now heavily scaling up. ML models can make decisions, interact with the world (through text, voice, chat, audio, computer vision, image, or video), and take action.

ML is extremely helpful for:

  • processing unstructured content (text, images, video) to extract meaning and understand intent & context
  • recognizing images or video to isolate & identify objects
  • making decisions by weighing complex factors
  • categorizing & grouping input (classification)
  • recognizing patterns
  • recognizing & translating language
  • processing historical data to isolate trends, then forecasting or predicting those trends
  • generating new output (text, image, video, audio)

ML models are built from a wide variety of statistical model types geared for specific problems, each with a wide number of statistical algorithms that can be used in each. Some common types include:

  • Classification models are used to classify data into categories (labels), in order to predict a discrete value (what category applies to new data).
  • Regression models are used to find correlations between variables, in order to predict continuous values (numerics).
  • Clustering models are good for clustering data together around the natural groups that exist, such as for segmenting customers, making recommendations, and image processing.
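The difference between predicting a discrete category and a continuous value is easiest to see in code. Below is a minimal, illustrative sketch (my own, with made-up numbers, not from the article) of the regression case: fitting a straight line by ordinary least squares in plain Python, then using it to predict a continuous value.

```python
# Minimal sketch of a regression model: ordinary least squares
# for a single feature, in plain Python (no ML library assumed).

def fit_line(xs, ys):
    """Fit y = slope * x + intercept by least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical "training data": house sizes (sqm) vs prices,
# where the target is continuous.
sizes = [50, 70, 90, 110]
prices = [300, 380, 460, 540]

slope, intercept = fit_line(sizes, prices)
predicted = slope * 100 + intercept  # predict a continuous value
```

A classification model would instead return a discrete label for the new input, and a clustering model would group the points without being told any target values at all.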

There are a number of ways that ML can be taught, including:

  • Supervised Learning is training via a dataset with known answers. These answers become labels that the ML uses to identify patterns and correlations in the data.
  • Unsupervised Learning is training via raw data and letting the AI determine the features and trends within the data. This is used by ML systems for making recommendations, data associations, trend isolation, or customer segmenting.
  • Semi-supervised Learning sits in between: it trains on a smaller labeled dataset, then uses a larger unlabeled one to enrich the model further.
  • Reinforcement Learning trains a model by rewarding correct and timely answers (via internal scores or human feedback). This is used when there is a known start and end state, where the ML has to determine the best way to navigate the multiple paths in between. This is being leveraged in new language models like ChatGPT to improve the way the engine “talks”…
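As a concrete (and entirely hypothetical) illustration of supervised learning, here is a toy 1-nearest-neighbour classifier in plain Python: it is “trained” simply by storing a dataset with known answers, and it classifies a new point by copying the label of the closest training example.

```python
# Toy supervised learning: a 1-nearest-neighbour classifier
# "trained" on a labeled dataset (feature vector, label) pairs.

def nearest_label(train, query):
    """Return the label of the training point closest to `query`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda pair: sq_dist(pair[0], query))
    return label

# Labeled training set: each observation is (feature vector, label).
train = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.0), "dog"),
    ((5.5, 4.8), "dog"),
]

label_a = nearest_label(train, (1.1, 1.0))  # near the "cat" cluster
label_b = nearest_label(train, (5.2, 5.1))  # near the "dog" cluster
```

Unsupervised learning would receive the same feature vectors without the “cat”/“dog” answers and would have to discover the two natural groups on its own.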

Some of the components of building ML that are helpful to understand:

  • Features are characteristics or attributes within the raw data that help define the input (akin to columns within a database). These are then fed in as inputs to the ML model, and weighed against each other to identify patterns and how they correlate to each other. Feature Engineering is the process where a data scientist will pre-identify these variables within the data, such as categories or numerical ranges to track. Feature Selection may be needed to select a subset of features in model construction, which may be repeatedly tested to find the best fit; this also helps simplify models and shorten training times. Features can be collaboratively tracked in Feature Stores, which are similar to Metric Stores in BI stacks [both discussed in the Modern Data Stack]. Unsupervised Learning forces the ML engine to determine the important features on its own.
  • Dimensionality is based on the number of features provided as input into the model – or rather, represents the internal dimensions of the model of how each feature relates to and impacts every other feature (how one variable in a row of input impacts another). High-dimensional data refers to datasets having a wide set of features (a high number of input variables per row).
  • Observations are the number of feature sets provided as input while building the model (akin to rows within a database).
  • Vectors are features turned into numerical form and stored as an array of inputs (one per observation or row, or a sentence of text in NLP). An array of vectors is a two-dimensional matrix. [This is why GPUs are so helpful in ML training, as they specialize in vectorized math.]
  • Tensors represent the multi-dimensional relationships between all vectors. [This is why Google and NVIDIA often use the name in their processor products, as those chips specialize in highly-dimensional vectorized math.]
  • Labels are pre-defined answers given to a dataset. This can be the identification of categories that apply to that data (such as color, make, model of a car), successful or failed outcomes (such as whether this is fraud or risky behavior or not), or the tagging and definition of objects within an image or video (this image contains a white cat on a black table). These are then fed into Supervised Learning methods of training ML models.
  • Parameters are what the ML model creates as internal variables at a decision point. This is a trained variable that helps set the importance of an individual feature within the ML engine. (This can be weights & biases within a neural network or a coefficient in a regression.) Parameter count is a common shorthand for how much complexity a model contains. (OpenAI’s GPT-3 had 350M-175B parameters in various flavors, and GPT-4 is believed to have up to 1T.)
  • Hyperparameters are external variables the data scientist can adjust in individual statistical algorithms used within the ML model. Think of them as the knobs that can be tuned and adjusted to tweak the statistical model within (along with the fact there are countless statistical models that can be used for any specific algorithm, which can be swapped out).
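To make the vocabulary above concrete, here is a hypothetical perceptron sketch in plain Python: each observation is a feature vector, the labels are the known answers, the learned weights and bias are the model’s parameters, and the learning rate and epoch count are hyperparameters the data scientist can tune.

```python
# Sketch mapping the terms above to code. The weights and bias are
# PARAMETERS (learned during training); the learning rate `lr` and
# `epochs` are HYPERPARAMETERS (set by the data scientist).

def train_perceptron(observations, labels, lr=0.1, epochs=20):
    weights = [0.0] * len(observations[0])  # one parameter per feature
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(observations, labels):   # y is +1 or -1
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            if y * activation <= 0:              # misclassified: update
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                bias += lr * y
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else -1

# Four observations, two features each; labels are the known answers.
X = [(2.0, 1.0), (3.0, 1.5), (-2.0, -1.0), (-3.0, -0.5)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
```

Swapping in a different learning rate or epoch count changes how the same parameters get learned, which is exactly the kind of knob-turning the bullet on hyperparameters describes.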

As with anything data related, it is “garbage in – garbage out”. You must start with good data to have a good ML model. Data science is ultimately the art of creating an ML model, which requires data wrangling (the cleaning, filtering, combining, and enriching of the datasets used in training), selection of the appropriate models & statistical algorithms to use for the problem at hand, feature engineering, and tuning of the hyperparameters. Essentially, data science is about asking the right questions in the right way.

ML models are trained with data, then validated to assure “fit” (statistical relevance) to the task at hand, and can be tuned and tweaked throughout the creation process by the data scientist (via the training data being input, the features selected, or hyperparameters in the statistical model). Once in production, it is typical to occasionally test the model to ensure it remains relevant to real-world data (fit), as both models and data can drift (such as shifting behaviors of customers). Models can be trained on more and more data to become more accurate in classifications, predictions, and generation. More data generally means more insights and accuracy – however, at some point a model may go off the rails and start trying to find patterns in random outliers that aren’t helpful. This is known as being “overfit”: the model’s trained findings don’t apply as well to real-world data because it factors in noise or randomness more than it should. It must then be retrained on a more up-to-date set of historical data.
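Overfitting is easy to demonstrate with a toy example (mine, not the author’s): take six slightly noisy points that follow y ≈ x, and fit them with (a) a simple least-squares line and (b) a degree-5 polynomial that passes through every training point exactly. The polynomial has zero training error – it has memorized the noise – but it extrapolates wildly.

```python
# Overfitting sketch: an exact interpolating polynomial hits every
# noisy training point (zero training error) but swings wildly on
# new inputs, while a simple line generalizes sensibly.

def lagrange(xs, ys, x):
    """Evaluate the polynomial passing exactly through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def fit_line(xs, ys):
    """Least-squares line: the simple, underpowered model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# True relationship is roughly y = x, with a little noise added.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 0.9, 2.2, 2.8, 4.1, 4.9]

slope, intercept = fit_line(xs, ys)
simple_pred = slope * 10.0 + intercept   # extrapolate to x = 10: near 10
overfit_pred = lagrange(xs, ys, 10.0)    # hundreds of units off
```

The interpolating polynomial reproduces every training point perfectly, yet its prediction at x = 10 is wrong by orders of magnitude – the noise at each point gets amplified outside the training range, which is the “factoring in randomness” failure described above.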


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, and Microsoft. Holdings are subject to change at any time.