What We’re Reading (Week Ending 14 April 2024)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 14 April 2024:

1. Perplexity is ready to take on Google – Alex Heath and Aravind Srinivas

What’s it like on the frontlines of the AI talent war right now?

I made mistakes in chasing the wrong people. Recently there was a really senior backend engineer who ended up joining X.AI. He was talking to us, too.

I was talking to Patrick Collison for advice on this, and he said, “Why are you even in this race? Why are you trying to compete with these people? Go after people who want to actually build the stuff that you’re building and don’t chase AI clout.”

There are a lot of good engineers who are applying to us and Anthropic and OpenAI and X.AI and Character.ai. These are the top five choices of AI startups. And people normally just go to the highest bidder. Whoever has the highest valuation will be able to win this race all the time because, on paper, you’re always going to be able to offer the same amount of shares but the dollar value is going to be much higher…

...Have you taken any kind of lesson away from the Gemini diversity scandal? I saw you recently integrated photo generation into Perplexity.

Factfulness and accuracy is what we care about. Google has many other cultural things that they care about, and that’s why they made their products that way. They should only prioritize one aspect, which is giving an accurate answer. They don’t do that for whatever reasons. They have all these other people in the room trying to make decisions.

If I learned one thing, it’s that it’s better to be neutral. Don’t try to have any values you inject into the product. If your product is an answer engine, where people can ask questions and get answers, it better respond in a scholarly way. There’s always a nerd in your classroom who’s just always right, but you don’t hate them for having a certain political value, because they are just going to give you facts. That’s what we want to be. And Google’s trying to be something different. That’s why they got into trouble.

What are you hearing generally about the state of Google from people there right now?

The researchers are still pretty excited about what they’re doing. But the product team messes up their releases. The Gemini product team was fine-tuning all these models to put in the product. There’s a lot of bureaucracy, basically.

I know Sergey Brin being there is making things faster and easier for them. You might have seen the video that was circulating of him being at some hackathon. He brushed it [the Gemini diversity scandal] off as just some kind of a small bug, right?

It’s not a small bug. It’s actually poor execution. The image generation thing is actually very easy to catch in testing. They should have caught it in testing. When you consider Google as the place for de facto information and correctness, when they make mistakes it changes the way you perceive the company…

How much of your tech is in-house versus fine-tuning all these models that you work with? What’s your tech secret sauce?

In the beginning, we were just daisy-chaining GPT-3.5 and Bing. Now, we post-train all these open-source models ourselves. We also still use OpenAI’s model.

We are never going to do the full pre-training ourselves. It’s actually a fool’s errand at this point because it takes so much money to even get one good model by pre-training yourself. There are only four or five companies that are capable of doing that today. And when somebody puts out these open-source models, there’s no reason for you to go and recreate the whole thing.

There is a new term that has emerged in this field called post-training. It’s actually like fine-tuning but done at a much larger scale. We are able to do that and serve our models ourselves in the product. Our models are slightly better than GPT-3.5 Turbo but nowhere near GPT-4. Other than Anthropic and Gemini, nobody has actually gotten to that level yet.

How are you going to solve AI hallucination in your product? Can you?

The reason why we even have sources at the top of the answer is because we want to make sure that users have the power to go verify the answer. We precisely tell you which link to go to versus showing ten blue links and you not being sure which to read.

The other way is constantly improving the authority of which sources we use to cite the answer and then getting rid of the bad ones. When you don’t have sufficient information, it’s better to say you don’t know rather than saying something you made up.
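
As a purely illustrative sketch of the approach described here – retrieve sources first, answer only from them with precise citations, and admit ignorance when the sources are thin – here is a minimal, hypothetical example. Everything in it (the Source type, search_web, ask_llm) is a placeholder we made up; it is not Perplexity’s actual stack.

```python
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    snippet: str
    authority: float  # 0..1, a stand-in for how much we trust the site

def search_web(question: str, top_k: int = 5) -> list[Source]:
    """Placeholder retriever -- a real system would query a search index."""
    return []

def ask_llm(prompt: str) -> str:
    """Placeholder model call -- a real system would call a hosted LLM."""
    return "..."

def answer(question: str) -> str:
    # Keep only reasonably authoritative sources; drop the bad ones.
    sources = [s for s in search_web(question) if s.authority >= 0.7]
    if not sources:
        # Better to say "I don't know" than to make something up.
        return "I don't know -- I couldn't find sufficient sources."
    context = "\n\n".join(
        f"[{i + 1}] {s.title} ({s.url}): {s.snippet}" for i, s in enumerate(sources)
    )
    prompt = (
        "Answer the question using ONLY the numbered sources below, citing them like [1]. "
        "If the sources are insufficient, say you don't know.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```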

2. Book Summary: Our Investing Strategy, who does the market smile upon – Made In Japan

He goes by the name of Tatsuro Kiyohara, and he was the CIO of Tower Investment Management, which ran the flagship K-1 fund that compounded at 20% annually during his 25-year run (that’s roughly 9,300% cumulatively). Compare this to the TOPIX, which did an annualised return of roughly 3%.
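
A quick sanity check on that cumulative figure, assuming a flat 20% a year (our own back-of-the-envelope, and of course a simplification of any real track record):

```python
# Compounding 20% a year for 25 years:
total_multiple = 1.20 ** 25                  # ~95.4x the starting capital
cumulative_gain_pct = (total_multiple - 1) * 100
print(f"{cumulative_gain_pct:,.0f}%")        # ~9,440% -- in the same ballpark as the ~9,300% quoted
```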

But it’s not just the numbers that he posted that were inspiring; the journey to get there was a tumultuous one that would be almost impossible for us to replicate. He is built differently. Who else is willing to pour in almost their entire net worth when the fund is down 72%, in an attempt to save the fund not just for his own sake, but for the clients who decided to stick with him amid all the redemptions?…

…During his stint in New York, his clients included the familiar hedge funds we’d all heard of. One of them was Julian Robertson’s Tiger Management. Meeting Tiger was perhaps the first time he decided he wanted to do something with hedge funds; he would spend his days talking stocks at their office. Tiger appreciated him too – at one point he realised that Tiger was short a stock, Kawasaki Steel, and also that Nomura was attempting to ‘promote’ the stock (what you would call ‘pump’ these days). He almost had to fight with Tiger to convince them to exit, arguing that the stock was going up regardless of fundamentals, and they finally obliged. The stock 3xed not long after. It’s no surprise that he was invited to Tiger’s annual party to be awarded best salesman of the year…

…The game is to figure out when you’re in the minority. This doesn’t mean that just because everyone is bullish, your being bullish can’t be a variant perception. If, for example, the market expects a company to grow by 10% a year for the next 5 years, and you believe it will be more like 30%, you are still in the minority.

It is thus more difficult to have a good investment idea in large caps, because it’s harder to figure out what’s priced in.

From an opportunity-cost standpoint for an individual investor, the return on time is so much better researching cheap micro- and small-caps with the potential to grow. If you had an hour with the CEO of a large company or a small one, the likelihood that you will get more valuable insights (your alpha) is much higher with the latter. In general, you also won’t lose much if you’re wrong here; these companies aren’t priced for growth anyway. For him personally, he tries as much as possible to avoid investing in stocks that have already gone up…

…One of his first investments with the fund that falls into this archetype was Nitori, which is a household name today, but when he first invested nobody touched it. It was a Hokkaido-based company, and the furniture market in Japan was shrinking. What he realised, though, was that the market was very fragmented, and he saw Nitori – with an exceptional founder – as the one to take market share over time. This proved to be correct: his investment in Nitori 10xed while the market halved. The lesson here is that even in a shrinking market, if you find the right company, you can generate strong returns. No doubt there are some diamonds in the rough…

…In the end, 90% of investing in Small/Micro-caps is about Management
Here’s what he looks for:

  1. Operators with a growth mindset
  2. Talented employees that are aligned with the CEO’s Vision and Mission
  3. Doesn’t get crushed by competitors
  4. A widening core competence (read = Moat) as it grows
  5. Not in an industry where you’re pulling forward demand (there is a finite pile of demand that, once absorbed, will be gone – he points to the Japanese M&A industry as one example)
  6. Management that is impeccable with their word – that is, they do what they say
  7. Companies that have positive feedback loops…

…With one company, every time he requested a meeting with Investor Relations, the CEO showed up without fail, which he found strange. Eventually, though, this business got caught in an accounting scandal and went under. Maybe if the CEO shows up too readily, you need to be careful. Another business had zero interest in doing IR, showed up in their factory uniforms, and wasn’t too friendly. One day, however, they showed up in suits! This business also didn’t do too well…

…There are mavericks among them, such as the founder of Zensho Holdings (the operator of the beef-bowl chain Sukiya). This was an unpopular stock when it IPOed. The first thing he saw when visiting the founder’s office was a bench press. The founder was ‘ripped like Popeye’. At the meeting, all the founder talked about was how superior a food the ‘beef bowl’ (Gyuudon) is – if Japanese people had eaten enough beef bowls and bench-pressed enough, Japan wouldn’t have lost the war (LOL).

One time, one of Kiyohara-san’s employees told this founder that he had also been going to the gym; the founder immediately challenged said employee and tested him with a ‘lariat’, a type of wrestling tackle. The key is to find a CEO who knows his wrestling moves.

The IR was also interesting: in its mid-term plan, the company even included the P/E multiple it should be trading at in five years…

…As he once reflected on his portfolio, he realised that looking at the businesses and their management was like looking at himself in the mirror. If assessing management correctly is the key to investing, also understand that there is a self-selection bias. If the founder and CEO is that much more brilliant than you, you won’t even realise how brilliant he is. That said, you’d never invest in a ‘dumb’ CEO, so ultimately you end up selecting people ‘on your level’. This, it appears, is the reality of investing in micro-caps…

…He was adamant that, whether large or small, he had no interest in buying an expensive business. If the P/E was too high, that was a pass.

Over the years he tried building various growth models and realised they contributed almost nothing to making money, so he just stopped. It was a waste of time.

He screens for such companies by looking for a high net cash ratio, which is just net cash over market cap (so, basically, net-nets).
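
A minimal sketch of that screen (our own illustration – the exact definition of ‘net cash’, such as which investments count and whether all debt is netted, is our assumption, not his):

```python
def net_cash_ratio(cash: float, liquid_investments: float, total_debt: float,
                   market_cap: float) -> float:
    """Net cash ratio = (cash + liquid investments - debt) / market cap."""
    return (cash + liquid_investments - total_debt) / market_cap

# A ratio close to (or above) 1 means the market is valuing the operating
# business at next to nothing -- the 'net-net' territory he hunts in.
print(net_cash_ratio(cash=80, liquid_investments=30, total_debt=10, market_cap=100))  # 1.0
```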

He also liked to invert the problem: by looking at the current P/E of a stock, you can figure out the kind of earnings growth it implies. No rocket science here – he backs out the terminal multiple with a perpetual-growth model. For example, if the risk-free rate is 2% and the P/E is 10x, that implies terminal growth of -8.2%, all else equal; if earnings growth were -3.1% instead, then the P/E should be 20x. (Yes, this is all negative growth.)

The 40-year average of the risk-free rate in Japan is about 1.7%, so that sounds fair to him. Also, he says, forget about the concept of an equity risk premium – it is just a banker’s term to underwrite uncertainty. If you’re uncertain, just model that into your earnings projections…
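
For anyone who wants to replicate the inversion, here is a minimal sketch using the simplest perpetual-growth identity, P/E = 1 / (r − g), so g = r − 1/(P/E). It reproduces the spirit of the figures above; the quoted -8.2% and -3.1% presumably come from slightly different assumptions (for instance a (1 + g) numerator), so treat this as our illustration rather than his exact model:

```python
def implied_growth(pe: float, risk_free_rate: float) -> float:
    # Perpetual-growth model: P/E = 1 / (r - g)  =>  g = r - 1 / (P/E)
    return risk_free_rate - 1.0 / pe

for pe in (10, 20):
    g = implied_growth(pe, risk_free_rate=0.02)
    print(f"P/E of {pe}x with a 2% risk-free rate -> implied perpetual growth of {g:.1%}")

# P/E 10x  -> roughly -8% growth forever
# P/E 20x  -> roughly -3% growth forever
```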

…He doesn’t look at P/B, where investors may be calculating a liquidation price, which can be inaccurate.

The point he thinks many miss: if a company is loss-making, would a hypothetical buyer really buy the assets at face value? And if the business is a decent, profitable one, no one is going to be looking at the P/B – they’ll be fixated on the P/E…

…Risks of small/micro caps:

  • Illiquidity discount
  • Many businesses are small suppliers to a much larger company
  • It operates in an industry with low entry barriers
  • Limited talent within the organization
  • Succession issues and nepotism – the son of the owner can be a dumb ass
  • Because no one notices them – the likelihood of a fraud/scandal is higher
  • When the owner retires, this person may pay out a massive retirement bonus
  • Because it’s owner-operated and harder to take over, no one keeps management in check, and they may screw up
  • When there is an accounting fraud, the damage will be large
  • They don’t have the resources to expand overseas
  • They have an incentive to keep their valuation low to minimize the inheritance tax.

3. Three reasons why oil prices are remarkably stable – The Economist

Shouldn’t oil prices be surging? War has returned to the Middle East. Tankers in the Red Sea—through which around 12% of seaborne crude is normally shipped—are under attack by Houthi militants. And OPEC, a cartel of oil exporters, is restricting production. Antony Blinken, America’s secretary of state, has invoked the spectre of 1973, when the Yom Kippur war led to an Arab oil embargo that quadrupled prices in just three months. But oil markets have remained calm, trading mostly between $75 and $85 per barrel for much of last year…

…Oil production is now less concentrated in the Middle East than it has been for much of the past 50 years. The region has gone from drilling 37% of the world’s oil in 1974 to 29% today. Production is also less concentrated among members of OPEC… That is partly because of the shale boom of the 2010s, which turned America into a net energy exporter for the first time since at least 1949…

…Another reason for calm is OPEC members’ ample spare production capacity (ie, the amount of oil that can be produced from idle facilities at short notice)…

…America’s Energy Information Administration (EIA) estimates that OPEC’s core members have around 4.5m barrels per day of spare capacity—greater than the total daily production of Iraq…

…The world still has a big appetite for oil: according to the EIA, demand hit a record in 2023 and will be higher still in 2024, thanks in part to growth in India. But that is unlikely to push prices much higher. Global growth is not at the levels seen in the early 2000s. China, long the world’s biggest importer of oil, is experiencing anaemic economic growth. Structural changes to its economy also make it less thirsty for the stuff: next year, for example, half of all new cars sold in the country are expected to be electric.

4. How We’ll Reach a 1 Trillion Transistor GPU – Mark Liu and H.S. Philip Wong

All those marvelous AI applications have been due to three factors: innovations in efficient machine-learning algorithms, the availability of massive amounts of data on which to train neural networks, and progress in energy-efficient computing through the advancement of semiconductor technology. This last contribution to the generative AI revolution has received less than its fair share of credit, despite its ubiquity.

Over the last three decades, the major milestones in AI were all enabled by the leading-edge semiconductor technology of the time and would have been impossible without it. Deep Blue was implemented with a mix of 0.6- and 0.35-micrometer-node chip-manufacturing technology. The deep neural network that won the ImageNet competition, kicking off the current era of machine learning, was implemented with 40-nanometer technology. AlphaGo conquered the game of Go using 28-nm technology, and the initial version of ChatGPT was trained on computers built with 5-nm technology. The most recent incarnation of ChatGPT is powered by servers using even more advanced 4-nm technology. Each layer of the computer systems involved, from software and algorithms down to the architecture, circuit design, and device technology, acts as a multiplier for the performance of AI. But it’s fair to say that the foundational transistor-device technology is what has enabled the advancement of the layers above.

If the AI revolution is to continue at its current pace, it’s going to need even more from the semiconductor industry. Within a decade, it will need a 1-trillion-transistor GPU—that is, a GPU with 10 times as many devices as is typical today…

…Since the invention of the integrated circuit, semiconductor technology has been about scaling down in feature size so that we can cram more transistors into a thumbnail-size chip. Today, integration has risen one level higher; we are going beyond 2D scaling into 3D system integration. We are now putting together many chips into a tightly integrated, massively interconnected system. This is a paradigm shift in semiconductor-technology integration.

In the era of AI, the capability of a system is directly proportional to the number of transistors integrated into that system. One of the main limitations is that lithographic chipmaking tools have been designed to make ICs of no more than about 800 square millimeters, what’s called the reticle limit. But we can now extend the size of the integrated system beyond lithography’s reticle limit. By attaching several chips onto a larger interposer—a piece of silicon into which interconnects are built—we can integrate a system that contains a much larger number of devices than what is possible on a single chip…

…HBMs are an example of the other key semiconductor technology that is increasingly important for AI: the ability to integrate systems by stacking chips atop one another, what we at TSMC call system-on-integrated-chips (SoIC). An HBM consists of a stack of vertically interconnected chips of DRAM atop a control logic IC. It uses vertical interconnects called through-silicon-vias (TSVs) to get signals through each chip and solder bumps to form the connections between the memory chips. Today, high-performance GPUs use HBM extensively…

…With a high-performance computing system composed of a large number of dies running large AI models, high-speed wired communication may quickly limit the computation speed. Today, optical interconnects are already being used to connect server racks in data centers. We will soon need optical interfaces based on silicon photonics that are packaged together with GPUs and CPUs. This will allow the scaling up of energy- and area-efficient bandwidths for direct, optical GPU-to-GPU communication, such that hundreds of servers can behave as a single giant GPU with a unified memory. Because of the demand from AI applications, silicon photonics will become one of the semiconductor industry’s most important enabling technologies…

…We can see the trend already in server GPUs if we look at the steady improvement in a metric called energy-efficient performance. EEP is a combined measure of the energy efficiency and speed of a system. Over the past 15 years, the semiconductor industry has increased energy-efficient performance about threefold every two years. We believe this trend will continue at historical rates. It will be driven by innovations from many sources, including new materials, device and integration technology, extreme ultraviolet (EUV) lithography, circuit design, system architecture design, and the co-optimization of all these technology elements, among other things.

(A chart in the article shows that, largely thanks to advances in semiconductor technology, EEP is on track to triple every two years; EEP units are 1/femtojoule-picoseconds.)

In particular, the EEP increase will be enabled by the advanced packaging technologies we’ve been discussing here. Additionally, concepts such as system-technology co-optimization (STCO), where the different functional parts of a GPU are separated onto their own chiplets and built using the best performing and most economical technologies for each, will become increasingly critical.
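
To put that cadence in perspective, here is a rough projection of our own, simply assuming the three-fold-every-two-years trend the authors describe continues unchanged:

```python
def eep_multiple(years: float, factor: float = 3.0, period_years: float = 2.0) -> float:
    """How much EEP grows if it keeps multiplying by `factor` every `period_years`."""
    return factor ** (years / period_years)

for years in (2, 6, 10):
    print(f"After {years:>2} years: ~{eep_multiple(years):,.0f}x today's EEP")

# After a decade that is 3**5 = 243x -- the sort of headroom that makes a
# 1-trillion-transistor GPU within ten years look plausible.
```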

5. The illusion of moral decline – Adam Mastroianni

In psychology, anything worth studying is probably caused by multiple things. There may be lots of reasons why people think morality is declining when it really isn’t.

  • Maybe people say that morality is declining because they think it makes them look good. But in Part I, we found that people are willing to say that some things have gotten better (less racism, for instance). And people still make the same claims when we pay them for accuracy.
  • Maybe because people are nice to you when you’re a kid, and then they’re less nice to you when you’re an adult, you end up thinking that people got less nice over time. But people say that morality has declined since they turned 20, and that it’s declined in the past four years, and all that is true for old people, too.
  • Maybe everybody has just heard stories about how great the past was—like, they watch Leave It to Beaver and they go “wow, people used to be so nice back then.” But again, people think morality has declined even in the recent past. Also, who watches Leave It to Beaver?
  • We know from recent research that people denigrate the youth of today because they have positively biased memories of their own younger selves. That could explain why people blame moral decline on interpersonal replacement, but it doesn’t explain why people also blame it on personal change.

Any of these could be part of the illusion of moral decline. But they are, at best, incomplete.

We offer an additional explanation in the paper, which is that two well-known psychological phenomena can combine to produce an illusion of moral decline. One is biased exposure: people pay disproportionate attention to negative information, and media companies make money by giving it to us. The other is biased memory: the negativity of negative information fades faster than the positivity of positive information. (This is called the Fading Affect Bias; for more, see Underrated ideas in psychology).

Biased exposure means that things always look outrageous: murder and arson and fraud, oh my! Biased memory means the outrages of yesterday don’t seem so outrageous today. When things always look bad today but brighter yesterday, congratulations pal, you got yourself an illusion of moral decline.
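
To make the mechanism concrete, here is a toy simulation of our own (made-up numbers, not the authors’ model): the stream of events you notice today skews negative, while in memory the negatives have partly faded back toward neutral.

```python
import random
from statistics import mean

random.seed(0)

def noticed_events(n: int = 10_000, negative_share: float = 0.7) -> list[float]:
    """Biased exposure: most of what you notice is negative (valence in [-1, 1])."""
    return [random.uniform(-1, 0) if random.random() < negative_share
            else random.uniform(0, 1)
            for _ in range(n)]

def as_remembered(valences: list[float], negative_fade: float = 0.5,
                  positive_fade: float = 0.9) -> list[float]:
    """Biased memory: negatives fade toward neutral faster than positives."""
    return [v * (negative_fade if v < 0 else positive_fade) for v in valences]

today = noticed_events()                      # how the present looks
yesterday = as_remembered(noticed_events())   # the same kind of year, recalled later

print(f"Perceived morality today:    {mean(today):+.2f}")     # ~ -0.20
print(f"Remembered morality of past: {mean(yesterday):+.2f}")  # ~ -0.04

# The underlying stream of events never changed, yet the past always looks
# rosier than the present -- a perpetual 'decline'.
```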

We call this mechanism BEAM (Biased Exposure and Memory), and it fits with some of our more surprising results. BEAM predicts that both older and younger people should perceive moral decline, and they do. It predicts that people should perceive more decline over longer intervals, and they do. Both biased attention and biased memory have been observed cross-culturally, so it also makes sense that you would find the perception of moral decline all over the world.

But the real benefit of BEAM is that it can predict cases where people would perceive less decline, no decline, or even improvement. If you reverse biased exposure—that is, if people mainly hear about good things that other people are doing—you might get an illusion of moral improvement. We figured this could happen in people’s personal worlds: most people probably like most of the people they interact with on a daily basis, so they may mistakenly think those people have actually become kinder over time.

They do. In another study, we asked people to answer those same questions about interpersonal replacement and personal change that we asked in a previous study, first about people in general, and then about people that they interact with on a daily basis. When we asked participants about people in general, they said (a) people overall are less moral than they were in 2005, (b) the same people are less moral today than in 2005 (personal change) and (c) young people today are less moral than older people were in 2005 (interpersonal replacement). Just as they did before, participants told us that morality declined overall, and that both personal change and interpersonal replacement were to blame.

But we saw something new when we asked participants about people they know personally. First, they said individuals they’ve known for the past 15 years are more moral today. They said the young folks they know today aren’t as moral as the old folks they knew 15 years ago, but this difference was smaller than it was for people in general. So when you ask people about a group where they probably don’t have biased exposure—or at least not biased negative exposure—they report less moral decline, or even moral improvement.

The second thing that BEAM predicts is that if you turn off biased memory, the illusion of moral decline might go away. We figured this could happen if you asked people about times before they were born—you can’t have memories if you weren’t alive. We reran one of our previous studies, simply asking participants to rate people in general today, the year in which they turned 20, the year in which they were born, 20 years before that, and 40 years before that.

People said, basically, “moral decline began when I arrived on Earth.”

Neither of these studies means that BEAM is definitely the culprit behind the illusion of moral decline, nor that it’s the only culprit. But BEAM can explain some weird phenomena that other accounts can’t, and it can predict some data that other accounts wouldn’t, so it seems worth keeping around for now.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and TSMC. Holdings are subject to change at any time.
