
What We’re Reading (Week Ending 11 June 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 11 June 2023:

1. Real estate is China’s economic Achilles heel – Noah Smith

Painting with a broad brush, you could say that China shifted from an export-led economy to a domestic-investment-led economy after 2008. And the biggest chunk of that domestic investment, by far, was real estate.

Real estate development and its related industries (such as real estate finance) don’t just create places for Chinese people to live; they also create vast amounts of employment in the Chinese economy. That’s a big problem right now, because in the wake of the real estate crash that began in 2021, China’s unemployment has risen a lot — officially, unemployment for the 16-24 age group is now at 20.4%, compared to 6.5% in the U.S. Having a vast number of unemployed young people is a threat to both social stability and the future quality of the workforce, and it’s definitely something that’s worrying the Chinese government right now.

That real estate bust, by the way, is still going on, and — as you might expect for a sector so large — it’s weighing heavily on the rest of China’s economy. The overall narrative about China’s recovery in early 2023 has been recovery from the Zero Covid policies of late 2022 — growth was forecast to bounce back to a rapid 5.2% this year. But the most recent monthly economic data shows that the troubles are far from over. Here’s Bloomberg:

China’s economic recovery weakened in May, raising fresh fears about the growth outlook…Manufacturing activity contracted at a worse pace than in April, while services expansion eased, official data showed Wednesday, suggesting the post-Covid rebound had lost momentum…

A stronger recovery in China will also depend on a turnaround in the property market, which makes up about a fifth of the economy when including related sectors. Home sales have slowed after an initial rebound, while real estate developers continue to face financial troubles. 

It’s highly likely that underneath the headline-grabbing drama of Zero Covid, the real force dragging down China’s short-term growth is the general crisis in the real estate sector that began a year and a half ago. That crisis is still ongoing, with more defaults coming periodically. As Adam Wolfe reports in a detailed thread, residential real estate investment is falling:

And that’s in spite of the Chinese government’s frantic efforts to revive the sector. In the past, China was able to use real estate as a form of fiscal stimulus that cost the central government very little — the government just called up the state-controlled big banks and told them to lend more, and the banks lent to property developers. That stimulus came at the expense of long-term productivity growth (since real estate tends to have lower productivity growth than other sectors), but it did prevent China from experiencing recessions for a long time. With the current crash, though, that policy looks to have reached the end of its rope.

The fact is, China just doesn’t need that many more places to live. Even as of 2017 — six years ago! — China had already basically reached developed-country levels of living space per person.

As China built more and more, vacancy rates rose steadily in all big cities except for the four “Tier 1” cities (Beijing, Shanghai, Shenzhen, and Guangzhou). Overall, vacancy rates were significantly higher in China than in most rich countries:…

…In any country, property will be an important component of wealth, alongside stocks and bonds. But in China, with its underdeveloped stock and bond markets, almost all financial wealth is real estate:

From looking at house prices compared to incomes, it’s clear that much of Chinese real estate is bought as an investment property rather than for its value as a place to actually live (and yes, this is speculative bubble behavior). In San Francisco — America’s famously least affordable big city — a typical house costs 10 times the typical resident’s annual income. In Chinese cities this ratio is often much higher:…

…The biggest losers from the real estate bust, however, will probably be China’s local governments.

China’s local governments famously rely on land sales rather than on property taxes for most of their revenue. The real estate market is thus what allows local governments to both provide essential public services and to conduct local industrial policy — which, until the mid-2010s, was China’s main type of industrial policy…

…Xiong explains that this system has a bunch of advantages and disadvantages. On the plus side, buying land in a city is basically like buying equity in that city — if the city government can produce local growth, your land price goes up. So businesspeople and homeowners all become shareholders in the city, which aligns everybody’s incentives toward growth. On the downside, the system creates a ton of different structural incentives for local governments to borrow too much, and for private investors to over-invest in un-economical and risky real estate projects, and for banks to finance these projects too cheaply. In other words, the combination of the local government sector and the property sector is a big reason why real estate looms so much larger in China’s economy than in other countries, and a big reason why the sector got so bloated.

Ultimately, relying on land sales to finance local governments is a strategy that just has a natural time limit. Eventually you run out of valuable land to sell. China’s local governments look like they’re hitting that point, which is why they’re increasingly asking the central government for money. And the central government is stepping in to replace the revenue from the lost land sales:

This means that many of the advantages that China got from federalism and local experimentation and initiative during its amazing growth boom in the 90s, 00s, and early 2010s will now be forfeit. Industrial policy will increasingly be conducted from the center; Xi Jinping and his clique will be making a lot more of the decisions regarding who builds what where, instead of partnerships of local governments and businesspeople. The virtuous cycle where the property sector aligns the interests of local governments and businesses toward growth will now be weakened if not broken altogether in many places.

2. Post-war Germany’s lessons on inflation – Michael Fritzell

Costantino Bresciani-Turroni was an Italian economist who lived from 1882 to 1963. He’s famous for being an anti-fascist intellectual and a proponent of free-market economics.

But more importantly, he wrote a book called The Economics of Inflation, which is widely regarded as the definitive book on Germany’s experience with hyperinflation between 1919 and 1923…

…The First World War broke out on 28 July 1914 when Austria-Hungary declared war on Serbia following the assassination of Archduke Franz Ferdinand. Germany joined the Austria-Hungary coalition. On the other side, Russia, France, the UK and the US formed the Allied forces of World War 1.

Just three days after the start of the war, the German central bank (“the Reichsbank”) suspended the conversion of its notes to gold. The German currency (“the mark”) became paper money without any value anchor. It, therefore, became known as the “paper mark”, as opposed to the previous, gold-backed “gold mark”.

The reason behind the suspension was that the government knew that it would be unable to finance the war through tax revenue. Instead, the Reichsbank took it upon itself to print money to cover any deficit. And in the following four years, the Reichsbank routinely bought government bonds used to finance the budget deficit…

…From July 1914 to the end of the war in December 1918, Germany’s total government debt rose from 300 million marks to 55 billion marks. The war cost roughly 147 billion marks in total, and so more than 1/3 of it was financed through government borrowing, much of it financed through central bank support…

…The war ended on 11 November 1918 with a German surrender, driven by a new German civilian government. From then onwards, the exchange rate started to depreciate rapidly – faster than domestic prices and the volume of circulation.

There had been hopes that a German victory would lead to spoils of war that could alleviate the country’s debt burden. But once the government declared defeat, those hopes were crushed.

In the eight months after the war ended, the budget deficit reached 10 billion marks – an incredibly high number. When the Socialist Party took power in November 1918, it had neither the strength nor the ability to impose the taxes necessary to balance the budget.

The theory prevailing at the time in Germany was that the depreciation was caused by a deterioration in the balance of payments. But foreign voices, especially the British, believed that the depreciation of the currency was instead caused by an excessive budget deficit.

It’s possible that the holders of the mark feared heavy reparations payments and therefore sold the currency in anticipation of a coming crisis. The Treaty of Versailles was signed on 28 June 1919. The Treaty might have had a psychological influence on the German public, who feared that the government would resort to money printing to fund the deficit.

In reality, the budget deficits would have been high with or without the reparation payments. And the payments actually made under the Treaty of Versailles were not particularly onerous, representing only 1/3 of the total deficit between 1920 and 1922…

…Here is the exact process in which inflation pressures built up in the economy:

  1. The issuance of paper money caused the currency to depreciate as speculators used the newly issued money to buy foreign currency or cheaper foreign goods for import.
  2. After the currency depreciated, inflation picked up as imports – especially raw materials – became more expensive.
  3. Later on in the process, the newly printed money worked its way through the economy and eventually led to higher wages. But wages didn’t adjust immediately – instead, they adjusted with a long lag that caught the population off-guard.

There was a narrative early in the post-war era that a weaker currency would stimulate the economy. That was true, but only to a small extent. When the currency depreciated, companies saw their profit margins increase as selling prices adjusted quickly while wages took much longer to adjust. Companies then reinvested their profits and “fake prosperity” ensued.

Exports did particularly well since they were sold at higher foreign prices. Inbound tourism to Germany took off. Railway charges did not increase in proportion to the depreciation of the mark, so foreigners were able to enjoy cheap travel when they came to Germany. Pure labour arbitrage industries such as shipyards also did well as wages in Germany fell compared to foreign competitors.

Meanwhile, interest rates remained low. There was a kind of yield curve control in place, with the official discount rate fixed at 5% between 1915 and July 1922, even though inflation accelerated from 1919 onwards to incredible levels.

Instead of raising the interest rate when inflation picked up, the Reichsbank restricted credit instead, favouring certain borrowers over others. It continuously extended credits to private speculators, who proceeded to use these loans to buy foreign currency and profit from the depreciation of the mark. It’s unclear how these borrowers were selected. But they appear to have had a cosy relationship with the Reichsbank – to say the least…

…The hoarding of foreign exchange became more serious throughout 1922. German industrialists formed the habit of leaving the profits they made from exports overseas. Germans began to sell houses, land, securities – anything really – to get hold of foreign currency.

Eventually, Germans started using foreign exchange for their day-to-day transactions. Merchants began to set prices in the gold mark or foreign currency. While salaries were still paid in paper marks, wage earners would rush to buy goods as soon as they received the money. Or convert the money into foreign currency as soon as possible.

In February 1923, the Reichsbank tried to support the mark exchange rate artificially through foreign exchange operations. But continuous issuance of paper money caused inflation to continue, and by April, the dam finally broke with the mark being dumped at a record rate.

Workers came up with their own solutions to the inflation problem, with surcharges for the depreciation of the currency added onto wage contracts. Wages became tied to cost-of-living indices.


It was only in 1923 that hyperinflation got out of control. Taxes were inflated away to almost zero since they were paid with a long lag, and tax receipts ended up representing only 0.8% of government expenses. The rest of the government’s revenue came from printing money. By the end of 1923, 75% of all government bonds were held by the Reichsbank…

…On 15 October 1923, a new bank called the “Rentenbank” was created. This bank issued liabilities that were meant to be used as a substitute for the paper mark. Later that year, the value of the paper mark was stabilised at a rate of 4,200 billion paper marks to the US dollar. And one Rentenmark became equivalent to one gold mark.

The new Rentenmark wasn’t convertible into gold. But just the simple fact that the new money had a different name from the old instilled confidence. As Bresciani-Turroni explained:

“Because of the simple fact that the new paper money had a different name from the old, the public thought it was something different from the paper mark… the new money was accepted, despite the fact that it was an unconvertible paper currency.”

And so, when people stopped hoarding foreign currency, the velocity of circulation of paper marks declined. And the increased willingness to hold domestic currency reduced the inflation problem in and of itself. The Rentenmark ended up circulating together with the paper mark for almost a year.

The transition from hyperinflation to complete stability was sudden. The budget was re-established and expenses cut so that equilibrium was reached. The introduction of new taxes and reduced pressure in terms of reparation payments also helped. In 1924-25, the government finally achieved significant budget surpluses.

Counter-intuitively, a shortage of money emerged despite notes denominated in trillions of marks. The reason was that domestic prices had increased so much, and the depreciation was so severe, that there was not enough money to satisfy the volume of transactions at current prices.

This shortage was best measured through the concept of “real money supply” (the money supply deflated by the price level), which started shrinking from late 1923 onwards. The circulation of money in mid-1922 was 15-20 times its pre-war level, while prices had risen 40-50 times.
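As a rough illustration of that definition, here is a minimal Python sketch using the mid-points of the ranges quoted above (the specific index values are illustrative, not taken from the book):

```python
# "Real money supply" = nominal money supply deflated by the price level.
# Index values are relative to the pre-war level (1913 = 1.0) and are
# illustrative mid-points of the ranges quoted above.

nominal_money_index = 17.5   # circulation roughly 15-20x pre-war by mid-1922
price_index = 45.0           # prices roughly 40-50x pre-war by mid-1922

real_money_index = nominal_money_index / price_index
print(f"Real money supply vs pre-war: {real_money_index:.2f}x")
# Prints ~0.39x: even with the printing presses running flat out, the real
# value of the money in circulation had shrunk to well under half its
# pre-war level.
```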

The shortage of money in real terms led to the following outcomes:

  • Trade was arrested as companies could not gain access to working capital. Factories closed, and unemployment rose.
  • Interest rates increased, and heavily indebted individuals went bankrupt. At the end of 1923, the “call money” interest rate reached 30% per day.

The real money supply shrank so much that eventually, the entire money supply amounted to only 444 million gold marks, compared to a Reichsbank gold reserve of 1 billion gold marks. That enabled the Reichsbank, on 30 August 1924, to fix the conversion rate of the new Reichsmark at 1 trillion paper marks per Reichsmark. In other words, since the value of the money supply had dropped below the Reichsbank’s holdings of gold, it was easy to peg the currency to gold yet again.

After the new Rentenmark and Reichsmark were introduced, prices stopped rising, and the paper mark strengthened against gold. Factories re-opened, unemployment declined, and confidence revived.

3. Why AI Will Save the World – Marc Andreessen 

What AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence – and many others, from the creation of new medicines to ways to solve climate change to technologies to reach the stars – much, much better from here.

AI augmentation of human intelligence has already started – AI is already around us in the form of computer control systems of many kinds, is now rapidly escalating with AI Large Language Models like ChatGPT, and will accelerate very quickly from here – if we let it.

In our new era of AI:

  • Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.
  • Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.
  • Every scientist will have an AI assistant/collaborator/partner that will greatly expand their scope of scientific research and achievement. Every artist, every engineer, every businessperson, every doctor, every caregiver will have the same in their worlds.
  • Every leader of people – CEO, government official, nonprofit president, athletic coach, teacher – will have the same. The magnification effects of better decisions by leaders across the people they lead are enormous, so this intelligence augmentation may be the most important of all.
  • Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and resulting in a new era of heightened material prosperity across the planet.
  • Scientific breakthroughs and new technologies and medicines will dramatically expand, as AI helps us further decode the laws of nature and harvest them for our benefit.
  • The creative arts will enter a golden age, as AI-augmented artists, musicians, writers, and filmmakers gain the ability to realize their visions far faster and at greater scale than ever before.
  • I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.
  • In short, anything that people do with their natural intelligence today can be done much better with AI, and we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.
  • And this isn’t just about intelligence! Perhaps the most underestimated quality of AI is how humanizing it can be. AI art gives people who otherwise lack technical skills the freedom to create and share their artistic ideas. Talking to an empathetic AI friend really does improve their ability to handle adversity. And AI medical chatbots are already more empathetic than their human counterparts. Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer.

The stakes here are high. The opportunities are profound. AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips, and probably beyond those…

…My view is that the idea that AI will decide to literally kill humanity is a profound category error. AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.

In short, AI doesn’t want, it doesn’t have goals, it doesn’t want to kill you, because it’s not alive. And AI is a machine – it is not going to come alive any more than your toaster will.

Now, obviously, there are true believers in killer AI – Baptists – who are gaining a suddenly stratospheric amount of media coverage for their terrifying warnings, some of whom claim to have been studying the topic for decades and say they are now scared out of their minds by what they have learned. Some of these true believers are even actual innovators of the technology. These actors are arguing for a variety of bizarre and extreme restrictions on AI ranging from a ban on AI development, all the way up to military airstrikes on datacenters and nuclear war. They argue that because people like me cannot rule out future catastrophic consequences of AI, that we must assume a precautionary stance that may require large amounts of physical violence and death in order to prevent potential existential risk.

My response is that their position is non-scientific – What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from “You can’t prove it won’t happen!” In fact, these Baptists’ position is so non-scientific and so extreme – a conspiracy theory about math and code – and is already calling for physical violence, that I will do something I would normally not do and question their motives as well…

…This time, we finally have the technology that’s going to take all the jobs and render human workers superfluous – real AI. Surely this time history won’t repeat, and AI will cause mass unemployment – and not rapid economic, job, and wage growth – right?

No, that’s not going to happen – and in fact AI, if allowed to develop and proliferate throughout the economy, may cause the most dramatic and sustained economic boom of all time, with correspondingly record job and wage growth – the exact opposite of the fear. And here’s why.

The core mistake the automation-kills-jobs doomers keep making is called the Lump Of Labor Fallacy. This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time, and either machines do it or people do it – and if machines do it, there will be no work for people to do.

The Lump Of Labor Fallacy flows naturally from naive intuition, but naive intuition here is wrong. When technology is applied to production, we get productivity growth – an increase in output generated by a reduction in inputs. The result is lower prices for goods and services. As prices for goods and services fall, we pay less for them, meaning that we now have extra spending power with which to buy other things. This increases demand in the economy, which drives the creation of new production – including new products and new industries – which then creates new jobs for the people who were replaced by machines in prior jobs. The result is a larger economy with higher material prosperity, more industries, more products, and more jobs.

But the good news doesn’t stop there. We also get higher wages. This is because, at the level of the individual worker, the marketplace sets compensation as a function of the marginal productivity of the worker. A worker in a technology-infused business will be more productive than a worker in a traditional business. The employer will either pay that worker more money as he is now more productive, or another employer will, purely out of self interest. The result is that technology introduced into an industry generally not only increases the number of jobs in the industry but also raises wages…

…Speaking of Karl Marx, the concern about AI taking jobs segues directly into the next claimed AI risk, which is, OK, Marc, suppose AI does take all the jobs, either for bad or for good. Won’t that result in massive and crippling wealth inequality, as the owners of AI reap all the economic rewards and regular people get nothing?

As it happens, this was a central claim of Marxism, that the owners of the means of production – the bourgeoisie – would inevitably steal all societal wealth from the people who do the actual work – the proletariat. This is another fallacy that simply will not die no matter how often it’s disproved by reality. But let’s drive a stake through its heart anyway.

The flaw in this theory is that, as the owner of a piece of technology, it’s not in your own interest to keep it to yourself – in fact the opposite, it’s in your own interest to sell it to as many customers as possible. The largest market in the world for any product is the entire world, all 8 billion of us. And so in reality, every new technology – even ones that start by selling to the rarefied air of high-paying big companies or wealthy consumers – rapidly proliferates until it’s in the hands of the largest possible mass market, ultimately everyone on the planet…

…But you’ll notice what I slipped in there – I said we should focus first on preventing AI-assisted crimes before they happen – wouldn’t such prevention mean banning AI? Well, there’s another way to prevent such actions, and that’s by using AI as a defensive tool. The same capabilities that make AI dangerous in the hands of bad guys with bad goals make it powerful in the hands of good guys with good goals – specifically the good guys whose job it is to prevent bad things from happening.

For example, if you are worried about AI generating fake people and fake videos, the answer is to build new systems where people can verify themselves and real content via cryptographic signatures. Digital creation and alteration of both real and fake content was already here before AI; the answer is not to ban word processors and Photoshop – or AI – but to use technology to build a system that actually solves the problem.

And so, second, let’s mount major efforts to use AI for good, legitimate, defensive purposes. Let’s put AI to work in cyberdefense, in biological defense, in hunting terrorists, and in everything else that we do to keep ourselves, our communities, and our nation safe…

…China has a vastly different vision for AI than we do – they view it as a mechanism for authoritarian population control, full stop. They are not even being secretive about this, they are very clear about it, and they are already pursuing their agenda. And they do not intend to limit their AI strategy to China – they intend to proliferate it all across the world, everywhere they are powering 5G networks, everywhere they are loaning Belt And Road money, everywhere they are providing friendly consumer apps like Tiktok that serve as front ends to their centralized command and control AI.

The single greatest risk of AI is that China wins global AI dominance and we – the United States and the West – do not.

4. Apple Vision – Ben Thompson

This reality — pun intended — hits you the moment you finish setting up the device, which includes not only fitting the headset to your head and adding a prescription set of lenses, if necessary, but also setting up eye tracking (which I will get to in a moment). Once you have jumped through those hoops you are suddenly back where you started: looking at the room you are in with shockingly full fidelity.

What is happening is that Apple Vision is utilizing some number of its 12 cameras to capture the outside world, and displaying the result on the postage-stamp-sized screens in front of your eyes in a way that makes you feel like you are wearing safety goggles: you’re looking through something that isn’t exactly total clarity, but is of sufficiently high resolution and speed that there is no reason to think it’s not real.

The speed is essential: Apple claims that the threshold for your brain to notice any sort of delay between what you see and what your body expects you to see (which is what causes known VR issues like motion sickness) is 12 milliseconds, and that the Vision visual pipeline displays what it sees to your eyes in 12 milliseconds or less. This is particularly remarkable given that the time for the image sensor to capture and process what it is seeing is along the lines of 7~8 milliseconds, which is to say that the Vision is taking that captured image, processing it, and displaying it in front of your eyes in around 4 milliseconds…
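The latency budget implied by those numbers is simple to check; here is a minimal sketch using only the figures quoted above:

```python
# Pass-through latency budget, using the figures quoted above (milliseconds).
perception_threshold_ms = 12.0   # claimed threshold before a delay is noticeable
sensor_capture_ms = 7.5          # "along the lines of 7~8 milliseconds"

processing_and_display_ms = perception_threshold_ms - sensor_capture_ms
print(f"Budget left for processing and display: ~{processing_and_display_ms:.1f} ms")
# ~4.5 ms per frame, in line with the "around 4 milliseconds" cited above.
```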

…The key part here is the “real-time execution engine”; “real time” isn’t just a descriptor of the experience of using Vision Pro: it’s a term-of-art for a different kind of computing. Here’s how Wikipedia defines a real-time operating system:

A real-time operating system (RTOS) is an operating system (OS) for real-time computing applications that processes data and events that have critically defined time constraints. An RTOS is distinct from a time-sharing operating system, such as Unix, which manages the sharing of system resources with a scheduler, data buffers, or fixed task prioritization in a multitasking or multiprogramming environment. Processing time requirements need to be fully understood and bound rather than just kept as a minimum. All processing must occur within the defined constraints. Real-time operating systems are event-driven and preemptive, meaning the OS can monitor the relevant priority of competing tasks, and make changes to the task priority. Event-driven systems switch between tasks based on their priorities, while time-sharing systems switch the task based on clock interrupts…

… Notably, your fingers don’t need to be extended into space: the entire time I used the Vision Pro my hands were simply resting in my lap, their movement tracked by the Vision Pro’s cameras.

It’s astounding how well this works, and how natural it feels. What is particularly surprising is how high-resolution this UI is; look at this crop of a still from Apple’s presentation:

The bar at the bottom of Photos is how you “grab” Photos to move it anywhere (literally); the small circle next to the bar is to close the app. On the left are various menu items unique to Photos. What is notable about these is how small they are: this isn’t a user interface like iOS or iPadOS that has to accommodate big blunt fingers; rather, visionOS’s eye tracking is so accurate that it can easily delineate the exact user interface element you are looking at, which again, you trigger by simply touching your fingers together. It’s extraordinary, and works extraordinarily well…

…At the risk of over-indexing on my own experience, I am a huge fan of multiple monitors: I have four at my desk, and it is frustrating to be on the road right now typing this on a laptop screen. I would absolutely pay for a device to have a huge workspace with me anywhere I go, and while I will reserve judgment until I actually use a Vision Pro, I could see it being better at my desk as well…

…The keynote highlighted the movie watching experience of the Vision Pro, and it is excellent and immersive. Of course it isn’t, in the end, that much different than having an excellent TV in a dark room.

What was much more compelling were a series of immersive video experiences that Apple did not show in the keynote. The most striking to me were, unsurprisingly, sports. There was one clip of an NBA basketball game that was incredibly realistic: the game clip was shot from the baseline, and as someone who has had the good fortune to sit courtside, it felt exactly the same, and, it must be said, much more immersive than similar experiences on the Quest.

It turns out that one reason for the immersion is that Apple actually created its own cameras to capture the game using its new Apple Immersive Video Format. The company was fairly mum about how it planned to make those cameras and its format more widely available, but I am completely serious when I say that I would pay the NBA thousands of dollars to get a season pass to watch games captured in this way. Yes, that’s a crazy statement to make, but courtside seats cost that much or more, and that 10-second clip was shockingly close to the real thing…

…What was far more striking, though, was how the consumption of this video was presented in the keynote:

Note the empty house: what happened to the kids? Indeed, Apple actually went back to this clip while summarizing the keynote, and the line “for reliving memories” struck me as incredibly sad:

I’ll be honest: what this looked like to me was a divorced dad, alone at home with his Vision Pro, perhaps because his wife was irritated at the extent to which he got lost in his own virtual experience. That certainly puts a different spin on Apple’s proud declaration that the Vision Pro is “The Most Advanced Personal Electronics Device Ever”.

Indeed, this, even more than the iPhone, is the true personal computer. Yes, there are affordances like mixed reality and EyeSight to interact with those around you, but at the end of the day the Vision Pro is a solitary experience.

That, though, is the trend: long-time readers know that I have long bemoaned that it was the desktop computer that was christened the “personal” computer, given that the iPhone is much more personal, but now even the iPhone has been eclipsed. The arc of technology, in large part led by Apple, is for ever more personal experiences, and I’m not sure it’s an accident that that trend is happening at the same time as a society-wide trend away from family formation and towards an increase in loneliness.

This, I would note, is where the most interesting comparisons to Meta’s Quest efforts lie. The unfortunate reality for Meta is that they seem completely out-classed on the hardware front. Yes, Apple is working with a 7x advantage in price, which certainly contributes to things like superior resolution, but the deep integration between Apple’s own silicon and its custom-made operating system is going to be very difficult to replicate for a company that has (correctly) committed to an Android-based OS and a Qualcomm-designed chip.

What is more striking, though, is the extent to which Apple is leaning into a personal computing experience, whereas Meta, as you would expect, is focused on social. I do think that presence is a real thing, and incredibly compelling, but achieving presence depends on your network also having VR devices, which makes Meta’s goals that much more difficult to achieve. Apple, meanwhile, isn’t even bothering with presence: even its Facetime integration was with an avatar in a window, leaning into the fact you are apart, whereas Meta wants you to feel like you are together.

In other words, there is actually a reason to hope that Meta might win: it seems like we could all do with more connectedness, and less isolation with incredible immersive experiences to dull the pain of loneliness. One wonders, though, if Meta is in fact fighting Apple not just on hardware, but on the overall trend of society; to put it another way, bullishness about the Vision Pro may in fact be a function of being bearish about our capability to meaningfully connect.

5. SITALWeek #398 – Brad Slingerlend

Uber Eats will be rolling out up to 2,000 four-wheeled sidewalk robots for meal delivery. Serve, the Level 4 Autonomous delivery bot manufacturer, notes there are already 200 such robots delivering food in LA. Venture capital is pouring into the robotics market, especially for humanoid bipedal and quadrupedal forms. Serve has previously raised capital from Nvidia, Figure just raised $70M for their general-purpose bipedal robot, and, thanks to VC infusions, Sanctuary AI recently unveiled its Phoenix humanoid. General-purpose robots with embedded AI could far exceed the impact that AI has in the purely digital realm, but with a much larger array of potential outcomes…

…This Lex Fridman podcast interview with the director of the MIT Center for Bits and Atoms, Neil Gershenfeld, is packed with insight on computing, AI, and biology. I knew of Gershenfeld because he stumbled into inventing the airbag seat sensor while working on an apparatus for a magic trick in the 1990s. Given the density of knowledge Gershenfeld has, you have to sometimes pause in order to process what he’s saying, but if you can make it to the last quarter of the podcast, I think you’ll see the payoff. One of his more revelatory conclusions is that the advancements from the current wave of AI innovation are now essentially behind us, and its future impact is somewhat predictable. What he means by that conclusion is that we have reached the point where AI can simulate the human brain; therefore, these new systems will be able to do anything a human can do. Meanwhile, humans will also keep doing things humans can do despite AI subsuming a lot of human tasks. Gershenfeld also explains the far bigger disruption will be when AI is embodied in all sorts of objects down to the molecular level. The three minutes starting at this point are particularly insightful. Gershenfeld estimates that embodied human intelligence is eight orders of magnitude more powerful than a human brain on its own. I believe this means we will see far more emergent, unpredictable behaviors from embodied AI than AI running on servers. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple and Meta Platforms. Holdings are subject to change at any time.

Takeaways From Silicon Valley Bank’s Collapse

The collapse of Silicon Valley Bank, or SVB, is a great reminder for investors to always be prepared for the unexpected.

March 2023 was a tumultuous month in the world of finance. On 8 March, Silicon Valley Bank, the 16th largest bank in the USA with US$209 billion in assets at the end of 2022, reported that it would incur a US$1.8 billion loss after it sold some of its assets to meet deposit withdrawals. Just two days later, on 10 March, banking regulators seized control of the bank, marking its effective collapse. It turned out that Silicon Valley Bank, or SVB, had faced US$42 billion in deposit withdrawals, representing nearly a quarter of its deposit base at the end of 2022, in just one day on 9 March.

SVB had failed because of a classic bank run. At a simplified level, banking involves taking in deposits and distributing the capital as loans to borrowers. A bank’s assets (what it owns) are the loans it has doled out, and its liabilities (what it owes) are deposits from depositors. When depositors withdraw their deposits, a bank has to return cash to them. Often, depositors can withdraw their deposits at short notice, whereas a bank can’t easily convert its loans into ready cash quickly. So when a large group of depositors ask for their money back, it’s difficult for a bank to meet the withdrawals – that’s when a bank run happens.

When SVB was initially taken over by regulators, there was no guarantee that the bank’s depositors would be made whole. Official confirmation that the money of SVB’s depositors would be fully protected was only given a few days later. In the leadup to and in the aftermath of SVB’s fall, there was a palpable fear among stock market participants that a systemic bank run could happen within the US banking sector. The Invesco KBW Regional Banking ETF, an exchange-traded fund tracking the KBW Nasdaq Regional Banking Index, which comprises publicly listed US regional banks and thrifts, fell by 21% in March 2023. The stock price of First Republic Bank, ranked 14th in America with US$212 billion in assets at the end of 2022, cratered by 89% in the same month. For context, the S&P 500 was up by 3.5%.

SVB was not the only US bank that failed in March 2023. Two other US banks, Silvergate Bank and Signature Bank, did too. There was also contagion beyond the USA. On 19 March, Credit Suisse, a Switzerland-based bank with CHF 531 billion in assets (around US$575 billion) at the end of 2022, was forced by its country’s regulators to agree to be acquired by its national peer, UBS, for just over US$3 billion; two days prior, on 17 March, Credit Suisse had a market capitalization of US$8.6 billion. Going back to the start of 2023, I don’t think it was in anyone’s predictions for the year that banks of significant size in the USA would fail (Signature Bank had US$110 billion in assets at the end of 2022) or that the 167-year-old Credit Suisse would be absorbed by another bank for a relative pittance. These events are a sound reminder of a belief I have about investing: Bad scenarios inevitably happen from time to time, but I just don’t know when. To cope with this uncertainty, I choose to invest in companies that I think have both bright growth prospects in peaceful conditions and a high likelihood of making it through a crisis either relatively unscathed or in even better shape than before.

The SVB bank run is also an example of an important aspect of how I invest: Why I shun forecasts. SVB’s run was different from past bank runs. Jerome Powell, chair of the Federal Reserve, said in a 22 March speech (emphasis is mine):

“The speed of the run [on SVB], it’s very different from what we’ve seen in the past and it does kind of suggest that there’s a need for possible regulatory and supervisory changes just because supervision and regulation need to keep up with what’s happening in the world.”

There are suggestions from observers of financial markets that the run on SVB could happen at such breakneck speed – US$42 billion of deposits, which is nearly a quarter of the bank’s deposit base, withdrawn in one day – because of the existence of mobile devices and internet banking. I agree. Bank runs of old would have involved people physically waiting in line at bank branches to withdraw their money. Outflow of deposits would thus take a relatively longer time. Now it can happen in the time it takes to tap a smartphone. In 2014, author James Surowiecki reviewed Walter Friedman’s book on the folly of economic forecasting titled Fortune Tellers. In his review, Surowiecki wrote (emphasis is mine):

“The failure of forecasting is also due to the limits of learning from history. The models forecasters use are all built, to one degree or another, on the notion that historical patterns recur, and that the past can be a guide to the future. The problem is that some of the most economically consequential events are precisely those that haven’t happened before. Think of the oil crisis of the 1970s, or the fall of the Soviet Union, or, most important, China’s decision to embrace (in its way) capitalism and open itself to the West. Or think of the housing bubble. Many of the forecasting models that the banks relied on assumed that housing prices could never fall, on a national basis, as steeply as they did, because they had never fallen so steeply before. But of course they had also never risen so steeply before, which made the models effectively useless.”

There is great truth in something writer Kelly Hayes once said: “Everything feels unprecedented when you haven’t engaged with history.” SVB’s failure can easily feel epochal to some investors, since it was one of the largest banks in America when it fell. But it was actually just 15 years ago, in 2008, when the largest bank failure in the USA – a record that still holds – happened. The culprit, Washington Mutual, had US$307 billion in assets at the time. In fact, bank failures are not even a rare occurrence in the USA. From 2001 to the end of March 2023, there were 563 such incidents. But Hayes’s wise quote misses an important fact about life: Things that have never happened before do happen. Such is the case when it came to the speed of SVB’s bank run. For context, Washington Mutual crumbled after a total of US$16.7 billion in deposits – less than 10% of its total deposit base – fled over 10 days.
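As a rough comparison of the two runs, here is a small sketch using only the figures quoted in this article; “nearly a quarter” and “less than 10%” are approximated as 24% and 10%, so treat the output as an order-of-magnitude estimate:

```python
# Rough comparison of deposit-run speed: SVB (March 2023) vs Washington
# Mutual (2008), using the shares of the deposit base quoted above.

svb_share, svb_days = 0.24, 1      # ~US$42B, "nearly a quarter", in one day
wamu_share, wamu_days = 0.10, 10   # ~US$16.7B, "less than 10%", over 10 days

svb_rate = svb_share / svb_days    # share of the deposit base lost per day
wamu_rate = wamu_share / wamu_days

print(f"SVB:  ~{svb_rate:.0%} of deposits per day")
print(f"WaMu: ~{wamu_rate:.0%} of deposits per day")
print(f"SVB's run was roughly {svb_rate / wamu_rate:.0f}x faster")
# ~24% vs ~1% per day - a run roughly 24 times faster than Washington Mutual's.
```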

I have also seen that unprecedented things do happen with alarming regularity. It was just three years ago, in April 2020, when the price of oil went negative for the first time in history. When investing, I have kept – and always will keep – this in mind. I also know that I am unable to predict what these unprecedented events could look like, but I am sure that they are bound to happen. To deal with these, I fall back on what I shared earlier:

“To cope with this uncertainty, I choose to invest in companies that I think have both bright growth prospects in peaceful conditions and a high likelihood of making it through a crisis either relatively unscathed or in even better shape than before.”

I think such companies carry the following traits, which I have long looked for in my investing activities:

  1. Revenues that are small in relation to a large and/or growing market, or revenues that are large in a fast-growing market 
  2. Strong balance sheets with minimal or reasonable levels of debt
  3. Management teams with integrity, capability, and an innovative mindset
  4. Revenue streams that are recurring in nature, either through contracts or customer behaviour
  5. A proven ability to grow
  6. A high likelihood of generating a strong and growing stream of free cash flow in the future

These traits interplay with each other to produce companies I believe to be antifragile. I first came across the concept of antifragility – referring to something that strengthens when exposed to non-lethal stress – in Nassim Nicholas Taleb’s book, Antifragile. Antifragility is an important concept for the way I invest. As I mentioned earlier, I operate on the basis that bad things will happen from time to time – to economies, industries, and companies – but I just don’t know how and when. As such, I am keen to own shares in antifragile companies, the ones which can thrive during chaos. This is why the strength of a company’s balance sheet is an important investment criterion for me – having a strong balance sheet increases the chance that a company can survive or even thrive in rough seas. But a company’s antifragility goes beyond its financial numbers. It can also be found in how the company is run, which in turn stems from the mindset of its leader.

It’s crucial to learn from history, as Hayes’s quote suggests. But it’s also important to recognise that the future will not fully resemble the past. Forecasts tend to fail because there are limits to learning from history, which is why I shun them. In a world where unprecedented things can and do happen, I am prepared for the unexpected.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 04 June 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 04 June 2023:

1. Some Things We’ve Learned This Year – Ben Carlson

Tech stocks don’t need lower rates to go up. Tech stocks got crushed last year with the Nasdaq 100 falling more than 30%. The Fed raised interest rates from 0% to more than 4% so that didn’t help long-duration assets like growth stocks.

But there was this theory many people latched onto that tech stocks were only a rates play. In the 2010s and early-2020s rates were on the floor while tech stocks went bananas so it seemed apparent that there was an inverse relationship. When rates were lower tech stocks would do well and when rates were higher tech stocks would do poorly.

However, this year the Fed has now taken rates over 5% and could continue raising rates one, maybe two more times before all is said and done. Meanwhile, the Nasdaq 100 is up more than 30% in 2023.

Does this mean easy money had nothing to do with tech stock gains? I wouldn’t go that far. Low rates certainly helped long-duration assets. But low rates alone didn’t cause Apple to increase sales from $170 billion to nearly $400 billion in 10 years. Low rates have nothing to do with the AI speculation currently taking place with NVIDIA shares.

Interest rates are an important variable when it comes to the markets and economy. But rates alone don’t tell you the whole story when it comes to where people put their money. Tech stocks were also a fundamental play on innovations that have now become an integral part of all our lives…

Higher rates and inflation don’t guarantee poor stock market returns. There are a lot of market/econ people who think we could be in a new regime of higher rates and higher inflation. It’s a possibility worth considering. Many of those same people assume this will be a bad thing for markets. After all, the past 40+ years of financial market returns are all a product of disinflation and falling rates, right? Right?

Not so fast. These are the average annual returns for the U.S. stock market over a 40 year period of rising inflation and interest rates:

  • 1940-1979: 10.3% per year

And these are the average annual returns for the U.S. stock market over a 40 year period of falling inflation and interest rates:

  • 1980-2019: 11.7% per year

The results are surprising. Things were better during the 1980-2019 period, but not by as much as one would think. I don’t know if we are entering a new regime of higher rates and inflation. But if we are, it doesn’t necessarily mean the stock market is doomed.
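To see what those averages mean in compounded terms, here is a minimal sketch using the two figures above (treating them as nominal annual returns and ignoring taxes, costs, and the different inflation rates across the two periods):

```python
# Growth of $1 over each 40-year regime, using the average annual returns above.

rising_rate_return = 0.103    # 1940-1979 (rising inflation and rates)
falling_rate_return = 0.117   # 1980-2019 (falling inflation and rates)
years = 40

rising_terminal = (1 + rising_rate_return) ** years
falling_terminal = (1 + falling_rate_return) ** years

print(f"1940-1979: $1 grows to about ${rising_terminal:.0f}")
print(f"1980-2019: $1 grows to about ${falling_terminal:.0f}")
# Roughly $50 vs $84: a meaningful gap once compounded over four decades,
# but both regimes still delivered double-digit average annual returns.
```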

2. Private Equity Fundamentals – Daniel Rasmussen and Chris Satterthwaite

But we can look at the subset of PE-owned companies that are either publicly listed or have issued public debt as a partial reflection of what’s currently going on in the opaque but important asset class. And we can use this data to understand what’s happening to revenue, EBITDA, and debt generally across private portfolios.

We took a look at all PE/VC-owned public companies, or companies with public debt, that were 30%+ sponsor-owned, had IPOed since 2018, had a recognizable sponsor as the largest holder, and were headquartered in North America. There were 350 companies that met these criteria; the public equities are worth a combined $385B, and we estimate the companies with public debt are worth another $360B of equity, comprising $750B or 6.5% of the total private equity AUM of $11.7T. Notably, the sample of public equities is roughly 40% tech, which is a significant industry bet, and consistent with our previous estimates of private equity industry exposure…

… We looked at both pro-forma EBITDA, which 50% of the companies in our sample reported, and at GAAP EBITDA. We see below that PE-backed companies in our sample had significantly lower EBITDA margins than the S&P 500, especially on a GAAP basis, and have seen significant margin compression over the past few years. GAAP EBITDA is, perhaps unsurprisingly, much lower than adjusted EBITDA.

Rising SG&A costs have left the median company barely EBITDA profitable on a GAAP basis. 55% of the PE-backed firms in our sample were free cash flow negative in 2022, and 67% added debt over the last 12 months…

…As a group, these companies have a median leverage of 4.9x, which is roughly the ratio of the average B-rated company. However, this includes many overcapitalized VC-backed companies, which are difficult to parse out from the private equity LBOs. When we look at only those with net debt, the median leverage increases to 8.8x, which would put the median LBO well into CCC credit rating (for context, the median leverage for the S&P 500 is 1.7x).

Interest rates rose over 500bps in 2022, but much of that increase is still not reflected in the 2022 reported figures. The cost of loans has soared recently: a $1B loan for a junk-rated company now averages 12%, up from around a 7.5% average in 2021, according to Reuters…
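To see why that jump in loan costs matters for the levered companies described above, here is a rough sketch using the median 8.8x net-debt-to-EBITDA figure and the quoted loan costs; it assumes all debt is floating-rate and ignores hedges, cash interest income, and pro-forma adjustments, so it is only an order-of-magnitude illustration:

```python
# Rough interest-burden sketch for a company levered at the sample's median
# 8.8x net debt / EBITDA, at the 2021 (~7.5%) and current (~12%) loan costs.
# Assumes all debt is floating-rate; real capital structures will differ.

ebitda = 100.0              # arbitrary units of EBITDA
net_debt = 8.8 * ebitda     # median leverage in the sample

for rate in (0.075, 0.12):
    interest = net_debt * rate
    coverage = ebitda / interest
    print(f"loan cost {rate:.1%}: interest = {interest:.0f}, "
          f"EBITDA/interest = {coverage:.2f}x")
# At ~12%, annual interest (~106) exceeds EBITDA (100), i.e. coverage falls
# below 1x - which is why the repricing of floating-rate debt is so painful.
```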

…The sample of companies we looked at is nearly unprofitable on an EBITDA basis, mostly cash flow negative, and extraordinarily leveraged (mostly with floating-rate debt that is now costing nearly 12%). These companies trade at a dramatic premium to public markets on a GAAP basis, only reaching comparability after massive amounts of pro-forma adjustments. And these are the companies that most likely reflect the better outcomes in private equity. The market and SPAC boom of 2021 presented a window for private equity and venture capital firms to take companies public, and private investors took public what they thought they could. Presumably, what remains in the portfolios was what could not be taken public.

3. Olivine weathering – Campbell Nilsen

When the term ‘carbon sequestration’ comes up, most people think of trees: purchase a carbon credit when booking a flight and, more likely than not, you’ve paid someone to plant a sapling somewhere.

Unfortunately, tree planting has serious disadvantages. Most significantly, its space requirements are immense. To reduce atmospheric CO₂ (currently about 418 ppm) by 100 ppm, within striking distance of the 280 ppm found in preindustrial times, you’d need to convert 900 million hectares to mature forest (an area about 94 percent the size of mainland China and 85 percent the size of Europe).
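A back-of-envelope check of that land-area figure is possible with two assumed constants that are not in the article: roughly 2.13 gigatonnes of atmospheric carbon per ppm of CO₂ (a standard conversion) and on the order of 235 tonnes of carbon stored per hectare of mature forest including soils (forest carbon densities vary widely by biome, so this is a loose assumption):

```python
# Back-of-envelope check of the ~900 million hectare figure.
# Assumed constants (not from the article): ~2.13 GtC of atmospheric carbon
# per ppm of CO2, and ~235 tonnes of carbon stored per hectare of mature
# forest including soil carbon.

gtc_per_ppm = 2.13            # gigatonnes of carbon per ppm of CO2
carbon_per_hectare_t = 235.0  # tonnes of carbon per hectare (assumed)

ppm_to_remove = 100
carbon_to_store_t = ppm_to_remove * gtc_per_ppm * 1e9   # tonnes of carbon

hectares_needed = carbon_to_store_t / carbon_per_hectare_t
print(f"Forest area needed: ~{hectares_needed / 1e6:.0f} million hectares")
# ~906 million hectares, consistent with the article's ~900 Mha estimate.
```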

Even if that were possible, mature forests (which sequester more carbon in their soil than in their trees) take a long time to grow, and much if not most of the land available for reforestation is held by private actors, which creates significant political difficulties.

More promising solutions for direct-air capture are likely to come from chemistry rather than biology. Several companies have broken ground in this field, such as Climeworks, Carbon Engineering and 1PointFive. All use a reusable sorbent, a chemical that reacts with CO₂ in the air and then releases it when energy is supplied (usually when it’s heated up). The captured, concentrated CO₂ is then pumped underground, where it is permanently trapped in geological formations in its gaseous, pressurized form, or mineralized into stable carbonates via reactions with the surrounding rock.

Sorbent-based direct-air capture is not a new idea, and is already used on space stations to moderate CO₂ levels. Like space applications, Climeworks uses an amine sorbent, which releases its captured CO₂ at a relatively low temperature (about 100°C). Unfortunately, amine-based sorbents are extraordinarily expensive – a study on the economics of amine-based sorbents published last year concluded that each tonne of CO₂ captured would incur hundreds of dollars merely in capital expenditure costs for the sorbent. Energy costs are not trivial, either: each tonne sequestered requires no less than 150 kilowatt-hours (kWh).

It is no coincidence that Climeworks operates in Iceland, because its active geology gives Climeworks access to ample carbon-free geothermal and hydro electricity at a very low cost. Even then, Climeworks currently charges €1,000 per tonne of CO₂ sequestered; its eventual goal is €600 a tonne. For comparison, the social cost of each additional tonne of CO₂ is currently thought to be somewhere around $185 (about €170 as of the time of writing), though getting an exact figure is devilishly tricky and the error bars are wide.

1PointFive and Carbon Engineering use potassium hydroxide as the sorbent, which is much cheaper than Climeworks’s amines, but the energy costs are almost as large. To regenerate potassium hydroxide, both companies use a process which includes heating a calciner (steel cylinder) up to 900°C. For Carbon Engineering, the cost of producing a concentrated stream of CO₂ was about $100-$200 a tonne as of 2018, not counting the cost of long-term sequestration.

Ultimately, solutions based on reusable sorbents suffer from a key drawback: once carbon dioxide has been absorbed in a chemical reaction, the resulting compound usually won’t give it back up in purified form unless lots of energy is added to the system. Moreover, sorbent-based processes merely produce a concentrated stream of CO₂, which must be stored (usually underground) or used.

This is easy for the first few thousand or even a million tonnes; for billions or trillions of tonnes, the logistics become nightmarish (though possible). Capturing a trillion tonnes of CO₂ (only 40 percent of humanity’s cumulative carbon emissions) via this process would require about eight times the world’s total yearly energy consumption merely to run the calciners. It could be a small useful addition to our carbon mitigation strategy, but it’s unlikely to help us roll back to a preindustrial environment.

If carbon capture with reusable sorbents is astronomically costly, at least for the time being, could we use a non-regenerating sorbent – something that absorbs CO₂ and locks it away for good?

There is a trade-off here. While we’d save the energy costs of cycling the sorbent and storing gaseous CO₂, we’d also need to produce and store truly massive amounts of sorbent. The alternatives would have to be easily available or cheaply manufactured in vast quantities; and because of the storage requirements (reaching into the trillions of tonnes) the compound would need to be non-toxic and environmentally inert. Processing the substance should require relatively little energy, and its reaction with ambient CO₂ needs to operate quickly.

The idea that silicate minerals might be able to fill this role is not, in and of itself, a new one; the earliest proposal of which I am aware is a three-paragraph letter to the editor in a 1990 issue of Nature, proposing that pressurized CO₂ be pumped into a container of water and silicates; five years later, the journal Energy published a somewhat longer outline for carbon sequestration using several intermediate steps. Neither idea went terribly far; popular activism focused on reducing emissions rather than sequestering them, and ideas published in academic journals remained mostly of academic interest.

In 2007, however, the Dutch press began entertaining a rather more sensational idea: the Netherlands’s, and perhaps the world’s, carbon emissions could be effectively and cheaply offset by spreading huge amounts of ground olivine rock – a commonly found, mostly worthless silicate rock composed mainly of forsterite, Mg₂SiO₄ – onto the shores of the North Sea, producing mile after aesthetically intriguing mile of green sand beaches as a side effect. The author of the proposal, Olaf Schuiling, envisioned repurposing thousands of tankers and trucks to ship ground rock from mines in Norway, covering the coast of the North Sea with shimmering golden-green sand and saving the human race from the consequences of the Industrial Revolution.

It seemed too good to be true – so in 2009 the geoscientists Suzanne Hangx and Chris Spiers published a rebuttal. While it was true that ground forsterite has significant sequestration potential on paper (each tonne of forsterite ultimately sequestering 1.25 tonnes of CO₂), Hangx and Spiers concluded that the logistics of Schuiling’s proposal would make the project an unworkable boondoggle.
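
The 1.25-tonne figure follows directly from the stoichiometry of forsterite weathering to bicarbonate, which is a quick check to run:

```python
# Sanity check of the ~1.25 t CO2 per tonne of forsterite figure.
# Weathering reaction (to bicarbonate):
#   Mg2SiO4 + 4 CO2 + 4 H2O -> 2 Mg(2+) + 4 HCO3(-) + H4SiO4
M_forsterite = 2 * 24.305 + 28.086 + 4 * 15.999   # ~140.7 g/mol
M_co2 = 12.011 + 2 * 15.999                       # ~44.0 g/mol

co2_per_tonne_olivine = 4 * M_co2 / M_forsterite
print(f"{co2_per_tonne_olivine:.2f} t CO2 sequestered per tonne of forsterite")
# -> 1.25, matching the figure Hangx and Spiers use.
```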

Start with transport requirements. For the past two decades, the Netherlands has emitted about 170 megatonnes of CO₂ a year on average; each year, around 136 megatonnes of olivine would be needed to sequester Dutch emissions in full. The nearest major olivine mine, Gusdal, is located in Norway, around a thousand kilometers away. Transporting the required olivine by sea with the most commonly used cargo ship (the $150 million Handysize vessel, with a capacity of about 25 kilotonnes), for example, would require over 100 trips a week – five percent of the world’s Handysize fleet – further clogging some of the world’s busiest waters for shipping. And that’s just for the Netherlands, which is only responsible for about 0.5 percent of the world’s carbon emissions.
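
The shipping maths is simple division, reproduced here with the figures quoted above:

```python
# Re-deriving the shipping numbers in the passage above.
nl_co2_per_year = 170e6                    # tonnes of CO2 emitted by the Netherlands per year
olivine_needed = nl_co2_per_year / 1.25    # tonnes of olivine required (~136 Mt)
handysize_capacity = 25e3                  # tonnes per Handysize voyage

trips_per_year = olivine_needed / handysize_capacity
print(f"Olivine needed: {olivine_needed / 1e6:.0f} Mt/year")
print(f"Voyages needed: {trips_per_year:.0f}/year, or {trips_per_year / 52:.0f} per week")
# -> roughly 5,400 voyages a year, i.e. over 100 per week, as claimed.
```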

Then there’s the environmental angle. While forsterite on its own is harmless, olivine usually contains trace amounts of other minerals and heavy metals, most prominently nickel, whose effect on marine life, while understudied, is known to be less than benign.

But the real Achilles heels of the Schuiling proposal were matters of physics. The rate of rock weathering is, to a first approximation, a function of three variables: the concentration of CO₂ in the water, the ambient temperature, and (most importantly by far) particle size. While CO₂ concentration in surface ocean water is about the same everywhere, temperature is not: sequestration by forsterite is about three times faster at 25°C (the approximate water temperature off the coast of Miami) than at 15°C (the average in the North Sea). But there’s another problem: olivine particles need to be extremely small to weather effectively. Hangx and Spiers estimated that olivine particles 300 microns in diameter (the average size of a grain of beach sand) would take about 144 years to finish half their potential sequestration, and seven centuries to react completely…

…But what if the problems with Schuiling’s idea were in the execution, not the concept? The Intergovernmental Panel on Climate Change (or IPCC), the world’s most authoritative body on the problem, takes the climate and atmosphere of 1750 – when the atmosphere was about 280 ppm CO₂ – as its starting point. What would it take to return to this point?

Since that time, humanity has pumped a little over two trillion tonnes of CO₂ into the atmosphere, which would require about 1.6 trillion tonnes of raw olivine to sequester. You can imagine this as a cube measuring about eight kilometers or five miles on each side. Luckily for us, sources of high-quality olivine are fairly common, bordering on ubiquitous; and because it’s not (yet) very economically valuable, most deposits haven’t been thoroughly mapped. Assuming we’re simply trying to speed up natural processes, the end destination for the olivine will likely be the ocean.
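
A quick reconstruction of that cube, using the emissions and sequestration figures above; the olivine density of roughly 3.3 tonnes per cubic metre is my own assumption:

```python
# How big is 1.6 trillion tonnes of olivine?
co2_since_1750 = 2.0e12                     # tonnes of CO2 emitted since preindustrial times
olivine_required = co2_since_1750 / 1.25    # ~1.6 trillion tonnes of olivine
density = 3.3                               # tonnes per cubic metre (assumed olivine density)

volume_m3 = olivine_required / density
side_km = volume_m3 ** (1 / 3) / 1000
print(f"Cube side: {side_km:.1f} km")       # -> ~7.9 km, about eight kilometres / five miles
```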

Rock weathering takes place only where the rock is exposed to the elements; a gigantic pile of olivine is only as good as its surface area, and the only way to increase surface area is to break the rock into smaller particles. If you halve the size of your particles, the surface area available is doubled at worst, and you sequester carbon at least twice as quickly (the exact proportion will depend on how many cracks and crevices there are in the breakage – the more jagged the particles, the more surface area and the faster sequestration proceeds). To get back to preindustrial concentrations on a time scale of decades, we’d want to process a lot of olivine and break it down into very small particles – not sand, which (with diameters in the hundreds of microns) is too large, but silt (with diameters in the 10-50 micron range).
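
A minimal geometric sketch of why particle size matters so much, assuming idealised spherical grains and the same olivine density as before:

```python
# Surface area per unit mass scales inversely with particle diameter.
# For an idealised spherical particle of diameter d and density rho:
#   area / mass = (pi * d**2) / (rho * pi * d**3 / 6) = 6 / (rho * d)
# so halving d doubles the reactive surface per tonne; jagged, cracked
# particles only add to this.
def specific_surface(d_microns, rho=3300):    # rho in kg/m3 (assumed olivine density)
    d = d_microns * 1e-6
    return 6 / (rho * d)                      # m2 of surface per kg of rock

for d in (300, 150, 37, 10):
    print(f"{d:>3} micron grains: {specific_surface(d):7.2f} m2/kg")
# 300-micron sand has ~6 m2 of surface per kg; 10-micron silt has thirty times more.
```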

What would it take to start making a serious dent in atmospheric CO₂? Say we shot for 80 gigatonnes of olivine a year, which would lock away 100 gigatonnes of CO₂ once fully weathered. Unlike many proposals for carbon sequestration, olivine intervention is not contingent on undiscovered or nascent technology. Let’s take a look at the process through the lens of an increasingly small grain of rock.

Our particle of olivine would begin its journey on a morning much like every other over the past hundreds of millions of years; it is part of a large deposit in the hills of Sulawesi, a fifteen-minute drive from the coast. (Indonesia is particularly well-suited for processing due to its vast expanse of shallow, tropical seas, but the ubiquity of olivine formations means that sequestration could happen in any number of places.)

This particular morning, however, is different. A mining worker has drilled a hole into the exposed surface of the formation, inserted a blasting cap, and – with a loud bang – smashed another fraction of the rock into pieces small enough to be carried by an excavator. The largest excavators in common use, which cost a bit under two million dollars each, can load about 70 tonnes at a time – a small, but important, fraction of the 220 megatonnes or so the world would need to process that day. Each of several hundred excavators takes no more than a minute or so to load up, complete a full trip to the haul truck, and come back to the front lines. It’s probably cheapest to run it, and the rest of the mining equipment, on diesel; even though it guzzles nearly 200 liters (50 gallons) an hour, the rock it carries will repay its five-tonne-a-day CO₂ footprint tens of thousands of times over.

Our grain of olivine (now part of a chunk the size of a briefcase) is off on a quick trip to the main processing facility in one of a few thousand haul trucks (each costing nearly five million dollars and carrying up to 400 tonnes at a time), where it’s subjected to a thorough pummeling until it’s reached pebble size. Then it’s off to a succession of rock mills to grind it down to the minuscule size needed for it to weather quickly.

It’s a good idea, at this point, to talk a bit about the main costs involved in such an immense proposal. As a rule of thumb, the smaller you want your end particles to be, the more expensive it is to get them there. Once a suitable olivine formation has been located, quarrying rock out of the formation is cheap. Even in high-income countries like Australia or Canada where mine workers make top-notch salaries, the cost of quarrying rock and crushing it down to gravel size is generally on the order of two to three dollars a tonne, and it requires very little energy. Since reversing global warming would entail the biggest quarrying operation in history, we might well expect costs to drop further. 

Depending on the deposit, haul trucks might prove unnecessary; it may be most cost-effective to have the crusher and mills follow the front lines. The wonderful thing about paying people to mill rocks is that we don’t have to know for sure from our armchair; the engineers tasked with keeping expenses to a minimum will figure it out as they go.

What is quite certain is that the vast majority of that expense, both financially and in terms of energy, comes not from mining or crushing but from milling the crushed rock down to particle size. Hangx and Spiers (the olivine skeptics above) estimated milling costs for end particles of various sizes; while sand-sized grains (300 microns across) required around eight kWh of energy per tonne of olivine processed, grains with a diameter of 37 microns were projected to need nearly three times as much energy input, and ten-micron grains a whopping 174 kWh per tonne. Since wholesale electricity prices worldwide are about 15 cents per kWh, that implies an energy cost of around $26 per tonne of olivine, or about $20 per tonne sequestered – at least $1.2 trillion a year, in other words, and a ten percent increase in the world’s electricity consumption. Can we do any better?

We probably can; it matters a lot, it turns out, what kind of rock mill you use. For example, while Hangx and Spiers assumed the use of a stirred media detritor (SMD) mill for the ten-micron silt, other researchers showed that a wet-attrition miller (WAM), working on equal amounts of rock and water, could achieve an average particle size of under four microns for an all-inclusive energy cost of 61 kWh ($9.15) per tonne of rock – about $7.32 per sequestered tonne of CO₂, or around $732 billion a year in energy costs.

And the largest rock mills are large indeed; the biggest on the market can process tens of thousands of tonnes a day. It should be clear by now that capital expenditures, while not irrelevant, are small compared to the cost of energy. Though there’s no way to know for sure until and unless the sequestration industry reaches maturity, a reasonable upper estimate for capital investment is about $1.60 per tonne of CO₂ sequestered, giving a total cost per sequestered tonne of no more than nine dollars. The resulting bill of $900 billion per year might sound gargantuan – but it’s worth remembering that the world economy is a hundred-trillion-dollar-a-year behemoth, and each tonne of carbon dioxide not sequestered is more than 20 times as costly.
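
Rolling the quoted estimates up into a single figure, as a sanity check of the nine-dollar and $900 billion numbers:

```python
# Rolling up the cost estimates quoted above into a single figure.
energy_kwh_per_t_rock = 61        # WAM milling, all-inclusive
electricity_price = 0.15          # USD per kWh (figure used in the passage)
co2_per_t_rock = 1.25
capex_per_t_co2 = 1.60            # upper estimate quoted above

energy_cost_per_t_co2 = energy_kwh_per_t_rock * electricity_price / co2_per_t_rock
total_per_t_co2 = energy_cost_per_t_co2 + capex_per_t_co2
annual_bill = total_per_t_co2 * 100e9          # 100 Gt of CO2 locked away per year

print(f"Energy cost:  ${energy_cost_per_t_co2:.2f} per tonne of CO2")   # ~$7.32
print(f"All-in cost:  ${total_per_t_co2:.2f} per tonne of CO2")         # ~$8.92
print(f"Annual bill:  ${annual_bill / 1e12:.2f} trillion")              # ~$0.9 trillion
# Versus a social cost of carbon of roughly $185 per tonne, about 20x higher.
```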

Upon its exit from the mill, our particle, now just five to ten microns in diameter, finds itself in a fine slurry, half water by mass. Silicates usually find their way down to the ocean via rivers, so we’ll have to build our own. Thankfully, the water requirements are not high in the grand scheme of things. 80 gigatonnes of rock a year will need about 2300 cubic meters of water a second; split across dozens of mines worldwide, water requirements can easily be met by drawing from rivers or, in a pinch, desalinating ocean water.
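
A rough check of the water requirement, assuming equal masses of rock and water in the slurry; it lands in the same ballpark as the figure quoted above:

```python
# Water requirement for the slurry (equal masses of rock and water assumed).
rock_per_year_kg = 80e9 * 1000            # 80 Gt of rock, in kg
water_per_year_m3 = rock_per_year_kg / 1000    # 1 kg of water is ~1 litre; /1000 -> m3
seconds_per_year = 365.25 * 24 * 3600

flow_m3_per_s = water_per_year_m3 / seconds_per_year
print(f"{flow_m3_per_s:.0f} m3/s")
# -> roughly 2,500 m3/s, the same order of magnitude as the ~2,300 m3/s quoted above
# (the small gap presumably comes from slightly different slurry assumptions).
```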

The slurry is pumped into a large concrete pipe (since it’s flowing downhill, energy costs are minimal), and our particle of magnesium silicate comes to rest on the ocean floor of the Java Sea, where it reacts with dissolved carbon dioxide and locks it away as magnesium bicarbonate within a few years. (Because the Java Sea is shallow, it is constantly replenished with atmospheric CO₂ from rainwater and ocean currents. Carbon in the deep ocean is cycled at a far slower pace.) 

While there are a handful of trace minerals in most olivine formations, especially nickel and iron, the ecological costs are local and pale in comparison to those of global warming and ocean acidification.

4. Agfa-Gevaert and Activist Investing in Europe – A Case Study – Swen Lorenz

Germany, the largest economy in Continental Europe, makes for an interesting case study. As the annual review of Activist Insight mentions in its 2017 edition: “Germany has long been a laggard in the space of shareholder activism due to both legal and cultural challenges.”

That’s a very diplomatic way of putting it. Legal scholars with a knack for history will point to a much juicier origin of the problem.

The reason why it had long been tremendously tricky to hold German boards to account for underperformance dates back to the legal system established by the Nazis. Germany’s first extensive corporate law was written in 1937, and the new legal code’s approach to managing corporations was based on the “Fuehrer principle” (Führerprinzip).

Anyone who wants to study the relevant history should get a copy of “Aktienrecht im Wandel” (roughly: “Corporate Law during changing times”), the definitive two-volume book covering the last 200 years of German commercial law.

The Nazis specifically wanted to create a corporate law designed to:

  • Fend off “the operational and economic damage caused by anonymous, powerful capitalists”.
  • Enable directors to manage companies “for the benefit of the enterprise, the people, and the Reich”.
  • “Push back the power of the shareholders meeting”.

The Nazis lost the war, but the legal system underpinning German corporations and much of the underlying culture remained in place. It was only in 1965 that Germany’s corporate law was significantly reformed, primarily because of one man’s outrageously broad influence over leading German corporations: Hermann Josef Abs, who had been a director of Deutsche Bank since 1938.

During the years of Germany’s so-called economic miracle, Abs had created an impenetrable network of cross-holdings among companies and directorship positions doled out among a small clique of leading figures. This powerful elite of directors shielded each other from accountability; even investors with large-scale financial firepower found many German companies an impenetrable fortress. Germany’s government had no other choice but to (finally) act. The Lex Abs, as the legal reform was called in a rare legislative reference to one specific individual, did away with at least some of the corporate law’s problematic aspects.

Changing the legal code was one thing, changing the underlying culture another. So powerful and deeply-rooted was Abs & Co.’s system that I came across its influence on the German stock market as recently as the late 1990s. Germany’s large, publicly listed corporations used to be a closed shop, summarised by the expression “Deutschland AG” in foreign media.

It was only during the early 2000s that shareholder activism slowly started to become a more regular occurrence in Germany and across Continental Europe. Factors such as a generational change on boards, further legislative reforms, and a large number of newly listed companies managed by internationally trained directors and entrepreneurs led to an increased prevalence of the activist approach.

Once you join the dots from a 30,000-foot perspective and with the benefit of hindsight, it’s incredible how long it takes to soften up a well-entrenched system. Quite literally, it required the generation who had created the system to die.

5. Walking naturally after spinal cord injury using a brain–spine interface – [Numerous authors]

A spinal cord injury interrupts the communication between the brain and the region of the spinal cord that produces walking, leading to paralysis1,2. Here, we restored this communication with a digital bridge between the brain and spinal cord that enabled an individual with chronic tetraplegia to stand and walk naturally in community settings. This brain–spine interface (BSI) consists of fully implanted recording and stimulation systems that establish a direct link between cortical signals3 and the analogue modulation of epidural electrical stimulation targeting the spinal cord regions involved in the production of walking4,5,6. A highly reliable BSI is calibrated within a few minutes. This reliability has remained stable over one year, including during independent use at home. The participant reports that the BSI enables natural control over the movements of his legs to stand, walk, climb stairs and even traverse complex terrains. Moreover, neurorehabilitation supported by the BSI improved neurological recovery. The participant regained the ability to walk with crutches overground even when the BSI was switched off. This digital bridge establishes a framework to restore natural control of movement after paralysis…

…To establish this digital bridge, we integrated two fully implanted systems that enable recording of cortical activity and stimulation of the lumbosacral spinal cord wirelessly and in real time (Fig. 1a).

To monitor electrocorticographic (ECoG) signals from the sensorimotor cortex, we leveraged the WIMAGINE technology3,20. WIMAGINE implants consist of an 8-by-8 grid of 64 electrodes (4 mm × 4.5 mm pitch in anteroposterior and mediolateral axes, respectively) and recording electronics that are embedded within a 50 mm diameter, circular-shaped titanium case that has the same thickness as the skull. The geometry of the system favours close and stable contact between the electrodes and the dura mater, and renders the devices invisible once implanted within the skull.

Two external antennas are embedded within a personalized headset that ensures reliable coupling with the implants. The first antenna powers the implanted electronics through inductive coupling (high frequency, 13.56 MHz), whereas the second, ultrahigh frequency antenna (UHF, 402–405 MHz) transfers ECoG signals in real time to a portable base station and processing unit, which generates online predictions of motor intentions on the basis of these signals (Extended Data Fig. 1).

The decoded motor intentions are then converted into stimulation commands that are transferred to tailored software running on the same processing unit.

These commands are delivered to the ACTIVA RC implantable pulse generator (Fig. 1a), which is commonly used to deliver deep brain stimulation in patients with Parkinson’s disease. We upgraded this implant with wireless communication modules that enabled real-time adjustment over the location and timing of epidural electrical stimulation with a latency of about 100 ms (Extended Data Fig. 1).

Electrical currents are then delivered to the targeted dorsal root entry zones using the Specify 5-6-5 implantable paddle lead, which consists of an array incorporating 16 electrodes.

This integrated chain of hardware and software established a wireless digital bridge between the brain and the spinal cord: a brain–spine interface (BSI) that converts cortical activity into the analogue modulation of epidural electrical stimulation programs to tune lower limb muscle activation, and thus regain standing and walking after paralysis due to a spinal cord injury (Supplementary Video 1)


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Apple. Holdings are subject to change at any time.

How Bad is Zoom’s Stock-Based Compensation?

On the surface, the rising stock based compensation for Zoom looks bad. But looking under the hood, the situation is not as bad as it looks.

There seems to be a lot of concern surrounding Zoom’s rising stock-based compensation (SBC).

In its financial years 2021, 2022 and 2023, Zoom recorded SBC of US$275 million, US$477 million and US$1,285 million, respectively. FY2023 was perhaps the most worrying for investors as Zoom’s revenue essentially flat-lined while its SBC increased by more than two-fold.

But as mentioned in an earlier article, GAAP accounting is not very informative when it comes to SBC. When companies report SBC using GAAP accounting, they record the amount on the financial statements based on the share price at the time of the grant. A more informative way to look at SBC would be from the perspective of the actual number of shares given out during the year.

In FY2021, 2022 and 2023, Zoom issued 0.6 million, 1.8 million and 4 million restricted stock units (RSUs), respectively. From that point of view, it seems the dilution is not too bad. Zoom had 293 million shares outstanding as of 31 January 2023, so the 4 million RSUs issued resulted in only 1.4% more shares.

What about down the road?

The number of RSUs granted in FY2023 was 22.1 million, up from just 3.1 million a year before. The big jump in FY2023 was because the company decided to give a one-time boost to existing employees. 

However, this does not mean that Zoom’s dilution is going to be 22 million shares every year from now. The FY2023 grant was likely a one-off that will not recur, and these grants will vest over a period of three to four years.

If we divide the extra RSUs given in FY2023 by their 4-year vesting schedule, we can assume that around 8 million RSUs will vest each year. This will result in an annual dilution rate of 2.7% based on Zoom’s 293 million shares outstanding as of 31 January 2023.

Bear in mind: Zoom guided for a weighted diluted share count of 308 million for FY2024. This diluted number includes 4.8 million in unexercised options that were granted a number of years ago. Excluding these options, the implied number of RSUs vesting in FY2024 is around 10 million, which I believe is because of an accelerated vesting schedule this year.

Cashflow impact

Although SBC does not result in a cash outflow for companies, it does result in a larger outstanding share base and consequently, lower free cash flow per share.

But Zoom can offset that by buying back its shares. At its current share price of US$69, Zoom can buy back 8 million of its shares using US$550 million. Zoom generated US$1.5B in free cash flow if you exclude working capital changes in FY2023. If it can sustain cash generation at this level, it can buy back all its stock that is issued each year and still have around US$1 billion in annual free cash flow left over for shareholders.
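
Here is the dilution and buyback arithmetic laid out in one place, using only the figures mentioned in this article:

```python
# A rough sketch of the dilution and buyback arithmetic discussed above
# (all inputs are the figures quoted in this article).
shares_outstanding = 293e6     # as of 31 January 2023
rsus_vesting_per_year = 8e6    # my estimate from above
share_price = 69.0             # US$
free_cash_flow = 1.5e9         # US$, FY2023, excluding working capital changes

dilution = rsus_vesting_per_year / shares_outstanding
buyback_cost = rsus_vesting_per_year * share_price
fcf_left_over = free_cash_flow - buyback_cost

print(f"Annual dilution:        {dilution:.1%}")                 # ~2.7%
print(f"Cost to offset it:      US${buyback_cost / 1e6:.0f}m")   # ~US$552m
print(f"FCF left after buyback: US${fcf_left_over / 1e9:.2f}b")  # ~US$0.95b
```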

We should also factor in that, in most companies, employee turnover leads to an RSU forfeiture rate of around 20% or more, which means my estimate of 8 million RSUs vesting per year for Zoom could be an overestimate. In addition, Zoom reduced its headcount by 15% in February this year, which should lead to more RSU forfeitures and hopefully fewer grants in the future.

Not as bad as it looks

GAAP accounting does not always give a complete picture of the financial health of a business. In my view, SBC is one of the most significant flaws of GAAP accounting and investors need to look into the financial notes to better grasp the true impact of SBC.

Zoom’s SBC numbers seem high. But when zooming in (pun intended), the SBC is not as bad as it looks. In addition, with share prices so low, it is easy for management to offset dilution with repurchases at very good prices. However, investors should continue to monitor share dilution over time to ensure that management is fair to shareholders.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Zoom. Holdings are subject to change at any time.

What We’re Reading (Week Ending 28 May 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 28 May 2023:

1. Yuval Noah Harari argues that AI has hacked the operating system of human civilisation – Yuval Noah Harari

Language is the stuff almost all human culture is made of. Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws. Gods aren’t physical realities. Rather, they are cultural artefacts we created by inventing myths and writing scriptures.

Money, too, is a cultural artefact. Banknotes are just colourful pieces of paper, and at present more than 90% of money is not even banknotes—it is just digital information in computers. What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? When people think about ChatGPT and other new AI tools, they are often drawn to examples like school children using AI to write their essays. What will happen to the school system when kids do that? But this kind of question misses the big picture. Forget about school essays. Think of the next American presidential race in 2024, and try to imagine the impact of AI tools that can be made to mass-produce political content, fake-news stories and scriptures for new cults…

…Through its mastery of language, AI could even form intimate relationships with people, and use the power of intimacy to change our opinions and worldviews. Although there is no indication that AI has any consciousness or feelings of its own, to foster fake intimacy with humans it is enough if the AI can make them feel emotionally attached to it. In June 2022 Blake Lemoine, a Google engineer, publicly claimed that the AI chatbot LaMDA, on which he was working, had become sentient. The controversial claim cost him his job. The most interesting thing about this episode was not Mr Lemoine’s claim, which was probably false. Rather, it was his willingness to risk his lucrative job for the sake of the AI chatbot. If AI can influence people to risk their jobs for it, what else could it induce them to do?

In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people. We all know that over the past decade social media has become a battleground for controlling human attention. With the new generation of AI, the battlefront is shifting from attention to intimacy. What will happen to human society and human psychology as AI fights AI in a battle to fake intimate relationships with us, which can then be used to convince us to vote for particular politicians or buy particular products?

Even without creating “fake intimacy”, the new AI tools would have an immense influence on our opinions and worldviews. People may come to use a single AI adviser as a one-stop, all-knowing oracle. No wonder Google is terrified. Why bother searching, when I can just ask the oracle? The news and advertising industries should also be terrified. Why read a newspaper when I can just ask the oracle to tell me the latest news? And what’s the purpose of advertisements, when I can just ask the oracle to tell me what to buy?

And even these scenarios don’t really capture the big picture. What we are talking about is potentially the end of human history. Not the end of history, just the end of its human-dominated part. History is the interaction between biology and culture; between our biological needs and desires for things like food and sex, and our cultural creations like religions and laws. History is the process through which laws and religions shape food and sex.

What will happen to the course of history when AI takes over culture, and begins producing stories, melodies, laws and religions? Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.

At first, AI will probably imitate the human prototypes that it was trained on in its infancy. But with each passing year, AI culture will boldly go where no human has gone before. For millennia human beings have lived inside the dreams of other humans. In the coming decades we might find ourselves living inside the dreams of an alien intelligence…

…We can still regulate the new AI tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, AI can make exponentially more powerful AI. The first crucial step is to demand rigorous safety checks before powerful AI tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new AI tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.

Won’t slowing down public deployments of AI cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated AI deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When AI hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.

2. What Happens if the US Defaults on its Debt? – Nick Maggiulli

As U.S. Treasury Secretary Janet Yellen recently noted, unless Congress raises (or suspends) the debt limit, the U.S. government may run out of money as early as June 1.

With such a dire warning, many investors have begun to wonder: what happens if the US defaults on its debt? Though this scenario remains unlikely, it is important to understand the potential consequences of a default and how they could impact you…

…When it comes to the term ‘default’ there are two ways that this has been broadly defined:

  • An actual default: This is the traditional meaning of the term and it occurs when a borrower fails to make a required principal or interest payment to a lender. In the case of the United States (or any other sovereign nation), a default occurs if the government is unable (or unwilling) to make payments on its debt (e.g. if the U.S. failed to make payments on its Treasury bonds). Default in these cases can either be partial (failing to pay back some of the debt) or full (failing to pay back all of the debt). However, this isn’t the only kind of default that can occur.
  • A technical default: Unlike an actual (or traditional) default when a government fails to make payments on its bonds, a technical default occurs if the government fails to pay for its other obligations even if its bond payments were made on time. For example, the U.S. Treasury could decide to prioritize Treasury bondholders and pay them in full before paying out whatever was left to Social Security recipients and government employees. While this would avoid a default in the traditional sense of the term, it could still negatively impact millions of Americans who rely on income from the U.S. government to pay their bills…

…As we navigate the political and economic complexities of raising the debt ceiling in the coming weeks, it’s important to understand what could happen if the U.S. defaults on its debt. The consequences of such an event would have a major impact not only in the U.S., but across the globe. And while we can’t predict the exact outcomes, below are some possible scenarios that could unfold based on economic studies, expert opinions, and historical precedent:

  • Global financial turmoil: Given the reliance of the global financial system on U.S. Treasury bonds and U.S. dollars, a default could lead to a loss of confidence in the U.S. government and a global market panic. The most visible impact of this would be declining asset prices and a disruption in international trade. The duration of such a panic would be determined by the severity of the U.S. default and how quickly the U.S. could restore confidence in financial markets.
  • Possible recession: Two economists modeled the potential impact of a U.S. default on employment and the results weren’t great. They argued that a technical default (where the federal government fails to make payments for some of its responsibilities) would raise unemployment from 3.4% to 7%, and an actual default (where the federal government fails to make payments to U.S. bondholders) would raise unemployment from 3.4% to above 12%. Such a quick rise in unemployment could lead to reduced consumer spending and a recession.
  • Rising interest rates: When the U.S. Treasury failed to make payments on $122 million in Treasury bonds in 1979, short-term interest rates jumped 0.6 percent. This was true despite the fact that the failure to make payments was a clerical error on the part of the Treasury and not an actual default (since all the bondholders were eventually paid back with interest). If the U.S. were to actually default, the cost of borrowing would rise sharply for individuals and businesses, ultimately slowing economic growth.
  • Depreciating value of the dollar: A U.S. default could reduce confidence in the U.S. dollar and push many nations to seek out more reliable alternatives. This would reduce the demand for the dollar, decrease its value, and increase the cost of imports in the U.S., leading to higher inflation.
  • Lower credit rating: If the U.S. were to default, credit rating agencies would downgrade the U.S.’s credit rating, which would make future borrowing more expensive for the U.S. government. Standard & Poor’s downgraded the U.S.’s credit rating for the first time ever in 2011 even though a default never occurred. Imagine what would happen if one did.
  • Impaired government functions: An actual default (and even a technical default) could force the government to delay payments to Social Security recipients, employees, and others who rely on their services. This could disrupt the lives of millions of Americans and severely impact economic growth. The White House released a report in October 2021 that outlined the potential consequences of such a default and how it could impact various sectors of the economy.
  • Political fallout: If your job was to get Donald Trump re-elected in 2024, there are few things that would help more than a U.S. default in 2023. Regardless of political beliefs, many Americans will hold the current party in power (Democrats) ultimately responsible in the event of a default. This would influence future elections and public policy for many years to come.

While these scenarios paint a sobering picture of what could happen if the U.S. were to default on its debt, it’s important to remember that no one knows the future. Don’t just take my word for it though. Consider what Warren Buffett said on the topic at the most recent Berkshire Hathaway shareholders meeting:

It’s very hard to see how you recover once…people lose faith in the currency…All kinds of things can happen then. And I can’t predict them and nobody else can predict them, but I do know they aren’t good.

3. Microsoft Bets That Fusion Power Is Closer Than Many Think – Jennifer Hiller

In a deal that is believed to be the first commercial agreement for fusion power, the tech giant has agreed to purchase electricity from startup Helion Energy within about five years.

Helion, which is backed by OpenAI founder Sam Altman, committed to start producing electricity through fusion by 2028 and target power generation for Microsoft of at least 50 megawatts after a year or pay financial penalties.

The commitment is a bold one given that neither Helion nor anyone else in the world has yet produced electricity from fusion.

“We wouldn’t enter into this agreement if we were not optimistic that engineering advances are gaining momentum,” said Microsoft President Brad Smith…

…“I had this belief that the two things that would matter most to making the future and raising the quality of life a lot were making intelligence and energy cheap and abundant, and that if we could do that, it would transform the world in a really positive way,” Mr. Altman said.

A number of prominent investors from Mr. Altman to Bill Gates have put money into fusion firms, which have raised more than $5 billion, according to the Washington, D.C.-based Fusion Industry Association.

The process of splitting atoms in nuclear-fission power plants provides nearly 20% of U.S. electricity. But nuclear fusion systems would generate electricity from the energy released when hydrogen atoms are combined to form helium.

The industry got a boost in December when the U.S. Energy Department announced a research breakthrough by scientists after a fusion reaction at the Lawrence Livermore National Laboratory produced more energy than was used to create it by firing lasers at a target.

To be a practical source of power, the entire facility would need to net produce rather than consume energy, and at a price that competes in the broader electricity market…

…David Kirtley, CEO at Helion, said that like a wind- or solar-power developer—the more typical energy firms involved in power purchase agreements—Helion would pay Microsoft financial penalties if it doesn’t deliver power on time. The companies declined to specify the amount.

“There’s some flexibility, but it is really important that there are significant financial penalties for Helion if we don’t deliver,” Mr. Kirtley said. “We think the physics of this is ready for us to signal the commercialization of fusion is ready.”

4. Some Things I Think – Morgan Housel

The fastest way to get rich is to go slow.

Many beliefs are held because there is a social and tribal benefit to holding them, not necessarily because they’re true.

Nothing is more blinding than success caused by luck, because when you succeed without effort it’s easy to think, “I must be naturally talented.”…

…The most valuable personal finance asset is not needing to impress anyone.

Most financial debates are people with different time horizons talking over each other…

…The hardest thing when studying history is that you know how the story ends, which makes it impossible to put yourself in people’s shoes and imagine what they were thinking or feeling in the past…

…Most beliefs are self-validating. Angry people look for problems and find them everywhere, happy people seek out smiles and find them everywhere, pessimists look for trouble and find it everywhere. Brains are good at filtering inputs to focus on what you want to believe…

…The market is rational but investors play different games and those games look irrational to people playing a different game.

A big problem with bubbles is the reflexive association between wealth and wisdom, so a bunch of crazy ideas are taken seriously because a temporarily rich person said it.

Logic doesn’t persuade people. Clarity, storytelling, and appealing to self-interest do…

…Happiness is the gap between expectations and reality, so the irony is that nothing is more pessimistic than someone full of optimism. They are bound to be disappointed…

…Nothing leads to success like unshakable faith in one big idea, and nothing sets the seeds of your downfall like an unshakable faith in one big idea…

…Economies run in cycles but people forecast in straight lines.

You are twice as gullible as you think you are – four times if you disagree with that statement.

Price is what you pay, value is whatever you want Excel to say…

…We underestimate the importance of control. Camping is fun, even when you’re cold. Being homeless is miserable, even when you’re warm…

…“If you only wished to be happy, this could be easily accomplished; but we wish to be happier than other people, and this is always difficult, for we believe others to be happier than they are.” – Montesquieu

With the right incentives, people can be led to believe and defend almost anything.

Good marketing wins in the short run and good products win in the long run…

…The most productive hour of your day often looks the laziest. Good ideas rarely come during meetings – they come while going for a walk, or sitting on the couch, or taking a shower…

…A good test when reading the news is to constantly ask, “Will I still care about this story in a year? Two years? Five years?”

A good bet in economics: the past wasn’t as good as you remember, the present isn’t as bad as you think, and the future will be better than you anticipate.

5. Layers of AI – Muji

AI is such a loose term, a magical word that simply means some type of mathematically driven black box. It is generally thought of as a compute engine that can do a task as well as or better than a human can, driven by a “brain” (AI engine) making decisions. Essentially, AI is a bunch of inner mathematical algorithms that interconnect & combine into one big algorithm (the overall AI model). These take an input, do logic (the black box), and send back an output.

At the highest level, AI has thus far been Artificial Narrow Intelligence (ANI), a weaker form of AI that is honed to complete a specific task. As seen over the past few months, we are quickly approaching Artificial General Intelligence (AGI), a stronger form of AI that can perform a wider range of tasks, and can think abstractly and adapt. AGI is the holy grail of many an AI researcher.

Today, AI takes a lot of forms, such as Machine Learning (learning from the past to predict the future), Computer Vision (identifying structure in video or imagery), Speech-to-Text/Text-to-Speech (converting audio to text and vice versa), Expert Systems (highly honed decision engines), and Robotics (controlling the real world)…

…It is worth having some caution with AI, but know that the hype is real, and the potential of these cutting-edge AI models is palpable. At a minimum, we are at the precipice of a new era in productivity boosts from virtual assistance and automation. But as these engines mature, combine, and integrate with others more, it suddenly feels that AGI is on our doorstep.

ML is the subset of AI that is trained on historical data in order to make decisions or predict outcomes. In general, ML processes a lot of data upfront in a training process, analyzing it to determine patterns within it in order to derive future predictions. With the rise of better models, honed hardware (GPUs and specialized chips from hyperscalers), and continually improving scale & performance from the cloud hyperscalers, the potential of ML is now heavily scaling up. ML models can make decisions, interact with the world (through text, voice, chat, audio, computer vision, image, or video), and take action.

ML is extremely helpful for:

  • processing unstructured content (text, images, video) to extract meaning and understand intent & context
  • recognizing and isolating objects in images or video
  • making decisions by weighing complex factors
  • categorizing & grouping input (classification)
  • recognizing patterns
  • recognizing & translating language
  • processing historical data to isolate trends, then forecasting or predicting those trends from there
  • generating new output (text, image, video, or audio)

ML models are built from a wide variety of statistical model types geared for specific problems, each of which can draw on a wide number of statistical algorithms. Some common types include the following (a short code sketch follows the list):

  • Classification models are used to classify data into categories (labels), in order to predict a discrete value (what category applies to new data).
  • Regression models are used to find correlations between variables, in order to predict continuous values (numerics).
  • Clustering models are good for clustering data together around the natural groups that exist, such as for segmenting customers, making recommendations, and image processing.
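
To make the three model types concrete, here is a minimal sketch using scikit-learn’s built-in toy datasets; the specific library and datasets are just illustrative choices:

```python
# Minimal sketch of the three common model types using scikit-learn toy data.
from sklearn.datasets import load_iris, load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Classification: predict a discrete label (iris species) from features.
X_cls, y_cls = load_iris(return_X_y=True)
clf = DecisionTreeClassifier().fit(X_cls, y_cls)

# Regression: predict a continuous value (a disease-progression score).
X_reg, y_reg = load_diabetes(return_X_y=True)
reg = LinearRegression().fit(X_reg, y_reg)

# Clustering: group unlabelled data into natural clusters (no labels used at all).
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X_cls)

print(clf.predict(X_cls[:1]), reg.predict(X_reg[:1]), clusters[:5])
```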

There are a number of ways that ML can be taught, including:

  • Supervised Learning is training via a dataset with known answers. These answers become labels that the ML uses to identify patterns and correlations in the data.
  • Unsupervised Learning is training via raw data and letting the AI determine the features and trends within the data. This is used by ML systems for making recommendations, data associations, trend isolation, or customer segmenting.
  • Semi-supervised Learning sits in between: the model is trained on a labeled dataset, then enriched further with an unlabelled one.
  • Reinforcement Learning is a model that gets rewarded for correct and timely answers (via internal scores or human feedback). This is used when there is a known start and end state, where the ML has to determine the best way to navigate the multiple paths in between. This is being leveraged in new language models like ChatGPT to improve the way the engine “talks”…

Some of the components of building ML that are helpful to understand (a short code sketch after the list makes these concrete):

  • Features are characteristics or attributes within the raw data that help define the input (akin to columns within a database). These are then fed in as inputs to the ML model, and weighed against each other to identify patterns and how they correlate to each other. Feature Engineering is the process where a data scientist will pre-identify these variables within the data, such as categories or numerical ranges to track. Feature Selection may be needed to select a subset of features in model construction; candidate subsets may be repeatedly tested to find the best fit, which also helps simplify models and shorten training times. Features can be collaboratively tracked in Feature Stores, which are similar to Metric Stores in BI stacks [both discussed in the Modern Data Stack]. Unsupervised Learning forces the ML engine to determine the important features on its own.
  • Dimensionality is based on the number of features provided as input into the model – or rather, represents the internal dimensions of the model of how each feature relates to and impacts every other feature (how one variable in a row of input impacts another). High-dimensional data refers to datasets having a wide set of features (a high number of input variables per row).
  • Observations are the number of feature sets provided as input while building the model (akin to rows within a database).
  • Vectors are features turned into numerical form and stored as an array of inputs (one per observation or row, or a sentence of text in NLP). An array of vectors is a two-dimensional matrix. [This is why GPUs are so helpful in ML training, as they specialize in vectorized math.]
  • Tensors represent the multi-dimensional relationships between all vectors. [Hence why Google and NVIDIA use the name often in GPU products, as they specialize in highly-dimensional vectorized math.]
  • Labels are pre-defined answers given to a dataset. This can be the identification of categories that apply to that data (such as color, make, model of a car), successful or failed outcomes (such as whether this is fraud or risky behavior or not), or the tagging and definition of objects within an image or video (this image contains a white cat on a black table). These are then fed into Supervised Learning methods of training ML models.
  • Parameters are what the ML model creates as internal variables at a decision point. This is a trained variable that helps set the importance of an individual feature within the ML engine. (This can be weights & biases within a neural network or a coefficient in a regression.) The parameter count is a general way to show how much complexity an ML model hides. (OpenAI’s GPT-3 had 350M-175B parameters in various flavors, and GPT-4 is believed to have up to 1T.)
  • Hyperparameters are external variables the data scientist can adjust in individual statistical algorithms used within the ML model. Think of them as the knobs that can be tuned and adjusted to tweak the statistical model within (along with the fact there are countless statistical models that can be used for any specific algorithm, which can be swapped out).
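
A tiny, hypothetical example helps make this vocabulary concrete; the dataset and the scikit-learn model below are invented purely for illustration:

```python
# Making the vocabulary above concrete with a tiny, hypothetical dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row is one observation (a feature vector); each column is one feature
# (e.g. transaction amount, hour of day, number of prior purchases).
X = np.array([[120.0,  2, 14],
              [  9.5, 23,  1],
              [ 45.0, 11,  6],
              [310.0,  3, 22]])        # 4 observations x 3 features -> dimensionality 3
y = np.array([0, 1, 0, 0])             # labels: 1 = flagged as fraud (supervised learning)

# Hyperparameters are the knobs we set from outside the model...
model = RandomForestClassifier(n_estimators=50, max_depth=3, random_state=0)
model.fit(X, y)                        # ...while the fitted trees' internal split
                                       # thresholds play the role of learned parameters.

print(X.shape)                         # (4, 3): observations x features
print(model.predict([[80.0, 22, 2]]))  # predicted label for a new observation
```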

As with anything data related, it is “garbage in – garbage out”. You must start with good data to have a good ML model. Data science is ultimately the art of creating an ML model, which requires data wrangling (the cleaning, filtering, combining, and enriching of the datasets used in training), selection of the appropriate models & statistical algorithms to use for the problem at hand, feature engineering, and tuning of the hyperparameters. Essentially, data science is about asking the right questions in the right way.

ML models are trained with data, then validated to assure “fit” (statistical relevance) to the task at hand; they can also be tuned and tweaked throughout the creation process by the data scientist (via the training data being input, the features selected, or the hyperparameters in the statistical model). Once in production, it is typical to occasionally test a model to ensure it remains relevant to real-world data (fit), as both models and data can drift (such as shifting behaviors of customers). Models can be trained on more and more data to become more and more accurate in classifications, predictions, and generation. More data generally means more insights and accuracy – however, at some point the model may go off the rails and start trying to find patterns in random outliers that aren’t helpful. This is known as being “overfit”, where the model’s trained findings don’t apply as well to real-world data because it factors in noise or randomness more than it should. It must then be retrained on a more up-to-date set of historical data.
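
A quick way to see fit and overfitting in practice is to hold out part of the data; the sketch below uses synthetic data and scikit-learn, so the exact numbers will vary:

```python
# Illustrating "fit" checks: hold out data to see whether a model has merely
# memorised noise (overfitting) or actually generalises.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

deep = DecisionTreeClassifier(max_depth=None, random_state=0).fit(X_train, y_train)
shallow = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

for name, m in [("unconstrained tree", deep), ("depth-limited tree", shallow)]:
    print(f"{name}: train={m.score(X_train, y_train):.2f}, test={m.score(X_test, y_test):.2f}")
# A large gap between training and test accuracy is the classic sign of an
# overfit model; retraining or constraining the model narrows the gap.
```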


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Apple, and Microsoft. Holdings are subject to change at any time.

What American Technology Companies Are Thinking About AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

The way I see it, artificial intelligence (or AI), really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. Here they are, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management thinks AI is a massive platform shift

Well, why don’t I start, Justin, with AI. This is certainly the biggest revolution and test since I came to Silicon Valley. It’s certainly as big of a platform shift as the Internet, and many people think it might be even bigger. 

Airbnb’s management thinks of foundational models as the highways and what they are interested in, is to build the cars on the highways, in other words, they are interested in tuning the model

And I’ll give you kind of a bit of an overview of how we think about AI. So all of this is going to be built on the base model. The base models, the large language models, think of those as GPT-4. Google has a couple of base models, Microsoft reaches Anthropic. These are like major infrastructure investments. Some of these models might cost tens of billions of dollars towards the compute power. And so think of that as essentially like building a highway. It’s a major infrastructure project. And we’re not going to do that. We’re not an infrastructure company. But we’re going to build the cars on the highway. In other words, we’re going to design the interface and the tuning of the model on top of AI, on top of the base model. So on top of the base model is the tuning of the model. And the tuning of the model is going to be based on the customer data you have.

Airbnb’s management thinks AI can be used to help the company learn more about its users and build a much better way to match accommodation options with the profile of a user

If you were to ask a question to ChatGPT, and if I were to ask a question to ChatGPT, we’re both going to get pretty much the same answer. And the reason both of us are going to get pretty close the same answer is because ChatGPT doesn’t know that it’s between you and I, doesn’t know anything about us. Now this is totally fine for many questions, like how far is it from this destination to that destination. But it turns out that a lot of questions in travel aren’t really search questions. They’re matching questions. Another is, they’re questions that the answer depends on who you are and what your preferences are. So for example, I think that going forward, Airbnb is going to be pretty different. Instead of asking you questions like where are you going and when are you going, I want us to build a robust profile about you, learn more about you and ask you 2 bigger and more fundamental questions: who are you? And what do you want?

Airbnb’s management wants to use AI to build a global travel community and world-class personalised travel concierge

And ultimately, what I think Airbnb is building is not just a service or a product. But what we are in the largest sense is a global travel community. And the role of Airbnb in that travel community is to be the ultimate host. Think of us with AI as building the ultimate AI concierge that could understand you. And we could build these world-class interfaces, tune our model. Unlike most other travel companies, we know a lot more about our guests and hosts. This is partly why we’re investing in the Host Passport. We want to continue to learn more about people. And then our job is to match you to accommodations, other travel services and eventually things beyond travel. So that’s the big vision of where we’re going to go. I think it’s an incredibly expanding opportunity.

Airbnb’s management thinks that AI can help level the playing field in terms of the service Airbnb provides versus that of hotels

One of the strengths of Airbnb is that Airbnb’s offering is one of a kind. The problem with Airbnb is our service is also one of a kind. And so therefore, historically less consistent than a hotel. I think AI can level the playing field from a service perspective relative to hotels because hotels have front desk, Airbnb doesn’t. But we have literally millions of people staying on Airbnb every night. And imagine they call customer service. We have agents that have to adjudicate between 70 different user policies. Some of these are as many as 100 pages long. What AI is going to do is be able to give us better service, cheaper and faster by augmenting the agents. And I think this is going to be something that is a huge transformation. 

Airbnb’s management thinks that AI can help improve the productivity of its developers

The final thing I’ll say is developer productivity and productivity of our workforce generally. I think our employees could easily be, especially our developers, 30% more productive in the short to medium term, and this will allow significantly greater throughput through tools like GitHub’s Copilot. 

Alphabet (NASDAQ: GOOG)

Alphabet’s management thinks AI will unlock new experiences in Search as it evolves

As it evolves, we’ll unlock entirely new experiences in Search and beyond just as camera, voice and translation technologies have all opened entirely new categories of queries and exploration.

AI has been foundational for Alphabet’s digital advertising business for over a decade

AI has also been foundational to our ads business for over a decade. Products like Performance Max use the full power of Google’s AI to help advertisers find untapped and incremental conversion opportunities. 

Alphabet’s management is focused on making AI safe

And as we continue to bring AI to our products, our AI principles and the highest standards of information integrity remain at the core of all our work. As one example, our Perspective API helps to identify and reduce the amount of toxic text that language models train on, with significant benefits for information quality. This is designed to help ensure the safety of generative AI applications before they are released to the public.

Examples of Alphabet bringing generative AI to customers of its cloud computing service

We are bringing our generative AI advances to our cloud customers across our cloud portfolio. Our PaLM generative AI models and Vertex AI platform are helping Behavox to identify insider threats, Oxbotica to test its autonomous vehicles and Lightricks to quickly develop text-to-image features. In Workspace, our new generative AI features are making content creation and collaboration even easier for customers like Standard Industries and Lyft. This builds on our popular AI-powered Workspace tools, Smart Canvas and Translation Hub, used by more than 9 million paying customers. Our product leadership also extends to data analytics, which provides customers the ability to consolidate their data and understand it better using AI. New advances in our data cloud enable Ulta Beauty to scale new digital and omnichannel experiences while focusing on customer loyalty; Shopify to bring better search results and personalization using AI; and Mercedes-Benz to bring new products to market more quickly. We have introduced generative AI to identify and prioritize cyber threats, automate security workflows and response and help scale cybersecurity teams. Our cloud cybersecurity products helped protect over 30,000 companies, including innovative brands like Broadcom and Europe’s Telepass.

The cost of computing when integrating LLMs (large language models) to Google Search is something Alphabet’s management has been thinking about 

On the cost side, we have always — cost of compute has always been a consideration for us. And if anything, I think it’s something we have developed extensive experience over many, many years. And so for us, it’s a nature of habit to constantly drive efficiencies in hardware, software and models across our fleet. And so this is not new. If anything, the sharper the technology curve is, we get excited by it, because I think we have built world-class capabilities in taking that and then driving down cost sequentially and then deploying it at scale across the world. So I think we’ll take all that into account in terms of how we drive innovation here, but I’m comfortable with how we’ll approach it.

Alphabet’s management does not seem concerned with any potential revenue-impact from integrating LLMs into Google’s core Search product

So first of all, throughout the years, as we have gone through many, many shifts in Search, and as we’ve evolved Search, I think we’ve always had a strong grounded approach in terms of how we evolve ads as well. And we do that in a way that makes sense and provide value to users. The fundamental drivers here are people are looking for relevant information. And in commercial categories, they find ads to be highly relevant and valuable. And so that’s what drives this virtuous cycle. And I don’t think the underpinnings over the fact that users want relevant commercial information, they want choice in what they look at, even in areas where we are summarizing and answering, et cetera, users want choice. We care about sending traffic. Advertisers want to reach users. And so all those dynamics, I think, which have long served us well, remain. And as I said, we’ll be iterating and testing as we go. And I feel comfortable we’ll be able to drive innovation here like we’ve always done.

Amazon (NASDAQ: AMZN)

Amazon’s management thinks that the AI boom will drive significant growth in data consumption and products in the cloud

And I also think that there are a lot of folks that don’t realize the amount of nonconsumption right now that’s going to happen and be spent in the cloud with the advent of large language models and generative AI. I think so many customer experiences are going to be reinvented and invented that haven’t existed before. And that’s all going to be spent, in my opinion, on the cloud.

Amazon has been investing in machine learning for more than two decades, and has been investing large sums of capital to build its own LLMs for several years

I think when you think about machine learning, it’s useful to remember that we have had a pretty substantial investment in machine learning for 25-plus years in Amazon. It’s deeply ingrained in virtually everything we do. It fuels our personalized e-commerce recommendations. It drives the pick pass in our fulfillment centers. We have it in our Go stores. We have it in our Prime Air, our drones. It’s obviously in Alexa. And then AWS, we have 25-plus machine learning services where we have the broadest machine learning functionality and customer base by a fair bit. And so it is deeply ingrained in our heritage…

…We’ve been investing in building in our own large language models for several years, and we have a very large investment across the company. 

Amazon’s management decided to build chips – Trainium for training and Inferentia for inference – that have great price and performance, because LLMs are going to run on compute, which depends on chips (particularly GPUs, or graphics processing units), and GPUs are scarce; Amazon’s management also thinks that a lot of machine learning training will be taking place on AWS

If you think about maybe the bottom layer here, is that all of the large language models are going to run on compute. And the key to that compute is going to be the chips that’s in that compute. And to date, I think a lot of the chips there, particularly GPUs, which are optimized for this type of workload, they’re expensive and they’re scarce. It’s hard to find enough capacity. And so in AWS, we’ve been working for several years on building customized machine learning chips, and we built a chip that’s specialized for training, machine learning training, which we call Trainium, and a chip that’s specialized for inference or the predictions that come from the model called Inferentia. The reality, by the way, is that most people are spending most of their time and money on the training. But as these models graduate to production, where they’re in the apps, all the spend is going to be in inference. So they both matter a lot. And if you look at — we just released our second versions of both Trainium and Inferentia. And the combination of price and performance that you can get from those chips is pretty differentiated and very significant. So we think that a lot of that machine learning training and inference will run on AWS.

Amazon’s management thinks that most companies that want to use AI are not interested in building their own foundational models because it takes a lot of resources; Amazon has the resources to build foundational models, and is providing the foundational models to customers who can then customise the models

And if you look at the really significant leading large language models, they take many years to build and many billions of dollars to build. And there will be a small number of companies that want to invest that time and money, and we’ll be one of them in Amazon. But most companies don’t. And so what most companies really want and what they tell AWS is that they’d like to use one of those foundational models and then have the ability to customize it for their own proprietary data and their own needs and customer experience. And they want to do it in a way where they don’t leak their unique IP to the broader generalized model. And that’s what Bedrock is, which we just announced a week ago or so. It’s a managed foundational model service where people can run foundational models from Amazon, which we’re exposing ourselves, which we call Titan. Or they can run it from leading large language model providers like AI 21 and Anthropic and Stability AI. And they can run those models, take the baseline, customize them for their own purposes and then be able to run it with the same security and privacy and all the features they use for the rest of their applications in AWS. That’s very compelling for customers.
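
Bedrock had only just been announced when these comments were made, so the sketch below uses the boto3 “bedrock-runtime” client that AWS later made generally available. It is only meant to illustrate the idea of calling a managed foundation model (Amazon’s own Titan family here); the model ID, region, prompt and response format are illustrative assumptions rather than a statement of how any particular customer uses the service.

```python
# Hedged sketch: invoking a managed foundation model through Amazon Bedrock.
# Model ID, region and prompt are illustrative placeholders.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",   # Amazon's own "Titan" family
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Summarise the key themes in our Q1 customer feedback."}),
)
print(json.loads(response["body"].read())["results"][0]["outputText"])
```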

Every single one of Amazon’s businesses is building on top of LLMs

Every single one of our businesses inside Amazon are building on top of large language models to reinvent our customer experiences, and you’ll see it in every single one of our businesses, stores, advertising, devices, entertainment and devices, which was your specific question, is a good example of that.

ASML (NASDAQ: ASML)

ASML’s management sees that mature semiconductor technologies are actually needed even in AI systems

So I think this is something people underestimate how significant the demand in the mid-critical and the mature semiconductor space is. And it will just grow double digit, whether it’s automotive, whether it’s the energy transition, whether it’s just the entire industrial products area, where is the — well, those are the sensors that we actually need as an integral component of the AI systems. This is where the mid-critical and the mature semiconductor space is very important and needs to grow.

Block (NYSE: SQ)

Block’s management is focused on three technology trends, one of which is AI

The three trends we’re focused on: Number one is artificial intelligence; number two is open protocols; and number three is the global south. Consider how many times you’ve heard the term AI or GPT in the earnings calls just this quarter versus all quarters in history prior. This trend seems to be moving faster than anyone can comprehend or get a handle on. Everyone feels like they’re on their back foot and struggling to catch up. Utilizing machine learning is something we’ve always employed at Block, and the recent acceleration in availability of tools is something we’re eager to implement across all of our products and services. We see this first as a way to create efficiencies, both internally and for our customers. And we see many opportunities to apply these technologies to create entirely new features for our customers. More and more effort in the world will shift to creative endeavors as AI continues to automate mechanical tasks away.

Datadog (NASDAQ: DDOG)

Datadog’s management thinks AI can make software developers more productive in terms of generating more code; as a result, the complexity of a company’s technology will also increase, making observability and the trouble-shooting of software products more important

First, from a market perspective, over the long term, we believe AI will significantly expand our opportunity in observability and beyond. We think massive improvements in developer productivity will allow individuals to write more applications and to do so faster than ever before. And as with past productivity increases, we think this will further shift value from writing code to observing, managing, fixing and securing live applications…

… Longer term, I think we can all glimpse at the future where productivity for everyone, including software engineers, increases dramatically. And the way we see that as a business is, our job is to help our customers absorb the complexity of the applications they’ve built so they can understand and modify them, run them, secure them. And we think that the more productivity there is, the more people can write in the same amount of time, the less they understand the software they produce and the more they need us, and the more value it sends our way. So that’s what makes us very confident in the long term here…

…And we — the way this has played out in the past typically is you just end up generating more stuff and more mess. So basically, if one person can produce 10x more, you end up with 10x more stuff and that person will still not understand everything they’ve produced. So the way we imagine the future is companies are going to deliver a lot more functionality to their users a lot faster. They’re going to solve a lot more problems in software. But they won’t have as tight an understanding from their engineering team as to what it is they’ve built and how they built it and what might break and what might be the corner cases that don’t work and things like that. And that’s consistent with what we can see people building with a copilot today and things like that.

Etsy (NASDAQ: ETSY)

Etsy’s management thinks that AI can greatly improve the search experience for customers who are looking for specific products

We’ve been at the cutting edge of search technology for the past several years, and while we use large language models today, we couldn’t be more excited about the potential of newer large language models and generative AI to further accelerate the transformation of Etsy’s user experience. Even with all our enhancements, Etsy search today is still key-word driven and text based and essentially the result is a grid with many thousands of listings. We’ve gotten better at reading the tea leaves, but it’s still a repetitive cycle of query result reformulation. In the future we expect search on Etsy to utilize more natural language and multimodal approaches. Rather than manipulating key words, our search engines will enable us to ask the right question at the right time to show the buyer a curated set of results that can be so much better than it is today. We’re investigating additional search engine technologies to identify attributes of an item, multi-label learning models for instant search, graph neural networks and so much more, which will be used in combination with our other search engine technologies. It’s our belief that Etsy will benefit from generative AI and other advances in search technology as much or perhaps even more so than others…

When you run a search at Etsy, we already use multiple machine learning techniques. So I don’t think generative AI replaces everything we’re doing, but it’s another tool that will be really powerful. And there are times when having a conversation instead of entering a query and then getting a bunch of search results and then going back and reformulating your query and then getting a bunch of search results, that’s not always very satisfying. And being able to say, no, I meant more like this. How about this? I’d like something that has this style and have that feel like more of a conversation, I think that can be a better experience a lot of the time. And I think in particular for Etsy where we don’t have a catalog, it might be particularly powerful.
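
Etsy has not published how it plans to implement any of this, but the general direction (moving from keyword matching to meaning-based retrieval, where a conversational follow-up simply becomes part of the query) can be sketched with off-the-shelf embeddings. Everything below, from the embedding model to the listings and follow-up phrasing, is a made-up illustration rather than Etsy’s approach.

```python
# Toy illustration of embedding-based ("semantic") search; not Etsy's implementation.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

listings = [
    "Hand-thrown ceramic mug, matte sage glaze",
    "Mid-century walnut side table",
    "Boho macrame wall hanging, natural cotton",
]
listing_vecs = model.encode(listings, normalize_embeddings=True)

def search(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = listing_vecs @ q  # cosine similarity, since vectors are normalised
    return [listings[i] for i in np.argsort(-scores)[:k]]

print(search("a gift for someone who loves earthy, handmade kitchen things"))
print(search("more like the mug, but something for the living room wall"))  # a follow-up turn
```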

Fiverr (NYSE: FVRR) 

Fiverr’s management thinks that the proliferation of AI services will not diminish the demand for freelancers, but it will lead to a bifurcation in the fates of freelancers between those who embrace AI and those who don’t

We haven’t seen AI negatively impact our business. On the contrary, the categories we open to address AI-related services are booming. The number of AI-related gigs has increased over tenfold and buyer searches for AI have soared over 1,000% compared to 6 months ago, indicating a strong demand and validating our efforts to stay ahead of the curve in this rapidly evolving technological landscape. We are witnessing the increasing need for human skills to deploy and implement AI technologies, which we believe will enable greater productivity and improved quality of work when human talent is augmented by AI capabilities. In the long run, we don’t anticipate AI development to displace the need for human talent. We believe AI won’t replace our sellers; rather sellers using AI will outcompete those who don’t…

…In terms of your question about AI, you’re right, it’s very hard to understand what categories or how categories might be influenced. I think that there’s one principle that we’ve — that I’ve shared in my opening remarks, which I think is very important, and this is how we view this, which is that AI technology is not going to displace our sellers, but sellers who have a better grasp and better usage of AI are going to outcompete those who don’t. And this is not really different than any meaningful advancement within technology, and we’ve seen that in recent years. Every time when there’s a cool new technology or device or form factor that sellers need to become professional at, those who become professional first are those who are actually winning. And we’re seeing the same here. So I don’t think that this is a different case. It’s just different professions, which, by the way, is super exciting.

Fiverr’s management thinks that AI-produced work will still need a human touch

Furthermore, while AI-generated content can be well constructed, it is all based on existing human-created content. To generate novel and authentic content, human input remains vital. Additionally, verifying and editing the AI-generated content, which often contains inaccuracies, requires human expertise and effort. That’s why we have seen categories such as fact-checking or AI content editing flourish on our marketplace in recent months.

Mastercard (NYSE: MA)

Mastercard’s management thinks AI is a foundational technology for the company

For us we’ve been using AI for the better part of the last decade. So it’s embedded in a whole range of our products…

…So you’ll find it embedded in a range of our products, including generative AI. So we have used generative AI technology, particularly in creating data sets that allow us to compare and find threats in the cybersecurity space. You will find AI in our personalization products. So there’s a whole range of things that set us apart. We use this as foundational technology. And internally, you can see increasingly so, that generative AI might be a good solution for us when it comes to customer service propositions and so forth.

MercadoLibre (NASDAQ: MELI)

MercadoLibre is utilising AI within its products and services, in areas such as customer service and product discovery

In terms of AI, I think, as most companies, we do see some very relevant short- to midterm positive impact in terms of engineering productivity. And we are also increasing the amount of work being done on what elements of the consumer-facing experiences we can deploy AI on. I think the focus right now is on some of the more obvious use cases: improving and streamlining customer service and interactions with reps, improving workflows for reps through AI-assisted workflow tools, and then deploying AI to help with better search and discovery in terms of better finding products on our website and better understanding specific — specifications of products, where existing LLMs are quite efficient. And then beyond that, I think there’s a lot of work going on, and we hope to come up with other innovative forms of AI that we can place into the consumer-facing experience, but the ones I just mentioned are the ones that we’re currently working on the most.

Meta Platforms (NASDAQ: META)

Meta’s work in AI has driven significant improvements in (a) the quality of content seen by users of its services and (b) the monetisation of its services

Our investment in recommendations and ranking systems has driven a lot of the results that we’re seeing today across our discovery engine, reels and ads. Along with surfacing content from friends and family, now more than 20% of content in your Facebook and Instagram Feeds are recommended by AI from people, groups or accounts that you don’t follow. Across all of Instagram, that’s about 40% of the content that you see. Since we launched Reels, AI recommendations have driven a more than 24% increase in time spent on Instagram. Our AI work is also improving monetization. Reels monetization efficiency is up over 30% on Instagram and over 40% on Facebook quarter-over-quarter. Daily revenue from Advantage+ shopping campaigns is up 7x in the last 6 months.

Meta’s management is focused on open-sourcing Meta’s AI models because they think going open-source will benefit the company by letting it make use of improvements to the models brought on by the open-source community

Our approach to AI and our infrastructure has always been fairly open. We open source many of our state-of-the-art models, so people can experiment and build with them. This quarter, we released our LLaMA LLM to researchers. It has 65 billion parameters but outperforms larger models and has proven quite popular. We’ve also open sourced 3 other groundbreaking visual models along with their training data and model weights, Segment Anything, DINOv2 and our Animated Drawings tool, and we’ve gotten some positive feedback on all of those as well…

…And the reason why I think why we do this is that unlike some of the other companies in the space, we’re not selling a cloud computing service where we try to keep the different software infrastructure that we’re building proprietary. For us, it’s way better if the industry standardizes on the basic tools that we’re using, and therefore, we can benefit from the improvements that others make and others’ use of those tools can, in some cases, like Open Compute, drive down the costs of those things, which make our business more efficient, too. So I think to some degree, we’re just playing a different game on the infrastructure than companies like Google or Microsoft or Amazon, and that creates different incentives for us. So overall, I think that that’s going to lead us to do more work in terms of open sourcing some of the lower-level models and tools, but of course, a lot of the product work itself is going to be specific and integrated with the things that we do. So it’s not that everything we do is going to be open. Obviously, a bunch of this needs to be developed in a way that creates unique value for our products. But I think in terms of the basic models, I would expect us to be pushing and helping to build out an open ecosystem here, which I think is something that’s going to be important.

Meta’s management thinks the company now has enough computing infrastructure to do leading AI-related work after spending significant sums of money over the past few years to build that out

A couple of years ago, I asked our infra teams to put together ambitious plans to build out enough capacity to support not only our existing products but also enough buffer capacity for major new products as well. And this has been the main driver of our increased CapEx spending over the past couple of years. Now at this point, we are no longer behind in building out our AI infrastructure, and to the contrary, we now have the capacity to do leading work in this space at scale. 

Meta’s management is focused on using AI to improve its advertising services

We remain focused on continuing to improve ads ranking and measurement with our ongoing AI investments while also leveraging AI to power increased automation for advertisers through products like Advantage+ shopping, which continues to gain adoption and receive positive feedback from advertisers. These investments will help us develop and deploy privacy-enhancing technologies and build new innovative tools that make it easier for businesses to not only find the right audience for their ad but also optimize and eventually develop their ad creative.

Meta’s management thinks that generative AI can be a very useful tool for advertisers, but they’re still early in the stage of understanding what generative AI is really capable of

 Although there aren’t that many details that I’m going to share at this point, more of this will come in focus as we start shipping more of these things over the coming months. But I do think that there’s a big opportunity here. You asked specifically about advertisers, but I think it’s going to also help create more engaging experiences, which should create more engagement, and that, by itself, creates more opportunities for advertisers. But then I think that there’s a bunch of opportunities on the visual side to help advertisers create different creative. We don’t have the tools to do that over time, eventually making it. So we’ve always strived to just have an advertiser just be able to tell us what their objective is and then have us be able to do as much of the work as possible for them, and now being able to do more of the creative work there and ourselves for those who want that, I think, could be a very exciting opportunity…

…And then the third bucket is really around CapEx investments now to support gen AI. And this is an emerging opportunity for us. We’re still in the beginning stages of understanding the various applications and possible use cases. And I do think this may represent a significant investment opportunity for us that is earlier on the return curve relative to some of the other AI work that we’ve done. And it’s a little too early to say how this is going to impact our overall capital intensity in the near term.

Meta’s management also thinks that generative AI can be a very useful way for companies to have high-quality chatbots interacting with customers

I also think that there’s going to be a very interesting convergence between some of the AI agents in messaging and business messaging, where, right now, we see a lot of the places where business messaging is most successful are places where a lot of businesses can afford to basically have people answering a lot of questions for people and engaging with them in chat. And obviously, once you light up the ability for tens of millions of small businesses to have AI agents acting on their behalf, you’ll have way more businesses that can afford to have someone engaging in chat with customers.

Microsoft (NASDAQ: MSFT)

Microsoft’s management thinks there is a generational shift in online search happening now because of AI

As we look towards a future where chat becomes a new way for people to seek information, consumers have real choice in business model and modalities with Azure-powered chat entry points across Bing, Edge, Windows and OpenAI’s ChatGPT. We look forward to continuing this journey in what is a generational shift in the largest software category, search.

Because of Microsoft’s partnership with OpenAI, Microsoft Azure is now exposed to new AI-related workloads that it previously was not

Because of some of the work we’ve done in AI even in the last couple of quarters, we are now seeing conversations we never had, whether it’s coming through you and just OpenAI’s API, right, if you think about the consumer tech companies that are all spinning, essentially, i.e. the readers, because they have gone to OpenAI and are using their API. These were not customers of Azure at all. Second, even Azure OpenAI API customers are all new, and the workload conversations, whether it’s B2C conversations in financial services or drug discovery on another side, these are all new workloads that we really were not in the game in the past, whereas we now are. 

Microsoft’s management has plans to monetise all the different AI-copilots that it is introducing to its various products

Overall, we do plan to monetize a separate set of meters across all of the tech stack, whether they’re consumption meters or per user subscriptions. The Copilot that’s priced and it is there is GitHub Copilot. That’s a good example of incrementally how we monetize the prices that are there out there and others are to be priced because they’re in preview mode. But you can expect us to do what we’ve done with GitHub Copilot pretty much across the board.

Microsoft’s management expects the company to lower the cost of compute for AI workloads over time

And so we have many knobs that will continuously — continue to drive optimization across it. And you see it even in the — even for a given generation of a large model, where we started them through the cost footprint to where we end in the cost footprint in a period of a quarter changes. So you can expect us to do what we have done over the decade plus with the public cloud to bring the benefits of, I would say, continuous optimization of our COGS to a diverse set of workloads.

Microsoft’s management has not been waiting – and is not waiting – for AI-related regulations to show up – instead, they are thinking hard about unintended consequences from Day 1 and have built those concerns into the engineering process

So overall, we’ve taken the approach that we are not waiting for regulation to show up. We are taking an approach where the unintended consequences of any new technology is something that from day 1, we think about as first class and build into our engineering process, all the safeguards. So for example, in 2016 is when we put out the AI principles, we translated the AI principles into a set of internal standards that then are further translated into an implementation process that then we hold ourselves to internal audit essentially. So that’s the framework we have. We have a Chief AI Officer who is sort of responsible for both thinking of what the standards are and then the people who even help us internally audit our following of the process. And so we feel very, very good in terms of us being able to create trust in the systems we put out there. And so we will obviously engage with any regulation that comes up in any jurisdiction. But quite honestly, we think that the more there is any form of trust as a differentiated position in AI, I think we stand to gain from that.

Nvidia (NASDAQ: NVDA)

Cloud service providers (CSPs) are racing to deploy Nvidia’s chips for AI-related work

First, CSPs around the world are racing to deploy our flagship Hopper and Ampere architecture GPUs to meet the surge in interest from both enterprise and consumer AI applications for training and inference. Multiple CSPs announced the availability of H100 on their platforms, including private previews at Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, upcoming offerings at AWS and general availability at emerging GPU-specialized cloud providers like CoreWeave and Lambda. In addition to enterprise AI adoption, these CSPs are serving strong demand for H100 from generative AI pioneers.

Nvidia’s management sees consumer internet companies as being at the forefront of adopting AI

Second, consumer Internet companies are also at the forefront of adopting generative AI and deep-learning-based recommendation systems, driving strong growth. For example, Meta has now deployed its H100-powered Grand Teton AI supercomputer for its AI production and research teams.

Nvidia’s management is seeing companies in industries such as automotive, financial services, healthcare, and telecom adopt AI rapidly

Third, enterprise demand for AI and accelerated computing is strong. We are seeing momentum in verticals such as automotive, financial services, health care and telecom where AI and accelerated computing are quickly becoming integral to customers’ innovation road maps and competitive positioning. For example, Bloomberg announced it has a 50 billion parameter model, BloombergGPT, to help with financial natural language processing tasks such as sentiment analysis, named entity recognition, news classification and question answering. Auto insurance company CCC Intelligent Solutions is using AI for estimating repairs. And AT&T is working with us on AI to improve fleet dispatches so their field technicians can better serve customers. Among other enterprise customers using NVIDIA AI are Deloitte for logistics and customer service, and Amgen for drug discovery and protein engineering.

Nvidia is making it easy for companies to deploy AI technology

And with the launch of DGX Cloud through our partnership with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, we deliver the promise of NVIDIA DGX to customers from the cloud. Whether the customers deploy DGX on-prem or via DGX Cloud, they get access to NVIDIA AI software, including NVIDIA Base Command, end-to-end AI frameworks and pretrained models. We provide them with the blueprint for building and operating AI, spanning our expertise across systems, algorithms, data processing and training methods. We also announced NVIDIA AI Foundations, which are model foundry services available on DGX Cloud that enable businesses to build, refine and operate custom large language models and generative AI models trained with their own proprietary data created for unique domain-specific tasks. They include NVIDIA NeMo for large language models, NVIDIA Picasso for images, video and 3D, and NVIDIA BioNeMo for life sciences. Each service has 6 elements: pretrained models, frameworks for data processing and curation, proprietary knowledge-based vector databases, systems for fine-tuning, aligning and guard railing, optimized inference engines, and support from NVIDIA experts to help enterprises fine-tune models for their custom use cases.

Nvidia’s management thinks that the advent of AI will drive a shift towards accelerated computing in data centers

Now let me talk about the bigger picture and why the entire world’s data centers are moving toward accelerated computing. It’s been known for some time, and you’ve heard me talk about it, that accelerated computing is a full stack problem — it is a full stack challenge. But if you could successfully do it in a large number of application domains (and that’s taken us 15 years), sufficiently that almost the entire data center’s major applications could be accelerated, you could reduce the amount of energy consumed and the amount of cost for a data center substantially, by an order of magnitude. It costs a lot of money to do it because you have to do all the software and everything and you have to build all the systems and so on and so forth, but we’ve been at it for 15 years.

And what happened is when generative AI came along, it triggered a killer app for this computing platform that’s been in preparation for some time. And so now we see ourselves in 2 simultaneous transitions. The world’s $1 trillion of data centers is populated nearly entirely by CPUs today. And I — $1 trillion, $250 billion a year, it’s growing of course. But over the last 4 years, call it $1 trillion worth of infrastructure installed, and it’s all completely based on CPUs and dumb NICs. It’s basically unaccelerated.

In the future, it’s fairly clear now with this — with generative AI becoming the primary workload of most of the world’s data centers generating information, it is very clear now that — and the fact that accelerated computing is so energy efficient, that the budget of a data center will shift very dramatically towards accelerated computing, and you’re seeing that now. We’re going through that moment right now as we speak, while the world’s data center CapEx budget is limited. But at the same time, we’re seeing incredible orders to retool the world’s data centers. And so I think you’re starting — you’re seeing the beginning of, call it, a 10-year transition to basically recycle or reclaim the world’s data centers and build it out as accelerated computing. You have a pretty dramatic shift in the spend of a data center from traditional computing and to accelerated computing with SmartNICs, smart switches, of course, GPUs and the workload is going to be predominantly generative AI…

…The second part is that generative AI is a large-scale problem, and it’s a data center scale problem. It’s another way of thinking that the computer is the data center or the data center is the computer. It’s not the chip. It’s the data center, and it’s never happened like us before. And in this particular environment, your networking operating system, your distributed computing engines, your understanding of the architecture of the networking gear, the switches and the computing systems, the computing fabric, that entire system is your computer, and that’s what you’re trying to operate. And so in order to get the best performance, you have to understand full stack and understand data center scale. And that’s what accelerated computing is.

Nvidia’s management thinks that the training of AI models will be an always-on process

 You’re never done with training. You’re always — every time you deploy, you’re collecting new data. When you collect new data, you train with the new data. And so you’re never done training. You’re never done producing and processing a vector database that augments the large language model. You’re never done with vectorizing all of the collected structured, unstructured data that you have. And so whether you’re building a recommender system, a large language model, a vector database, these are probably the 3 major applications of — the 3 core engines, if you will, of the future of computing as well as a bunch of other stuff. But obviously, these are very — 3 very important ones. They are always, always running.
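
One way to picture the “vector database that augments the large language model” point: data gets vectorised as it is collected, and at answer time the nearest pieces are retrieved and stitched into the model’s prompt. The sketch below is a generic, in-memory illustration of that retrieval-augmentation pattern, not anything Nvidia ships; the documents, helper names and embedding model are all assumptions made for the example.

```python
# Generic retrieval-augmentation sketch: vectorise data as it arrives, retrieve the
# nearest pieces at question time, and prepend them to the LLM prompt.
# A production system would use a real vector database instead of a Python list.
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")
store = []  # list of (vector, text) pairs standing in for a vector database

def ingest(text: str) -> None:
    """Vectorise newly collected data; as Huang notes, this work never stops."""
    store.append((embedder.encode(text, normalize_embeddings=True), text))

def augmented_prompt(question: str, k: int = 2) -> str:
    q = embedder.encode(question, normalize_embeddings=True)
    nearest = sorted(store, key=lambda pair: -float(pair[0] @ q))[:k]
    context = "\n".join(text for _, text in nearest)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

ingest("Order #812 shipped on 4 June via air freight.")
ingest("Our returns window is 30 days from delivery.")
print(augmented_prompt("When did order 812 ship?"))  # this is what gets handed to the LLM
```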

When it comes to inference – or the generation of an output – there’s a lot more that goes into it than just the AI models

The other thing that’s important is these are models, but they’re connected ultimately to applications. And the applications could have image in, video out, video in, text out, image in, proteins out, text in, 3D out, video in, in the future, 3D graphics out. So the input and the output requires a lot of pre and postprocessing. The pre and postprocessing can’t be ignored. And this is one of the things that most of the specialized chip arguments fall apart. And it’s because the length — the model itself is only, call it, 25% of the data — of the overall processing of inference. The rest of it is about preprocessing, postprocessing, security, decoding, all kinds of things like that.
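
A toy pipeline makes the structure of that point clearer: the model’s forward pass is just one stage, and the surrounding pre- and post-processing (decoding, resizing, thresholding, formatting, safety checks) belongs to the same inference workload. The stages, inputs and numbers below are invented purely to illustrate the shape of such a pipeline.

```python
# Invented example of an end-to-end inference pipeline: the model call is only one stage.
def preprocess(image_bytes: bytes) -> list[float]:
    # decode, resize, normalise the input (often a large share of total latency)
    return [b / 255 for b in image_bytes[:16]]

def run_model(features: list[float]) -> float:
    # the neural-network forward pass itself (the "call it, 25%" of the work)
    return sum(features) / len(features)

def postprocess(score: float) -> dict:
    # thresholding, formatting, safety checks, encoding the response
    return {"label": "cat" if score > 0.5 else "dog", "score": round(score, 3)}

print(postprocess(run_model(preprocess(b"\x90" * 16))))
```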

Paycom Software (NYSE: PAYC)

Paycom’s management thinks AI is definitely going to have a major impact in the payroll and HCM (human capital management) industry

I definitely think it’ll be relevant. You can use AI for multiple things. There are areas that you can use it for that are better than others. And they’re front-end things you can use it for direct to the client. There are back-end things that you can use it for that a client may never see. And so when you’re talking about AI, it has many uses, some of which is front end and some back end. And I don’t want to talk specifically about what exactly we’re using it for already internally and what our opportunities would be into the future. But in answer to your question, yes, I do think that over time, AI is going to be a thing in our industry.

PayPal (NASDAQ: PYPL)

PayPal has been working with AI (in fraud and risk management) for several years, and management thinks generative AI and other forms of AI will be useful in the online payments industry

For several years, we’ve been at the forefront of advanced forms of machine learning and AI to combat fraud and to implement our sophisticated risk management programs. With the new advances of generative AI, we will also be able to accelerate our productivity initiatives. We expect AI will enable us to meaningfully lower our costs for years to come. Furthermore, we believe that AI, combined with our unique scale and sets of data, will drive not only efficiencies, but will also drive a differentiated and unique set of value propositions for our merchants and consumers…

…And we are now beginning to experiment with the first generation of what we call AI-powered checkout, which looks at the full checkout experience, not just the PayPal checkout experience, but the full checkout experience for our merchants…

…There’s no question that AI is going to impact almost every function inside of PayPal, whether it be our front office, back office, marketing, legal, engineering, you name it. AI will have an impact and allow us to not just lower cost, but have higher performance and do things that is not about trade-offs. It’s about doing both in there.

Shopify (NASDAQ: SHOP)

Shopify’s management thinks the advent of AI makes a copilot for entrepreneurship possible

But now we are at the dawn of the AI era and the new capabilities that are unlocked by that are unprecedented. Shopify has the privilege of being amongst the companies with the best chances of using AI to help our customers. A copilot for entrepreneurship is now possible. Our main quest demands from us to build the best thing that is now possible, and that has just changed entirely.

Shopify recently launched an AI-powered shopping assistant that is powered by OpenAI’s ChatGPT

We also — you’re also seeing — we announced a couple of weeks ago, Shop at AI, which is what I think is the coolest shopping concierge on the planet, whereby you as a consumer can use Shop at AI and you can browse through hundreds of millions of products and you can say things like I want to have a barbecue and here’s the theme and it will suggest great products, and you can buy it right in line right through the shopping concierge.  

Shopify has been using AI to help its merchants write product descriptions so that merchants can better focus on taking care of their customers

For example, the task of writing product descriptions is now made meaningfully easier by injecting AI into that process. And what does that — the end result of that is merchants spend less time writing product descriptions and more time making beautiful products and communicating and engaging with their customers.

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management sees demand in most end-markets as being mostly soft, but AI-related demand is growing

We observed the PC and smartphone market continue to be soft at the present time, while automotive demand is holding steady for TSMC and is showing signs of softening into the second half of 2023. I’m talking about automotive. On the other hand, we have recently observed incremental upside in AI-related demand.

TSMC’s management thinks it’s a little too early to tell how big the semiconductor market can grow into because of AI, but they do see a positive trend

We certainly, we have observed an incremental increase in AI-related demand. It will also help the ongoing inventory digestion. The trend is very positive for TSMC. But today, if you ask me to quantitatively say how much the amount will increase or what is the dollar content in the server, it’s too early to say. It still continues to be developed. And ChatGPT right now reinforces the already strong conviction that we have in HPC and AI as a structural megatrend for TSMC’s business growth in the future. Whether this one has been included in our previous announcement that we have a 15% to 20% CAGR, the answer is probably partly yes, because for servers, we have already taken that into our consideration. But this ChatGPT, a large language model, is a new application, and we haven’t really had a number to put into our CAGR. But it definitely, as I said, really reinforces our already strong conviction that HPC and AI will give us much higher opportunities in the future…

…We did see some positive signs of people getting much more attention to AI application, especially the ChatGPT area. However, as I said, quantitatively, we haven’t had enough data to sum it up to see what is the contribution and what kind of percentage to TSMC’s business. But we remain confident that this trend is definitely positive for TSMC.

TSMC’s management sees most of the AI work performed today as being focused on training but expects it to flip to inference in the future; nonetheless, high-performance semiconductors will still be needed for AI-related work

Right now, most of the AI concentrate or focus on training. And in the future, it will be inference. But let me say that, no matter what kind of application, they need to use a very high-performance semiconductor component, and that actually is a TSMC’s advantage. So we expect that semiconductor content starting from a data center for [indiscernible] to device and edge device or those kind of things, put all together, they need a very high-speed computing with a very power-efficient one. And so we expect it will add to TSMC’s business a lot.

Tencent (NASDAQ: TCEHY)

Tencent is using AI to deliver more relevant ads to users of its services

We upgraded our machine learning advertising platform to deliver higher conversions for advertisers. For example, we help our advertisers dynamically feature their most relevant products inside their advertisements by applying our deep learning model to the standard product unit attributes we have aggregated within our SPU database. 

Tencent’s management thinks there will be a proliferation of AI models – both foundational as well as vertical – from both established companies as well as startups

So in terms of going forward, we do believe that number one, there’s going to be many models in the market going forward for the large companies, I think each one of us would have a foundation model. And the model will be supporting our own use cases as well as offer it to the market both on a 2C basis as well as on a 2B basis. And at the same time, there will be many start-ups, which will be creating their own models, some of them may be general foundation model. Some of them may be more industry and vertical models and they will be coming with new applications. I think overall, it’s going to be a very vibrant industry from a model availability perspective.

Tencent’s management thinks AI can help improve the quality of UGC (user-generated content)

In terms of the user-to-user interaction type of services like social network and short video network and games, long lead content, there will be — a lot of usages that helps to increase the quality of content, the efficiency at which the content are created as well as lowering the cost of content creation. And that will be net beneficiary to these applications. 

Tencent’s management thinks China’s government is supportive of innovation in AI

Now in terms of — you asked about regulation. Well, the government’s general stance is that it’s supportive of innovation, but the industry has to be regulated. And I think this is not something that’s specific to China, even around the world. And you look at the U.S., there’s a lot of public discussion about having regulation and even the founder of OpenAI has been testifying and asking for regulation in the industry. So I think that is something which is necessary, but we felt that under the right regulation and regulatory framework, the government’s stance is supportive of innovation and the industry will actually have room for healthy growth.

Tesla (NASDAQ: TSLA)

Tesla’s management thinks data will be incredibly valuable when building out AI services, especially in self-driving

Regarding Autopilot and Full Self-Driving. We’ve now crossed over 150 million miles driven by Full Self-Driving beta, and this number is growing exponentially. We’re — I mean, this is a data advantage that really no one else has. Those who understand AI will understand the importance of data — of training data and how fundamental that is to achieving an incredible outcome. So yes, so we’re also very focused on improving our neural net training capabilities as it is one of the main limiting factors of achieving full autonomy.

Tesla’s management thinks the company’s supercomputer project, Dojo, could significantly improve the cost of training AI models

So we’re continuing to simultaneously make significant purchases of NVIDIA GPUs and also putting a lot of effort into Dojo, which we believe has the potential for an order of magnitude improvement in the cost of training. 

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management thinks that generative AI is only as good as the data that it has been trained on

ChatGPT is an amazing technology, but its usefulness is conditioned on the quality of the dataset it is pointed at. Regurgitating bad data, bad opinions or fake news, or AI-generated deepfakes, for example, will be a problem that all generative AI will likely be dealing with for decades to come. We believe many of the novel AI use cases in market today will face challenges with monetization and copyright and data integrity or truth and scale.

Trade Desk has very high-quality advertising data at scale (it’s handling 10 million ad requests per second) so management thinks that the company can excel by applying generative AI to its data

By contrast, we are so excited about our position in the advertising ecosystem when it comes to AI. We look at over 10 million ad requests every second. Those requests, in sum, represent a very robust and very unique dataset with incredible integrity. We can point generative AI at that dataset with confidence for years to come. We know that our size, our dataset size and integrity, our profitability and our team will make Koa and generative AI a promising part of our future.

Trade Desk’s management sees AI bringing positive impacts to many areas of the company’s business, such as generating code faster, generating creatives faster, and helping clients learn programmatic advertising faster

In the future, you’ll also hear us talk about other applications of AI in our business. These include generating code faster; changing the way customers understand and interact with their own data; generating new and more targeted creatives, especially for video and CTV; and using virtual assistants to shorten the learning curve that comes with the complicated world of programmatic advertising by optimizing the documentation process and making it more engaging.

Visa (NYSE: V)

Visa, which is in the digital payments industry, has a long history of working with AI and management sees AI as an important component of what the company does

I’ll just mention that we have a long history developing and using predictive AI and deep learning. We were one of the pioneers of applied predictive AI. We have an enormous data set that we’ve architected to be utilized at scale by hundreds of different AI and ML services that people use all across Visa. We use it — we use it to run our company more effectively. We use it to serve our clients more effectively. And this will continue to be a big part of what we do.

Visa’s management thinks generative AI can take the company’s current AI services to the next level

As you transition to generative AI, this is where — we see this as an opportunity to take our current AI services to the next level. We are, kind of as a platform, experimenting with a lot of the new capabilities that are available. We’ve got people all over the company that are tinkering and dreaming and thinking and doing testing and figuring out ways that we could use generative AI to transform how we do what we do, which is deliver simple, safe and easy-to-use payment solutions. And we’re also spending a fair bit of time thinking how generative AI will change the way that sellers sell, and we all buy and all of us shop. So that is — it’s a big area of opportunity that we’re looking at in many different ways across the company.

Wix (NASDAQ: WIX)

Wix’s management thinks AI can reduce a lot of friction for users in creating websites

First, our goal at Wix is to reduce friction. The easier it is for our users to build websites, the better Wix is. We have proven this many times before, through the development of software and products, including AI. As we make it easier for our users to achieve their goals, their satisfaction goes up, conversion goes up, user retention goes up, monetization goes up and the value of Wix grows…

…  Today, new emerging AI technologies create an even bigger opportunity to reduce friction in more areas that were almost impossible to solve a few years ago and further increase the value of our platform. We believe this opportunity will result in an increased addressable market and even more satisfied users. 

Wix’s management thinks that much more is needed to run e-commerce websites than just AI, and even if AI could eventually automate every layer, that is still very far into the future

The second important point is that there is a huge amount of complexity in software, even with websites, and it’s growing. Even if AI could code a fully functional e-commerce website, for example — which I believe we are still very far from — there is still a need for the site to be deployed to a server, to run the code, to make sure the code continues to work, to manage and maintain a database for when someone wants to buy something, to manage security, to ship the products, to partner with payment gateways, and many more things. So even if you have something that can build pages and content and code…you still need so much more. This gets to my third and final point, which is that even in the far future, if AI is able to automate all of these layers, it will have to disrupt a lot of the software industry, including database management, server management and cloud computing. I believe we are very far from that and that before then, there will be many more opportunities for Wix to leverage AI and create value for our users.

Zoom Video Communications (NASDAQ: ZM)

Zoom management’s approach to AI is federated, empowering, and responsible

We outlined our approach to AI, which is to drive forward solutions that are federated, empowering and responsible. Federated means flexible and customizable to businesses’ unique scenarios and nomenclature. Empowering refers to building solutions that improve individual and team productivity as well as enhance the customers’ experience. And responsible means customer control of their data with an emphasis on privacy, security, trust and safety.

Zoom recently made a strategic investment in Anthropic and management will be integrating Anthropic’s AI assistant feature across Zoom’s product portfolio

Last week, we announced our strategic investment in Anthropic, an AI safety and research company working to build reliable, interpretable and steerable AI systems. Our partnership with Anthropic further boosts our federated approach to AI by allowing Anthropic’s AI assistant Claude to be integrated across Zoom’s entire platform. We plan to begin by layering Claude into our Contact Center portfolio, which includes Zoom Contact Center, Zoom Virtual Agent, and now in-beta Zoom Workforce Engagement Management. With Claude guiding agents towards trustworthy resolutions and empowering self-service for end users, companies will be able to take customer relationships to the next level.

Zoom’s management thinks that having AI models is important, but it’s even more important to fine-tune them based on proprietary data

Having said that, there are two things that are really important. One is the model, right? So OpenAI has a model, and Anthropic, Facebook, Google and those companies do as well. But the most important thing is how to leverage these models and fine-tune them based on your proprietary data, right? That is extremely important when it comes to collaboration and communication. Take a Zoom employee, for example. We have so many meetings, right, and every day our sales team uses Zoom calls with customers. We have accumulated a lot of internal meeting data. How to fine-tune the model with that data is very important, right?

Examples of good AI use cases in Zoom’s platform

We also look at our core meeting platform; meeting summary is extremely important, right? We also have our team chat solution, and we can leverage AI to compose a chat message. Remember, last year we launched an email client as well. How do we leverage generative AI to understand the context, bring all the information relevant to you, and help you generate the message, right? When you send a reply back to customers or prospects, either a chat message or an e-mail, we can leverage AI to generate it. I think there are a lot of areas. Say maybe you are late to the meeting, right? You join the meeting ten minutes late and you really want to understand what has happened. Can you get a quick summary of the past ten minutes? Yes, with generative AI, you can get that as well.

Zoom’s management thinks there are multiple ways to monetise AI

I think in terms of how to monetize generative AI: first of all, take Zoom IQ for Sales, for example. That’s a new service that targets the sales department. That AI technology is based on generative AI, right, so we can monetize it. Also, some features existed even before generative AI became popular. We have a live transmission feature, right? That’s not a free feature. It is a paid feature, right, behind the paywall. And a lot of good features: take the Zoom meeting summary, for example, for enterprise customers. For SMB customers who did not deploy Zoom One, they may not get those features, right? That’s another reason for us to monetize. I think there are multiple ways to monetize, yes.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Alphabet, Amazon, ASML, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, Paycom Software, PayPal, Shopify, TSMC, Tencent, Tesla, The Trade Desk, Visa, Wix, Zoom. Holdings are subject to change at any time.

What We’re Reading (Week Ending 21 May 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 21 May 2023:

1. The Borlaug Report #3: Bioprocessing – Borlaug

A small molecule drug is basically any of the pills in your average household medicine cabinet – they are chemicals, synthesized and pressed into a solid tablet or coated in dissolvable pill plastic. Most of the drugs that fit this description are very popular – meaning lots of them need to be made. This has traditionally been done in a stainless-steel fermenter – such as the one below:

The process for making drugs this way, in large batches, is fairly simple logistically – you are combining active pharmaceutical ingredients and blending them to create your drug, adding things like excipients along the way as you remove moisture, mill, and blend some more. Finally, you remove everything, press the substance into pills, and coat the pill as needed. Run complete! 

Before the next run, one must thoroughly sterilize these large steel tanks with cleaning chemicals. Logistically, this is a perfectly acceptable way of manufacturing chemicals at scale, and versions of this have been done for decades. 

However, things are changing. The future of the pharmaceutical industry is looking increasingly biological in nature, and producing biologics requires a little more TLC (and money). 

How Larger Molecules are Made

While most of small molecule manufacturing can be done with just a couple of discrete pieces of equipment, making a biologic drug has a plethora of steps that can be divided into upstream and downstream bioprocessing. I would add a third category to make clear that parts of “upstream” are generally for R&D purposes only.

R&D (Scale-Up): Before biologic drugs are commercially manufactured, there is a manufacturing component – scientists have to figure out how to ensure the drug can be scalably manufactured without compromising the effectiveness or safety profile of the drug itself. This is usually called “scale up”. The process is basically a guess-and-check exercise of finding cells that excel in a smaller (150mL) bioreactor, and keep finding the best cells as they multiply and test them in larger and larger bioreactors until you’re up to 1-2,000 liters or more. Other conditions, like what goes into the cell culture media, how much oxygen is let in, temperature, stirring speeds, etc. are all tinkered with here. Once the “process” is defined, R&D is over and production can proceed. This has become a key value proposition of a lot of contract manufacturers, because it can be extremely hard to do, especially in gene therapy.

Upstream: Basically, what you are doing in upstream bioprocessing is taking a bunch of cells (the “active ingredient” per se) and putting them in a soup of nutrients (media) that stimulates them to multiply at high rates based on the R&D process you tested out. In doing so, you are getting to a giant vat of soup that has an adequate volume of those cells (the drug) floating inside. In the image below, most of this is “production” in the manufacturing process itself. It’s actually a lot like the stainless steel process up until here, given everything is going into a bioreactor – the reactor is just smaller in this case.

Downstream: Once you have your cell soup, you engage in the “downstream” half of the process which separates those cells from all of the things that you don’t want in your final product. Once you’ve purified and filtered everything, it goes into a freezer (“cryo-preservation”) and is then shipped elsewhere to be put into the right delivery mechanism (IV bags, syringe vials, etc.) and boxed/packaged – the “fill-finish” process. This is the part that is fundamentally different – in small molecule production, you’re much closer to the finished product when things come out of the bioreactor. In biologics, you are separating the active ingredient a lot more carefully from the other stuff you put in the soup.

Most of these drugs are made in smaller batches – they often serve more targeted populations of people than some of the small molecule blockbuster drugs of old. The exception here is antibody drugs, which are still finding themselves going after large populations. Cell and gene therapies, however, are a much different story. After all, healthcare was never going to be one-size-fits-all. If you tried to apply the old method of making drugs to this new reality, you’d realize quickly how much time you are spending cleaning the tanks after every run.

The Single-Use “Innovation”

As mentioned above, the economics of manufacturing small batches of a drug in a stainless steel tank stops making sense very quickly when you have to shut down the process afterward to follow strict sterilization protocols, using lots of water, chemicals, and energy just to be able to start the process up again using the same equipment. Fortunately, the industry has already adapted by commercializing single-use technology.

Single-Use Saves Money

Instead of cleaning out the fermenter every time you use it, you can just line it with a disposable bag made of a fancy polymer that guarantees the same level of sterility. Kind of like using a trash bag instead of washing out the trash can under your sink every time you empty it. The same goes for all of the tubing connecting each subsequent piece of equipment in the workflow, as well as the cartridges, capsules and columns within the machines themselves. After a run is over, downtime can be short – just replace everything and start over.

Turns out, at lower batch sizes, net of energy/water/sterilization costs, this can actually be significantly cheaper, both on COGS and capital investment…
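The batch-size economics are easy to sketch. Below is a minimal, purely hypothetical Python model (none of the figures come from the article): a stainless-steel run carries a large fixed sterilization and turnaround cost, while a single-use run trades that for pricier per-liter disposables, and the crossover is the batch-size effect described above.

```python
# Illustrative only: every figure here is a hypothetical assumption, not data
# from the article. It sketches the argument that fixed sterilization and
# turnaround costs dominate small batches, which is where single-use wins.

def stainless_cost_per_liter(batch_liters):
    # Big fixed cleaning-in-place/turnaround cost per run, cheap raw materials.
    turnaround = 25_000       # water, chemicals, energy, downtime (hypothetical)
    materials_per_liter = 1   # hypothetical
    return turnaround / batch_liters + materials_per_liter

def single_use_cost_per_liter(batch_liters):
    # Tiny turnaround (swap the bag and tubing), pricier disposables per liter.
    turnaround = 500              # hypothetical
    disposables_per_liter = 8     # bags, tubing, filters (hypothetical)
    return turnaround / batch_liters + disposables_per_liter

for liters in (200, 2_000, 20_000):
    print(f"{liters:>6} L: stainless ${stainless_cost_per_liter(liters):,.2f}/L, "
          f"single-use ${single_use_cost_per_liter(liters):,.2f}/L")
# 200 L batches (cell/gene therapy scale): single-use is far cheaper per liter.
# 20,000 L batches (blockbuster scale): stainless steel still wins.
```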

…You should care because this is an easily investable trend for a few key reasons: 

Durable Usage Trends: Manufacturing in biopharma is different from the R&D tools themselves – there is no “fad” factor like you might see in genomics, for example, where researchers will crowd into a hot new space and use the relevant technology until the next thing comes along. These changes can be quick and violent. You know what doesn’t change? The bag you line the bioreactor with and the tubes that connect it to the clarification system. That’s the same regardless of whether someone invents a new gene therapy, a cell therapy, an antibody, or an mRNA drug.

Companies selling this technology don’t benefit from one type of therapy – they benefit from the complexity of all therapies moving through the clinic.

Highly recurring revenue with deep moats: Once you file a drug with the FDA, a lot of things get set in stone – one of these things is the manufacturing process and the equipment that goes into it, specified down to the vendor. Recently, companies have been specifying second sources from a second vendor into these filings to deal with supply chain risks, but the fact remains that once something is “spec’d” into the process, it’s painfully difficult to remove or change it.

This discourages new entrants to the market because the only share you can win is for clinical-scale dosage for new drugs – meaning your initial “TAM” is extremely small. In bioprocessing, scale is a massive barrier to entry and the FDA is a massive barrier to scale.

2. Google I/O and the Coming AI Battles – Ben Thompson

If there is one thing everyone is sure about, it is that AI is going to be very disruptive; in January’s AI and the Big Five, though, I noted that it seemed more likely that AI would be a sustaining innovation:

The story of 2022 was the emergence of AI, first with image generation models, including DALL-E, MidJourney, and the open source Stable Diffusion, and then ChatGPT, the first text-generation model to break through in a major way. It seems clear to me that this is a new epoch in technology.

To determine how that epoch might develop, though, it is useful to look back 26 years to one of the most famous strategy books of all time: Clayton Christensen’s The Innovator’s Dilemma, particularly this passage on the different kinds of innovations:

Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character…

Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.

It seems easy to look backwards and determine if an innovation was sustaining or disruptive by looking at how incumbent companies fared after that innovation came to market: if the innovation was sustaining, then incumbent companies became stronger; if it was disruptive then presumably startups captured most of the value.

My conclusion in that Article was that AI would be a sustaining innovation for Apple, Amazon, Meta, and Microsoft; the big question was Google and search:

That Article assumed that Google Assistant was going to be used to differentiate Google phones as an exclusive offering; that ended up being wrong, but the underlying analysis remains valid. Over the past seven years Google’s primary business model innovation has been to cram ever more ads into Search, a particularly effective tactic on mobile. And, to be fair, the sort of searches where Google makes the most money — travel, insurance, etc. — may not be well-suited for chat interfaces anyways.

That, though, ought only increase the concern for Google’s management that generative AI may, in the specific context of search, represent a disruptive innovation instead of a sustaining one. Disruptive innovation is, at least in the beginning, not as good as what already exists; that’s why it is easily dismissed by managers who can avoid thinking about the business model challenges by (correctly!) telling themselves that their current product is better. The problem, of course, is that the disruptive product gets better, even as the incumbent’s product becomes ever more bloated and hard to use — and that certainly sounds a lot like Google Search’s current trajectory.

I’m not calling the top for Google; I did that previously and was hilariously wrong. Being wrong, though, is more often than not a matter of timing: yes, Google has its cloud and YouTube’s dominance only seems to be increasing, but the outline of Search’s peak seems clear even if it throws off cash and profits for years.

Or maybe not. I tend to believe that disruptive innovations are actually quite rare, but when they come, they are basically impossible for the incumbent company to respond to: their business models, shareholders, and most important customers make it impossible for management to respond. If that is true, though, then an incumbent responding is in fact evidence that an innovation is actually not disruptive, but sustaining.

To that end, I take this Google I/O as evidence that AI is in fact a sustaining technology for all of Big Tech, including Google. Moreover, if that is the case, then that is a reason to be less bearish on the search company, because all of the reasons to expect them to have a leadership position — from capabilities to data to infrastructure to a plethora of consumer touch points — remain. Still, the challenges facing search as presently constructed — particularly its ad model — remain.

3. An Interview with Peter Lynch in 1996, Six Years After Retirement – Conor Mac

When you first went to Fidelity, what was the market like?

Well, after the great rush of the ’50s, the market did brilliantly and everybody says, “Wow, looking backwards, this would be a great time to get in.” So a lot of people got in in the early ’60s and in the mid-60s. The market peaked in ’65-66 at around a thousand, and that’s when I came. I was a summer student at Fidelity in 1966. There were 75 applicants for three jobs at Fidelity, but I caddied for the president for eight years. So that was the only job interview I ever took. It was sort of a rigged deal, I think. I worked there the summer of ’66 and I remember the market was close to a thousand in 1966, and in 1982, 16 years later, it was 777. So we had a long drought after that. So people were concerned about the stock market early in the ’50s. They kept watching and watching, not investing. It started to go up dramatically and they finally caved in and bought big time in the mid-60s and got in at the peak.

But the market really didn’t do much between ’77 and ’82, between the beginning of that bull market, and yet your fund performed quite spectacularly. What do you do?

Well, I think flexibility is one of the key things. I mean I would buy companies that had unions. I would buy companies that were in the steel industry. I’d buy textile companies. I always thought there were good opportunities everywhere, and I researched my stocks myself. I mean Taco Bell was one of the first stocks I bought. I mean people wouldn’t look at a small restaurant company. So I think it was just looking at different companies, and I always thought if you looked at ten companies, you’d find one that’s interesting; if you looked at 20, you’d find two; and if you looked at a hundred, you’d find ten. The person that turns over the most rocks wins the game. And that’s always been my philosophy…

Talk about the change in ’86-87.

Well, I remember in my career you’d say to somebody you worked in the investment business. They’d say, “That’s interesting. Do you sail? What do you think of the Celtics?” I mean it would just go right to the next subject. If you told them you were a prison guard, they would have been interested. They would have had some interest in that subject, but if you said you were in the investment business, they said, “Oh, terrific. Do your children go to school?” It just went right to the next subject. You could have been a leper, you know, and been much more interesting. So that was sort of the attitude in the ’60s and ’70s.

As the market started to heat up, you’d say you were an investor, “Oh, that’s interesting. Are there any stocks you’re buying?” And then people would listen not avidly. They’d think about it. But then as the ’80s piled on, they started writing things down. So I remember people would really take an interest if you were in the investment business, saying “What do you like?” And then it turned and I remember the final page of the chapter would be you’d be at a party and everybody would be talking about stocks. And then people would recommend stocks to me. And then I remember not only that, but the stocks would go up. I’d look in the paper and I’d notice they’d go up in the next three months. And then you’ve done the full cycle of the speculative cycle that people hate stocks, they despised, they don’t want to hear anything about ’em, now they’re buying everything and cab drivers are recommending stocks. So that was sort of the cycle I remember going through from the ’60s and early ’70s all the way to ’87.

Where were you when the Crash of ’87 came?

Well, I was very well prepared for the Crash of 1987. My wife and I took our first vacation in eight years and we left on Thursday in October and I think that day the market went down 55 points and we went to Ireland, the first trip we’d ever been there. And then on Friday, because of the time difference, we’d almost completed the day and I called and the market was down 115. I said to Carolyn, “If the market goes down on Monday, we’d better go home.” And “We’re already here for the weekend. So we’ll spend the weekend.” So it went down 508 on Monday, so I went home. So in two business days, I had lost a third of my fund. So I figured at that rate, the week would have been a rough week. So I went home. Like I could do something about it. I mean it’s like, you know, if there was something I could do.

I mean there I was – but I think if people called up and they said, “What’s Lynch doing,” and they said, “Well, he’s on the eighth hole and he’s every par so far, but he’s in a trap, this could be a triple bogey,” I mean I think that’s not what they wanted to hear. I think they wanted to hear I’d be there lookin’ over – I mean there’s not a lot you can do when the market’s in a cascade but I got home quick as I could.

Why did the Crash of ’87 happen?

Well, I think people had not analyzed ’87 very well. I think you really have to put it in perspective. 1982, the market’s 777. It’s all the way to ’86. You have the move to 1,700. In four years – the market moves from 777 to 1,700 in four years. Then in nine months, it puts on a thousand points. So it puts on a thousand points in four years, then puts on another thousand points in the next nine months. So in August of 1987, it’s 2,700. It’s gone up a thousand points in nine months. Then it falls a thousand points in two months, 500 points the last day. So if the market got sideways at 1,700, no one would have worried, but it went up a thousand in nine-ten months and then a thousand in two months, and half of it in one day, you would have said “The world’s over.” It was the same price.

So it was really a question of the market just kept going up and up, and it went to such an incredibly high price by historic measures: price-earnings multiples, dividend yields, all the other statistics. But people forget that basically, it was unchanged in 12 months. If you looked at September 1986 to October ’87, the market was unchanged. It had a thousand points up and a thousand points down and they only remember the down. They thought, “Oh, my goodness, this is the crash. It’s all over. It’s going to go to 200 and I’m going to be selling apples and pencils,” you know. But it wasn’t. It was a very unique phenomenon because companies were doing fine. Just, you know, you’d call up a company and say, “We can’t figure it out. We’re doin’ well. Our orders are good. Our balance sheet’s good – we just announced we’re gonna buy some of our stock. We can’t figure out why it’s gone down so much.”

Was that the most scared you ever were in your career?

’87 wasn’t that scary because I concentrate on fundamentals. I call up companies. I look at their balance sheet. I look at their business. I look at the environment. The decline was kinda scary and you’d tell yourself, “Will this infect the basic consumer? Will this drop make people stop buying cars, stop buying houses, stop buying appliances, stop going to restaurants?” And you worried about that. In reality, the ’87 decline was nothing like 1990. Ninety, in my 30 years of watching stocks very carefully, was by far the scariest period.

What was so scary about 1990?

Well, 1990 was a situation that was almost exactly six years ago now. In the summer of 1990, the market was around 3,000, the economy’s doing okay, and Saddam Hussein decides to walk in and invade Kuwait. So we have an invasion of Kuwait and President Bush sends 500,000 troops to Saudi Arabia to protect it. There’s a very big concern about, you know, “Are we going to have another Vietnam War?” A lot of serious military people said, “This is going to be a terrible war.” Iraq has the fourth-largest army in the world. They really fought very well against Iran. These people are tough. This is going to be a long, awful thing. So people were very concerned about that, but, in addition, we had a very major banking crisis.

All the major New York City banks, Bank of America, the real cornerstones of this country, were really in trouble. And this is a lot different than if W.T. Grant went under or Penn Central went under. Banking is really tight. And you had to hope that the banking system would hold together and that the Federal Reserve understood that Citicorp, Chase, Chemical, Manufacturers Hanover, Bank of America were very important to this country and that they would survive. And then we had a recession.

Unlike ’87, in 1990 you’d call companies and they’d say, “Gee, our business is startin’ to slip. Inventories are startin’ to pile up. We’re not doing that well.” So you really at that point in time had to believe the whole thing would hold together, that we wouldn’t have a major war. You really had to have faith in the future of this country in 1990. In ’87, the fundamentals were terrific and it was like one of those three-for-two sales at the K-Mart. Things were marked down. It was the same story…

Tell the story about your wife stumbling on a big stock for you in the supermarket.

I had great luck with a company called Hanes. They test-marketed a product called L’Eggs in Boston and I think in Columbus, Ohio, maybe three or four markets. And Carolyn brought this product home; she was buying it and she said, “It’s great.” And she almost got a black belt in shopping. She’s a very good shopper. If we hadn’t had these three kids – when Beth finally goes off to college, I think we’ll be able to resume her training.

But she’s a very good shopper and she would buy these things. She said, “They’re really great.” And I did a little bit of research. I found out the average woman goes to the supermarket or a drugstore once a week. And they go to a woman’s speciality store or department store once every six weeks. And all the good hosiery, all the good pantyhose is being sold in department stores. They were selling junk in the supermarkets. They were selling junk in the drugstores.

So this company came up with a product. They rack-jobbed it, they had all the sizes, all the fits, and they never advertised price. They just advertised “This fits. You’ll enjoy it.” And it was a huge success and it became my biggest position, and I always worried somebody’d come out with a competitive product. About a year-and-a-half after they were on the market, another large company called Kaiser-Roth came out with a product called No Nonsense. They put it right next to L’eggs in the supermarket, right next to L’eggs in the drugstore. I said, “Wow, I gotta figure this one out.”

So I remember buying – I bought 48 different pairs at the supermarket, colours, shapes, and sizes. They must have wondered what kind of house I had at home when I got to the register. They just let me buy it. So I brought it into the office. I gave it to everybody. I said, “Try this out and come back and see what’s the story with No Nonsense.” And people came back to me in a couple of weeks and said, “It’s not as good.” That’s what fundamental research is. So I held onto Hanes and it was a huge stock and it was bought out by Consolidated Foods, which is now called Sara Lee, and it’s been a great division of that company. It might have been a thirty-bagger instead of a ten-bagger if it hadn’t been bought out.

The beginning of the bull market in 1982 and the environment. Were you surprised?

1982 was a very scary period for this country. We’ve had nine recessions since World War II. This was the worst. 14 percent inflation. We had a 20 percent prime rate, 15 percent long governments. It was ugly. And the economy was really much in a free-fall and people were really worried, “Is this it? Has the American economy had it? Are we going to be able to control inflation?” I mean there was a lot of very uncertain times.

You had to say to yourself, “I believe in it. I believe in stocks. I believe in companies. I believe they can control this. And this is an anomaly. Double-digit inflation is a rare thing. It doesn’t happen very often.” And, in fact, one of my shareholders wrote me and said, “Do you realize that over half the companies in your portfolio are losing money right now?” I looked it up; he was right, or she was right. But I was ready. I mean I said, “These companies are going to do well once the economy comes back. We’ve got out of every other recession. I don’t see why we won’t come out of this one.” And it came out and once we came back, the market went north.

Nobody told you it was coming.

It’s lovely to know when there’s a recession. I don’t remember anybody predicting 1982 we’re going to have 14 percent inflation, 12 percent unemployment, a 20 percent prime rate, you know, the worst recession since the Depression. I don’t remember any of that being predicted. It just happened. It was there. It was ugly. And I don’t remember anybody telling me about it. So I don’t worry about any of that stuff. I’ve always said if you spend 13 minutes a year on economics, you’ve wasted 10 minutes.

So what should people think about?

Well, they should think about what’s happening. I’m talking about economics as forecasting the future. If you own auto stocks you ought to be very interested in used car prices. If you own aluminium companies you ought to be interested in what’s happened to inventories of aluminium. If your stock is hotels, you ought to be interested in how many people are building hotels. These are facts. People talk about what’s going to happen in the future, that the average recession lasts 2 years or who knows? There’s no reason why we can’t have an average economic expansion that lasts longer. I mean I deal in facts, not forecasting the future. That’s crystal ball stuff. That doesn’t work. Futile…

Can the little guy play with the big guy in the stock market?

There’s always been this position that the small investor has no chance against the big institutions. And I always wonder whether that’s the person under four-foot-eight. I mean they always said the small investor doesn’t have a chance. And there’s two issues there. First of all, I think that he or she can do it, but, number two, the question is, people do it anyway. They invest anyway. And if they so believe this theory that the small investor has no chance, they invest in a different format.

They said, “This is a casino. I’ll buy stock this month. I’ll sell it a month later,” the same kind of performance that they do everywhere. When they look at a house, they’re very careful. They look at the school system. They look at the street. They look at the plumbing. When they buy a refrigerator, they do homework. If they’re so convinced that the small investor has no chance, the stock market’s a big game and they act accordingly, they hear a stock and they buy it before sunset, they’re going to get the kind of results that prove the small investor can do poorly.

Now if you make a mistake on a car, or you make a mistake on a house, you don’t blame the professional investors. But if you do stupid research, you buy some company that has no sales, no earnings, a terrible financial position and it goes down, you say, “Well, it’s because of the program trading of those professionals.” That’s because you didn’t do your homework. So I’ve tried to convince people they can do the job, they can do very well, but they have to do certain things…

Talk about market timing.

The market itself is very volatile. We’ve had 95 years completed this century. We’re in the middle of 1996 and we’re close to a 10 percent decline. In the 95 years so far, we’ve had 53 declines in the market of 10 percent or more. Not 53 down years. The market might have been up 26, finished the year up four, and had a 10 percent correction along the way. So we’ve had 53 declines in 95 years. That’s once every two years. Of the 53, 15 have been 25 percent or more. That’s a bear market. So 15 in 95 years, about once every six years you’re going to have a big decline. Now no one seems to know when they’re gonna happen. At least if they know about ’em, they’re not telling anybody about ’em.

I don’t remember anybody predicting the market right more than once, and they predict a lot. So they’re gonna happen. If you’re in the market, you have to know there are going to be declines. They’re going to come, and every couple of years you’re going to get a 10 percent correction. That’s a euphemism for losing a lot of money rapidly. That’s what a “correction” is called. And a bear market is a 20-25-30 percent decline. They’re gonna happen. When they’re gonna start, no one knows. If you’re not ready for that, you shouldn’t be in the stock market.

I mean stomach is the key organ here. It’s not the brain. Do you have the stomach for these kind of declines? And what’s your timing like? Is your horizon one year? Is your horizon ten years or 20 years? If you’ve been lucky enough to save up lots of money and you’re about to send one kid to college and your child’s starting a year from now, you decide to invest in stocks directly or with a mutual fund with a one-year horizon or a two-year horizon, that’s silly. That’s just like betting on red or black at the casino.

What the market’s going to do in one or two years, you don’t know. Time is on your side in the stock market. It’s on your side. And when stocks go down, if you’ve got the money, you don’t worry about it and you’re putting more in, you shouldn’t worry about it. You should worry about what are stocks going to be 10 years from now, 20 years from now, 30 years from now. I’m very confident.

If you had invested in ’66, it would have taken 15 years to make the money back.

Well, from ’66 to 1982, the market basically was flat. But you still had dividends in stocks. You still had a positive return. You made a few percent a year. That was the worst period other than the 1920s, in this century. So companies still pay dividends, even though if their stock goes sideways for ten years, they continue to pay you dividends, they continue to raise their dividends. So you have to say to yourself, “What are corporate profits going to do?” Historically, corporate profits have grown about eight percent a year. Eight percent a year. They double every nine years. They quadruple every 18. They go up six-fold every 25 years. So guess what? In the last 25 years corporate profits have gone up a little over six-fold, the stock market’s gone up a little bit over six-fold, and you’ve had a two or three percent dividend yield, you’ve made about 11 percent a year. There’s an incredible correlation over time.

So you have to say to yourself, “What’s gonna happen in the next 10-20-30 years? Do I think the General Electrics, the Sears, the Wal-Marts, the Microsofts, the Mercks, the Johnson & Johnsons, the Gillettes, Anheuser-Busch, are they going to be making more money 10 years from now, 20 years from now? I think they will.” Will new companies come along like Federal Express that came along in the last 20 years? Will new companies come along like Amgen that make money? Will new companies come along like Compaq Computer? I think they will. There’ll be new companies coming along that make money. That’s what you’re investing in.
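Lynch’s figures are just the arithmetic of compounding at roughly 8 percent a year; here is a quick sketch using only the rates he quotes (the 2.5 percent dividend yield is an assumed illustrative figure within the 2-3 percent range he mentions).

```python
# Compound growth at the ~8%/year rate Lynch cites for corporate profits.
growth = 1.08
for years in (9, 18, 25):
    print(f"{years:>2} years: {growth ** years:.1f}x")
# ~2.0x in 9 years, ~4.0x in 18, ~6.8x in 25: his "double every nine years,
# quadruple every 18, up six-fold every 25".

# Layer a hypothetical 2.5% reinvested dividend yield on top of the ~8%
# growth and the total return lands near the ~11%/year figure he mentions.
total = 1.08 * 1.025
print(f"Total return: {(total - 1) * 100:.1f}% per year")  # ~10.7%
```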

4. Roughly Right or Precisely Wrong – Ben Carlson

I have a love-hate relationship with historical market data.

On the one hand, since we can’t predict the future, calculating probabilities from the past in the context of the present situation is our only hope when it comes to setting expectations for financial markets. On the other hand, an overemphasis on historical data can lead to overconfidence if it makes you believe that backtests can be treated as gospel.

In some ways markets are predictable in that human nature is the one constant across all environments. This is why the pendulum is constantly swinging from manias to panics. In other ways markets are unpredictable because stuff that has never happened before seems to happen all the time…

…Let’s say you put $5,000 into the initial S&P 500 ETF (SPY) right around when it started at the beginning of 1994. On top of that you also contribute $500/month into the fund. Simple right?

Here’s what this scenario looks like:

Not bad.

This is the summary:

  • Initial investment (start of 1994): $5,000
  • Monthly investment: $500
  • Total investments: $181,000
  • Ending balance (April 2023): $915,886

Plenty of volatility along the way but this simple dollar cost averaging strategy would have left you with a lot more money than you initially put into it.

Even though things worked out swimmingly by the end of this scenario there were some dark days along the way. You can see on the chart where the purple line dips below the blue line in 2009 by the end of the stock market crash from the Great Financial Crisis. By March of 2009 you would have made $96,000 in contributions with an ending market value of a little more than $94,000. So that’s more than a decade-and-a-half of investing where you ended up underwater.

It wasn’t prudent but I understand why so many investors threw in the towel in 2008 and 2009. Things were bleak. Everything worked out phenomenally if you stuck with it but investing in stocks can be painful at times…
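If you want to see the mechanics behind these numbers, here is a minimal sketch of the dollar-cost-averaging arithmetic. The flat 9 percent annual return is a simplifying assumption of ours, not the realized SPY path, but it lands in the same ballpark as the article’s ending balance.

```python
# $5,000 up front, then $500 at the end of every month from Jan 1994 through
# Apr 2023 (352 contributions). The flat 9%/year return is an assumption for
# illustration only; the realized SPY path was much lumpier.
monthly_return = 0.09 / 12
balance = 5_000.0
contributed = 5_000.0

for _ in range(352):
    balance *= 1 + monthly_return  # one month of growth
    balance += 500                 # end-of-month contribution
    contributed += 500

print(f"Total contributed: ${contributed:,.0f}")  # $181,000
print(f"Ending balance:    ${balance:,.0f}")      # roughly $930,000 at a flat 9%
```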

…Just for fun, let’s reverse this scenario to see what would happen if you started out in 1994 with the same ending balance but now you’re taking portfolio distributions.

Like this:

  • Initial balance (start of 1994): $915,886
  • Annual portfolio withdrawal: 4% of portfolio value

An ending balance of more than $4 million while spending $1.7 million along the way from a starting point of a little less than $1 million is pretty, pretty good.
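Here is the same kind of sketch for the withdrawal scenario. Because a percentage-of-portfolio withdrawal is path-dependent, an assumed flat 9 percent return will not reproduce the article’s exact figures of roughly $4 million left over and $1.7 million spent; the sketch only shows how the rule operates.

```python
# Start with the prior scenario's ending balance and withdraw 4% of the
# *current* portfolio value at the start of each year. The flat 9%/year
# return is an assumption; the article's figures came from the actual,
# path-dependent SPY returns.
balance = 915_886.0
total_withdrawn = 0.0

for _ in range(29):                  # 1994 through 2022
    withdrawal = 0.04 * balance      # 4% of whatever the portfolio is worth now
    balance -= withdrawal
    total_withdrawn += withdrawal
    balance *= 1.09                  # assumed flat annual return

print(f"Total withdrawn: ${total_withdrawn:,.0f}")  # roughly $2.2M at a flat 9%
print(f"Ending balance:  ${balance:,.0f}")          # roughly $3.4M at a flat 9%
```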

The usual caveats apply here — past performance says nothing about future performance, no one actually invests in a straight line like this, no one invests in a single fund like this, no one uses this type of withdrawal strategy in retirement nor do they invest 100% in stocks while doing so, etcetera, etcetera, etcetera.

5. ‘I can’t make products just for 41-year-old tech founders’: Airbnb CEO Brian Chesky is taking it back to basics – Nilay Patel and Brian Chesky

Lots of companies are bringing their people back to the office. The idea that, you know, people are going to be in a different house every time you see them on a Zoom call has somewhat faded. Is that still part of the bet for Airbnb? Or are you shifting to this other model?

Yeah. Let me tell you how I think it’s going to play out. And of course, we’re just all in the business of predicting the future, and the problem is it doesn’t always age well. I think that, like, pure work from home or pure remote is ending.

I generally think the future is flexibility. Here’s the calculation every CEO has to make: are you more productive having people physically in an office together and then constraining who you hire to a 30-mile or a 60-mile commuting radius to the office?

Or by allowing your team to be able to hire people from anywhere? And the truth is, it probably depends on the role. A lot of our software engineers or accountants, certain types of lawyers, we probably don’t need them physically in the office with everyone else. There’s certain creative functions or people on certain teams that we probably do want together physically quite a lot.

And then the question is, “Do we need them together 50 weeks a year?” And the answer for us is no. We actually go in spurts. We do these product releases, so we kind of need people together months at a time, and they can choose to live here, but if they want to go away for a couple months, if people want to go away for the summer, that’s possible.

I think we’re going to start to live in a much more nuanced world where the companies aren’t going to have all the people in the office. They’re going to decide that some roles are most effective being on a small team in the office, but a giant sea of desks probably isn’t the most effective thing, and many roles will be much more effective when allowing flexibility so you can have a global talent pool.

I think there’s going to be a post-pandemic equilibrium that we haven’t seen yet that’s going to play out over the coming years…

You have a lot of decisions to make. You’re obviously very thoughtful about how you make decisions and how you see the company going. How do you make decisions? What’s your framework?

Can I answer that question with a story? So, in 2011, I had my first crisis. We had our first crisis. A woman named EJ was a host in San Francisco. And one day, someone came, and they trashed her apartment. And I went on, and I wrote a letter. I published it on TechCrunch and I said, “We’ve resolved the issue.”

And then, of course, EJ said, “No, you didn’t resolve the issue.” And I was misinformed, and this crisis brewed. And then basically what happened was within days, every time I tried to communicate something, I kind of seemed to keep making it worse. And then I hired these crisis communications professionals, and I had these outside counsels, and they were giving me what seemed like good counsel.

They basically said, “Be careful about admitting fault. Be careful about this. Don’t say that. Do this, do that.” And every time I got advice and every time I tried to manage to an outcome, I seemed to make the situation worse because I think what people really wanted was authenticity. They really wanted me to, you know, just speak from the heart.

And at some point there was — this is in 2011—we were one of the first hashtags. There was #ransackgate and #ripAirbnb. I mean, people literally thought we weren’t going to recover from this because they thought we had no solution.

And at this point, I came to a conclusion that the most important decision I’m going to make would be based on principles, not on outcomes. In other words, I was going to make principle decisions, not business decisions. And the principle decision is: if I can’t figure out the outcome, how do I want to be remembered?

And I said, “Well, I don’t know how this is going to play out. Whatever I’m going to do is probably going to make the situation worse. But I’m just going to say wholeheartedly, ‘I’m sorry.’ I’m going to tell the story, and I’m going to do something crazy. I’m going to do more than what is expected of me.”

What was expected was we make it right for customers. So we ended up with this $50,000 guarantee. It started as a $5,000 guarantee. Marc Andreessen came by my office at midnight. He had just funded the company, and he said, “Add a zero.” And then suddenly we said we would provide $50,000 protection retroactively to everyone on the platform.

And it actually was one of the biggest moments in the company. And ever since then, I came to the conclusion that I’m going to try to make principle decisions, not business decisions. And then this led to another development, which is first principle thinking, which I’m sure you’re aware of. I think a lot of us think by analogy, but if you can understand the first principles of something, then you can really make a decision.

So I’ve been applying this ever since. And it all came back to us during the pandemic because, in January and February 2020, I noticed our business fell off a cliff. And within eight weeks, we lost 80 percent of our business. And on March 15th, the Ides of March, we called an emergency board meeting.

It was a Sunday, I’ll never forget it. And in this board meeting, I wrote out a series of principles about how to manage the crisis. And the first principle I set is we’re going to act decisively. The second is we’re going to preserve cash. The third is we’re going to act with shareholders in mind. And the fourth is we’re going to win the next travel season.

And I had even more detailed principles, and I said to the board, “I’m going to have to make like a thousand decisions a week, and so I can’t run every decision by you. So instead, let’s agree on the principles, and I’ll use those principles to make these decisions.” And I think a lot of people really struggle in a crisis or in times when they’re moving quickly because they don’t have data or the data’s changing.

But if you have a deep understanding of something, that’s better. My issue with A-B experimentation, for example, is that a lot of times, when people choose A or B, they don’t know why B worked. So let’s say, “Oh B works.” Well, why did B work? Because if you don’t know why B works, then you can never change it because you don’t actually have any intellectual property developed around B.

So experimentation’s fine if you know why the experiment worked and if it reinforced your understanding. So I try to make decisions based on first principles. And those first principles are based on whatever we believe in, and what we believe in might be right, might be wrong in the eyes of others, but that’s how we do it.

And you know, it really comes down to listening to people. I try to have qualitative and quantitative information, art and science. I try to balance being in the lab with being in the field. And I try to be as close as I can to decisions as possible. I try to get emotionally invested. A lot of people say if you do a layoff or fire people, don’t get emotionally invested. 

I say that’s exactly what you want. You want to understand deeply all the costs. And then if you can still make the decision, then you know you’ve made the right decision. So I generally say be principled, be as close to the decision-making as possible, and get as emotionally invested in something as you can. And then explain your thinking. The exercise of having to explain your thinking clarifies your thinking. A lot of people, they feel something, but they can’t explain their thinking. It’s a good indication that their thinking is still cloudy.

So that’s kind of how we do it. It’s first-principle oriented. It’s clear, it’s hopefully compassionate. We get as close to the decision, and as connected to emotions, as possible. It’s the head and the heart.

The last time you were on, we talked a lot about the structure of the company. 

You said that when the pandemic hit, the business had cratered 80 percent. A good quote you said, that I think about all the time, is, “I stared into the abyss.” And then you restructured the company. You had a functional startup structure. Then you’d gone into a divisional structure, and you said, “You know what?

I’m pulling this back into a functional structure. We have one division. I’m going to run it all. I’m going to make sure I see everything.” You’re talking about going through customer service complaints now. Are you still in that structure? Has it worked?

Yeah. I mean, we are still in that structure. We decided, let’s go back to being a functional organization. And I actually drew inspiration from Apple. Around the same time that the pandemic hit is when I started talking to Jony Ive.

We brought him on board a little later. I also hired somebody who changed the trajectory of the company named Hiroki Asai. He was the creative director at Apple, and they really kind of brought me along on this methodology Steve Jobs had. Steve Jobs came back to Apple in 1997.

They were like 90 days from bankruptcy or maybe even fewer. And it was divisionalized. I think it had something like 80 products. And he did two things. He cut most of the products, and he went back to a functional organization, and that’s what we did.

And the other thing we did, which seemed crazy at the time, and it’s now totally intuitive, is we put the entire company on one road map. So for most tech companies, every executive has their own swim lanes. We said, “You have no swim lanes. Everyone works on everything together. Your only swim lane is your function.

We’re going to all collaborate.” I said, “I’m not going to push decision-making down. I’m going to pull decision-making in.” I’m the chief editor. I’m like an orchestra conductor, and I have to understand enough about each instrument to make sure it creates one sound. The other thing I said is, “We’re going to connect product and marketing together.”

Product at a company are like chefs, marketing are like waiters, and they never allow the waiters in the kitchen, or they get yelled at. And I thought, well, what if you actually have them collaborate on product? What if marketing, you know, challenges engineering and engineering inspires marketing? They could actually be connected.

And I think you can tell the health of the organization by how connected engineering and marketing are. And so we did this. We then started doing release cycles, which meant instead of doing this agile, bottoms-up AB testing, shipping continuously every minute of every day… Now we do some of that still. We said 70–80 percent of our product release is going to be done like hardware.

We’re going to ship stuff twice a year. And the reason we’re going to do that is we’re going to embrace constraints. When you ship stuff at the same time, everyone’s on a deadline. Then I meet with every single team every week, every two weeks, or every four weeks. I’m working and editing the work. I’m making sure it all fits together.

It ladders up to a cohesive product story. And then we have this function called product marketing. It’s actually outbound marketing plus product management in one role. 

This is very much like Apple, by the way. Apple has product marketing at scale.

Yes, and we took that from them because they’re really good at talking about the product.

We don’t have senior product managers at Airbnb. If you’re a senior product manager, you also have to do outbound marketing. You’re not allowed to decouple the roles. We have no pure product marketers who don’t do product management.

We don’t allow that. And their job is to keep the entire company stitched together and make sure we understand the story we’re telling, who the product’s for, and make sure everything we deliver ships to that product. So we now do two releases a year. The reason we’re talking is because we just did our summer release for May, and what we found is this: when I told people, Nilay, about this development process, the first thing everyone said is, “This is going to be horrible. No one’s going to wanna work together. It’s going to stifle innovation. It’s going to be too top-down. You’re not going to have as many ideas. It’s going to be a bottleneck,” et cetera. “I can tell you all the reasons this is a bad idea.” What we found is we ship way faster. We have now shipped 340 upgrades. We shipped over 53 upgrades today.

It creates a drumbeat for the organization, a rhythm. There is very little bureaucracy. Now we do say no to more things. There are some downsides, like you can’t do as many divergent things because everything is cohesive and integrated. But anything on the road map ships. Almost never do we greenlight something and it doesn’t happen.

So the answer to your question is we’ve been able to ship significantly faster and the paradox is that people are actually happier. As I created more constraints, as the culture got a little more top-down, as it was more integrated… Everyone, if they could have, 99 percent of people would’ve voted against this idea [at the beginning] because it doesn’t intuitively sound like something fun to work in. Almost everyone, at least people still here, seem to be happier. Now, maybe there’s a bias of the people who like it decided to stay, and the people who don’t like it decided to leave. There might be that, too. I want to acknowledge that.

But ultimately I do think the company’s much more productive, and it actually bears out financially. When we were doing this bottoms-up free for all approach, which is kind of my pejorative for it, we were basically losing $250 million in EBITDA a year. We were not profitable. Growth was slowing, cost was rising.

Last year, we did $3.5 billion in free cash flow and actually I believe, Nilay — this might be true now — for every dollar we earn, I think we earn more free cash flow than Apple, Google, or Microsoft. More than 40 percent of revenue becomes free cash flow. Now we’re not nearly the size of them.

That’s not the point. But the point is it’s extremely efficient. It helps to be a marketplace that’s capital-light, but it also helps to have one marketing department. It helps to not have a lot of waste. It helps to have one rhythm of the organization…

…There’s like an AI stack. The bottom of the AI stack is what you might call base models. And there’s like three to five base models. So Google has, like, maybe a couple of ‘em. OpenAI has one.

Anthropic has one. Microsoft Research kind of has one, though they seem to be mostly tied to OpenAI at this point. So those are the base models. Think of it like a highway. Those are infrastructure companies. They’re building the highway. We’re not going to be building base models ‘cause we’re not going to be building infrastructure.

The layer on top of that is now tuning the models. Tuning the models is going to be really important. If you and I go to ChatGPT and we ask it a question, we’re probably going to get something like the same answer. And that would be because ChatGPT doesn’t know your preferences and doesn’t know my preferences.

And for many queries getting the same answer is great. But what if you ask, like, “Hey, where should I go on vacation?” Or like, “Who’s a good person to, like, date?” Well, depends. Who are you? What do you want in your life? And so I think that there needs to be a personalization layer on top of AI, and that’s going to come from the data you have and the permission you get from customers.

Now, I think our vision is eventually one day, we’re going to be one of the most personalized AI layers on the internet.

We’re going to design, hopefully, some of the leading AI interfaces. We’re going to basically try to deeply understand you, learn about you, care about you, and be able to understand your preferences…

…Here’s one of the great things that AI does: think about it — 130 years ago, only probably a few people could use a camera, right? It was a highly technical thing. It was expensive. Most people take photos now.

Anyone in the world can basically use a camera. They’re ubiquitous, they’re on our phones. I kind of think software development’s going to be like that, that pretty soon, everyone will be able to develop software because software is just a language you have to learn. Now there’s always going to be development below the stack at the deeply technical level, but a lot of that front-end development is going to be replaced by natural language. As this happens, so many more people can develop software, and as so many more people can develop software, I think you’re going to see software in everything.

We’re going to have to create interface standards because we don’t want to ping-pong back and forth and just be totally confused. I don’t even think search is the right use case for every task. Sometimes it’s voice, sometimes it’s a conversation.

Ultimately, it would be great if interfaces understood you better, right? This is a problem with Airbnb. Every time you come back to Airbnb, we show you a whole bunch of categories. And if you’re a budget traveler, we show you lux.

And if you’re wealthy, we might show you Airbnb Rooms. We should know more about you. The way companies have tried to solve personalization is through data regression of clicks, right? So if I clicked on something in the past, then I’m going to show you that in the future. But that’s actually not a great way to understand somebody.

Like maybe I went on Amazon, I bought a bunch of alcohol, but I’m actually now a recovering alcoholic and I’m trying not to drink. And you don’t know that, and the mini bar has alcohol there because I order all the time, but I actually feel bad about it and I actually don’t want to drink.

And so I think companies developing a better understanding of you, having a sense of your personalized preferences, having that interface is going to be really important. And I actually think it allows many more people to participate in the economy because, in the past, the only people that could build software were engineers…

You’ve given me so much time. Last question. We’ve talked a lot about Apple and how inspired you are by Apple structures, by their organization, by their processes, by Steve Jobs.

You do have this long-standing deal now with Jony Ive and his agency LoveFrom. Have they shipped anything with you? What does that relationship look like? What has it accomplished for you?

In 2014, we were designing our new logo, what people know now as our logo, and I knew Jony Ive, and I sent it to him, and he basically talked to me about how you shouldn’t have flat lines, you should have this continuous curvature.

And so he and the team spent some time, and he redesigned the splines of the curve. And so the actual logo that you see on Airbnb, the final mark, was designed by Jony Ive. I kept in touch with him, and then when I read that he left Apple, I said, “We gotta work together.” And we started talking a lot in the beginning of 2020.

Again, purely by coincidence, it happened during a period when I felt like we had a crisis almost the size of Apple’s crisis in the late ’90s. And I turned to him, and obviously, he gave me a lot of great advice. He told me a couple things.

The first thing is we used to talk about our mission as belonging. And the problem with using the word belonging is I noticed that employees were confusing belonging with inclusion. And then they were conflating inclusion with the lack of discrimination. And then they said, “Well, our mission is to not discriminate.”

And I said, “Well, that’s a really low bar.” Of course, you shouldn’t discriminate, but when we say belonging, it has to be more than just inclusion. It has to actually be the proactive manifestation of meeting people, creating connections in friendships. And Jony Ive said, “Well, you need to reframe it. It’s not just about belonging, it’s about human connection and belonging.”

And that was, I think, a really big unlock. The next thing Jony Ive did was create this book for me, a book of his ideas, called “Beyond Where and When,” and he basically said that Airbnb should shift beyond where and when to who and what.

Who are you and what do you want in your life? And that was a part of the inspiration behind Airbnb categories, that we wanted people to come to Airbnb without a destination in mind and that we could categorize properties not just by location but by what makes them unique, and that really influenced Airbnb categories and some of the stuff we’re doing now. 

The third thing is he really helped me think through the sense that Airbnb is a community. You know, this is really interesting. Most people think of Jony Ive as like somebody who deals with atoms, like aluminum and glass.

But actually he said that he spent 30 years building tools. And what he realizes now is that we don’t just need more tools — we need more connections. And I thought that was a really profound thing. And he really helped us think of ourselves — this is a subtle word shift, Nilay — as going from a marketplace to a community, because in a marketplace, everything’s a transaction, and in a community, everything should not be a transaction.

Otherwise, those aren’t real relationships or real connections. And so he has helped me think about how to shift from a marketplace to a community. I think some of that inspiration is what led to Airbnb Rooms, what led to the creation of the host passport. But he and the team are heads-down with me working on stuff that’s going to ship next May and next November.

One of the things Jony and I talked about is we need permission to do new things. So let me just rewind. It’s the year 2005, maybe 2006, and everyone was hoping that Apple would come out with an iPhone. And in January 2007, Steve Jobs announced it.

Now the reason we all wanted Steve Jobs to come out with an iPhone in 2006 and 2007 was because most of us loved our iPods. None of us were asking Gateway computer to come out with a phone because we didn’t love Gateway’s laptops. And so basically I think we need to have permission to do new, innovative things.

And we have permission when people love the core thing. And I came to the conclusion that we needed to focus much more on our core service. People were still complaining about pricing, cleaning fees, all sorts of things about Airbnb. And again, it comes from this disease that happens to a lot of founders or this thing that happens where we fall out of love with our core business.

And, as I told you a couple years ago, when we almost lost our business, we stared into the abyss. There’s something about almost losing something that makes you fall back in love with it. And I think maybe that happened to our core business, and we said, “Before we go on to new things, before we do whatever we’re going to do, we’re going to get back to the core, back to the basics, and really just focus on making this product something that people love.”

And so for the last few years, that’s what we’ve tried to do. We’ve tried to basically fix as many things as possible. That’s why we created a blueprint, something that Jony and others helped inspire, which is to say, “Let’s be systematic about the complaints. Let’s be systematic by how we address the feedback, and let’s tell a story to the community about all the things we’re fixing.”

And my hope is that by the end of this year, we’ll have addressed, to some extent, every single thing people are complaining about, so that they really do love the service and it feels truly delightful.

So our vision for this company is the following: that Airbnb is a marriage of art and science, that we’re a truly creatively-led company. Our two core values are basically design creativity married with technology, and then this idea of community and connection. A company with this real humanistic feel: you come to Airbnb, we ask you a series of questions.

We learn about you. We understand who you are, what you want. We design these incredibly simple interfaces, and then our job, as the host, is to develop these really robust matching algorithms, and then we can match you to whatever you want.

And so if we can build this incredibly robust identity system, if we can have the most robust profiles, almost like a physical social network where we can connect people together in this community, if we can use AI to augment customer service, to deeply understand and resolve your issues within seconds, not just minutes or hours, and we can then build these incredibly simple interfaces where we match you to whatever you want in your life, that’s basically the idea of where we’re trying to go. And Jony Ive and his team, they’re working on things just in that area.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, Apple, Meta, and Microsoft. Holdings are subject to change at any time.

When Share Buybacks Lose Their Power

Apple’s share buybacks have greatly benefited shareholders in the past. But with share prices much higher, buybacks may be less powerful.

Share buybacks can be a powerful tool for companies to boost their future earnings per share. By buying back shares, a company’s future earnings can now be shared between fewer shares, boosting the amount each shareholder can get.

Take Apple for example. From 2016 to 2022, the iPhone maker’s net income increased by 118%, or around 14% annualised. That’s pretty impressive. But Apple’s earnings per share (EPS) outpaced net income growth by a big margin: EPS advanced by 193%, or 19.6% per year.

The gap exists because Apple used share buybacks to decrease its share count. Its outstanding share count dropped by around 30%, or an annualised rate of close to 5.7%, over the same period.

But the power of buybacks is very much dependent on the price at which they are conducted. If a company’s share price represents a high valuation, earnings per share growth from buybacks will be less, and vice versa.

Valuations matter

To see how buybacks lose their effectiveness when valuations rise, let’s examine a simple illustration. There are two companies, A and B, that both earn a $100 net profit every year and have 100 shares outstanding, giving each an earnings per share of $1. Let’s say Company A’s share price is $10 while Company B’s is $20. Both companies use all of their Year 1 profits to buy back their own shares.

Companies A and B would end the year with 90 and 95 shares outstanding, respectively. From Year 2 onwards, Company A’s earnings per share will be $1.11, or an 11% increase. Company B, on the other hand, only manages to increase its earnings per share to around $1.053, or 5.3%.
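For readers who like to check the arithmetic, here is a minimal sketch of the illustration above in Python; the helper function is purely for illustration, and the figures are the same ones used in the example.

```python
# Minimal sketch of the buyback illustration: both companies earn $100
# and spend all of it repurchasing their own shares.
def eps_after_buyback(net_profit, shares_outstanding, share_price, buyback_spend):
    """EPS after using buyback_spend to repurchase shares at share_price."""
    shares_repurchased = buyback_spend / share_price
    remaining_shares = shares_outstanding - shares_repurchased
    return net_profit / remaining_shares

base_eps = 100 / 100                              # $1.00 for both companies
eps_a = eps_after_buyback(100, 100, 10, 100)      # Company A, $10 share price
eps_b = eps_after_buyback(100, 100, 20, 100)      # Company B, $20 share price

print(f"Company A: EPS ${eps_a:.3f} (+{eps_a / base_eps - 1:.1%})")  # ~$1.111, +11.1%
print(f"Company B: EPS ${eps_b:.3f} (+{eps_b / base_eps - 1:.1%})")  # ~$1.053, +5.3%
```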

Buybacks are clearly much more effective when the share price, and thus the valuation, is lower.

The case of Apple

As I mentioned earlier, Apple managed to decrease its share count by 30% over the last six years or 5.7% per year. A 30% decrease in shares outstanding led to a 42% increase in EPS*. 

Apple was able to decrease its share count so significantly during the last six years because its share price was trading at relatively low valuations. Apple also used almost all of its free cash flow that it generated over the last six years to buy back shares. The chart below shows Apple’s price-to-earnings multiples from 2016.

Source: TIKR

From 2016 to 2019, Apple’s trailing price-to-earnings (PE) ratio ranged from 10 to 20. But since then, the PE ratio has increased and now sits around 30.

In the last 6 years – from 2016 to 2022 – Apple was able to reduce its share count by 30%, or 5.7% a year. But with its PE ratio now at close to 30, the impact of Apple’s buybacks will not be as significant. If Apple continues to use 100% of its free cash flow to buy back shares (and assuming its free cash flow stays roughly in line with its earnings), a year’s buybacks can retire only around 1/30, or 3.3%, of its shares. Although that’s a respectable figure, it doesn’t come close to what Apple achieved in the 6 years prior.

At an annual reduction rate of 3.3%, Apple’s share count will only fall by around 18% over six years, compared to the 30% seen from 2016 to 2022. This will increase Apple’s earnings per share by around 22% versus the actual 42% clocked in the past six years.
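A rough sketch of the arithmetic behind this projection, using the same simplifying assumptions as above (a PE of about 30, free cash flow roughly equal to earnings, and all of it spent on buybacks), is shown below.

```python
# Rough projection sketch: with a PE of ~30 and roughly all free cash flow
# (assumed to be about the size of earnings) spent on buybacks, the share
# count shrinks by about 1/PE each year.
pe_ratio = 30
years = 6

annual_reduction = 1 / pe_ratio                        # ~3.3% of shares retired per year
remaining_share_fraction = (1 - annual_reduction) ** years
share_count_drop = 1 - remaining_share_fraction        # ~18% over six years
eps_boost = 1 / remaining_share_fraction - 1           # ~22% EPS increase from buybacks alone

print(f"Annual share count reduction: {annual_reduction:.1%}")
print(f"Six-year share count drop:    {share_count_drop:.1%}")
print(f"Six-year EPS boost:           {eps_boost:.1%}")
```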

In closing

Apple is a great company that has rewarded shareholders many times over during the last few decades. In addition to growth in the underlying business, timely buybacks have also contributed to the fast pace of Apple’s earnings per share growth.

Although I believe Apple will likely continue to post stellar growth in the coming years with the growth of its services business and its potential in emerging markets, growth from buybacks may not be as powerful as it used to be.

When analysing the power of buybacks, investors should monitor the valuation of the stock and assess whether the buybacks are a worthwhile use of the company’s capital.

* Calculated as 1 / (1 − 0.3) − 1 ≈ 42.9%


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Apple. Holdings are subject to change at any time.

What We’re Reading (Week Ending 14 May 2023)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 14 May 2023:

1. Why Conscious AI Is a Bad, Bad Idea – Anil Seth

To get a handle on these challenges—and to clarify the confusing and hype-ridden debate around AI and consciousness—let’s start with some definitions. First, consciousness. Although precise definitions are hard to come by, intuitively we all know what consciousness is. It is what goes away under general anesthesia, or when we fall into a dreamless sleep, and what returns when we come round in the recovery room or wake up. And when we open our eyes, our brains don’t just process visual information; there’s another dimension entirely: Our minds are filled with light, color, shade, and shapes. Emotions, thoughts, beliefs, intentions—all feel a particular way to us.

As for intelligence, there are many available definitions, but all emphasize the ability to achieve goals in flexible ways in varied environments. Broadly speaking, intelligence is the capacity to do the right thing at the right time.

These definitions are enough to remind us that consciousness and intelligence are very different. Being intelligent—as humans think we are—may give us new ways of being conscious, and some forms of human and animal intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much species-level intelligence at all.

This distinction is important because many in and around the AI community assume that consciousness is just a function of intelligence: that as machines become smarter, there will come a point at which they also become aware—at which the inner lights come on for them. Last March, OpenAI’s chief scientist Ilya Sutskever tweeted, “It may be that today’s large language models are slightly conscious.” Not long after, Google Research vice president Blaise Agüera y Arcas suggested that AI was making strides toward consciousness.

These assumptions and suggestions are poorly founded. It is by no means clear that a system will become conscious simply by virtue of becoming more intelligent. Indeed, the assumption that consciousness will just come along for the ride as AI gets smarter echoes a kind of human exceptionalism that we’d do well to see the back of. We think we’re intelligent, and we know we’re conscious, so we assume the two go together.

Recognizing the weakness of this assumption might seem comforting because there would be less reason to think that conscious machines are just around the corner. Unfortunately, things are not so simple. Even if AI by itself won’t do the trick, engineers might make deliberate attempts to build conscious machines—indeed, some already are.

Here, there is a lot more uncertainty. Although the last 30 years or so have witnessed major advances in the scientific understanding of consciousness, much remains unknown. My own view is that consciousness is intimately tied to our nature as living flesh-and-blood creatures. In this picture, being conscious is not the result of some complicated algorithm running on the wetware of the brain. It is an embodied phenomenon, rooted in the fundamental biological drive within living organisms to keep on living. If I’m right, the prospect of conscious AI remains reassuringly remote.

But I may be wrong, and other theories are a lot less restrictive, with some proposing that consciousness could arise in computers that process information in particular ways or are wired up according to specific architectures. If these theories are on track, conscious AI may be uncomfortably close—or perhaps even among us already…

…There are two main reasons why creating artificial consciousness, whether deliberately or inadvertently, is a very bad idea. The first is that it may endow AI systems with new powers and capabilities that could wreak havoc if not properly designed and regulated. Ensuring that AI systems act in ways compatible with well-specified human values is hard enough as things are. With conscious AI, it gets a lot more challenging, since these systems will have their own interests rather than just the interests humans give them.

The second reason is even more disquieting: The dawn of conscious machines will introduce vast new potential for suffering in the world, suffering we might not even be able to recognize, and which might flicker into existence in innumerable server farms at the click of a mouse. As the German philosopher Thomas Metzinger has noted, this would precipitate an unprecedented moral and ethical crisis because once something is conscious, we have a responsibility toward its welfare, especially if we created it. The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel…

…Systems like this will pass the so-called Garland Test, an idea which has passed into philosophy from Alex Garland’s perspicuous and beautiful film Ex Machina. This test reframes the classic Turing Test—usually considered a test of machine intelligence—as a test of what it would take for a human to feel that a machine is conscious, even given the knowledge that it is a machine. AI systems that pass the Garland test will subject us to a kind of cognitive illusion, much like simple visual illusions in which we cannot help seeing things in a particular way, even though we know the reality is different.

This will land society into dangerous new territory. By wrongly attributing humanlike consciousness to artificial systems, we’ll make unjustified assumptions about how they might behave. Our minds have not evolved to deal with situations like this. If we feel that a machine consciously cares about us, we might put more trust in it than we should. If we feel a machine truly believes what it says, we might be more inclined to take its views more seriously. If we expect an AI system to behave as a conscious human would—according to its apparent goals, desires, and beliefs—we may catastrophically fail to predict what it might do.

2. Breach of Trust: Decoding the Banking Crisis – Aswath Damodaran

Banks with sticky deposits, on which they pay low interest rates (because a high percentage are non-interest bearing) and big buffers on equity and Tier 1 capital, which also earn “fair interest rates”, given default risk, on the loans and investments they make, add more value and are usually safer than banks with depositor bases that are sensitive to risk perceptions and interest rates paid, while earning less than they should on loans and investments, given their default risk…

…  It is worth noting that all of the pain that was coming from writing down investment security holdings at banks, from the surge in interest rates, was clearly visible at the start of 2023, but there was no talk of a banking crisis. The implicit belief was that banks would be able to gradually realize or at least recognize these losses on the books, and use the time to fix the resulting drop in their equity and regulatory capital. That presumption that time was an ally was challenged by the implosion of Silicon Valley Bank in March 2023, where over the course of a week, a large bank effectively was wiped out of existence. To see why Silicon Valley Bank (SVB)  was particularly exposed, let us go back and look at it through the lens of good/bad banks from the last section:

  1. An Extraordinarily Sensitive Deposit Base: SVB was a bank designed for Silicon Valley (founders, VCs, employees) and it succeeded in that mission, with deposits almost doubling in 2021. That success created a deposit base that was anything but sticky, sensitive to rumors of trouble, with virally connected depositors drawn from a common pool and big depositors who were well positioned to move money quickly to other institutions. 
  2. Equity and Tier 1 capital that was overstated: While SVB’s equity and Tier 1 capital looked robust at the start of 2023, that look was deceptive, since it did not reflect the write-down in investment securities that was looming. While it shared this problem with other banks, SVB’s exposure was greater than most (see below for why) and explains its attempt to raise fresh equity to cover the impending shortfall.
  3. Loans: A large chunk of SVB’s loan portfolio was composed of venture debt, i.e., lending to pre-revenue and money-losing firms, backed by expectations of cash inflows from future rounds of VC funding. Since those expected VC rounds are conditional on these young companies being repriced at higher and higher prices over time, venture debt is extraordinarily sensitive to the pricing of young companies. In 2022, risk capital pulled back from markets, and as venture capital investments dried up and down rounds proliferated, venture debt suffered.
  4. Investment Securities: All banks put some of their money in investment securities, but SVB was an outlier in terms of how much of its assets (55-60%) were invested in treasury bonds and mortgage-backed securities. Part of the reason was the surge in deposits in 2021, as venture capitalists pulled back from investing and parked their money in SVB, and with little demand for venture debt, SVB had no choice but to invest in securities. That said, the choice to invest in long term securities was one that was made consciously by SVB, and driven by the interest rate environment in 2021 and early 2022, where short term rates were close to zero and long term rates were low (1.5-2%), but still higher than what SVB was paying its depositors. If there is an original sin in this story, it is in this duration mismatch, and it is this mismatch that caused SVB’s fall.

In the aftermath of SVB’s failure, Signature Bank was shut down in the weeks after and First Republic has followed, and the question of what these banks shared in common is one that has to be answered, not just for intellectual curiosity, because that answer will tell us whether other banks will follow. It should be noted that neither of these banks were as exposed as SVB to the macro shocks of 2022, but the nature of banking crises is that as banks fall, each subsequent failure will be at a stronger bank than the one that failed before.

  • With Signature Bank, the trigger for failure was a run on deposits, since more than 90% of deposits at the bank were uninsured, making those depositors far more sensitive to rumors about risk. The FDIC, in shuttering the bank, also pointed to “poor management” and failure to heed regulatory concerns, which clearly indicate that the bank had been on the FDIC’s watchlist for troubled banks.
  • With First Republic bank, a bank that has a large and lucrative wealth management arm, it was a dependence on those wealthy clients that increased their exposure. Wealthy depositors not only are more likely to have deposits that exceed $250,000, technically the cap on deposit insurance, but also have access to information on alternatives and the tools to move money quickly. Thus, in the first quarter of 2023, the bank reported a 41% drop in deposits, triggering forced sale of investment securities, and the realization of losses on those sales.

In short, it is the stickiness of deposits that seems to be the biggest indicator of banks getting into trouble, rather than the composition of their loan portfolios or even the nature of their investment securities, though having a higher percentage invested in long term securities leaves you more exposed, given the interest rate environment. That does make this a much more challenging problem for banking regulators, since deposit stickiness is not part of the regulatory overlay, at least at the moment. One of the outcomes of this crisis may be that regulators monitor information on deposits that let them make this judgment, including:

  1. Depositor Characteristics: As we noted earlier, depositor age and wealth can be factors that determine stickiness, with younger and wealthier depositors being less sticky than older and poorer depositors. At the risk of opening a Pandora’s box, depositors with more social media presence (Twitter, Facebook, LinkedIn) will be more prone to move their deposits in response to news and rumors than depositors without that presence.
  2. Deposit age: As in other businesses, a bank customer who has been a customer for longer is less likely to move his or her deposit, in response to fear, than one who became a customer recently. Perhaps, banks should follow subscriber/user based companies in creating deposit cohort tables, breaking deposits down based upon how long that customer has been with the bank, and the stickiness rate in each group.
  3. Deposit growth: In the SVB discussion, I noted that one reason that the bank was entrapped was because deposits almost doubled in 2021. Not only do very few banks have the capacity to double their loans, with due diligence on default risk, in a year, but these deposits, being recent and large, are also the least sticky deposits at the bank. In short, banks with faster growth in their deposit bases also are likely to have less sticky depositors.
  4. Deposit concentration: To the extent that the deposits of a bank are concentrated in a geographic region, it is more exposed to deposit runs than one that has a more geographically diverse deposit base. That would make regional bank deposits more sensitive than national bank deposits, and sector-focused banks (no matter what the sector) more exposed to deposit runs than banks that lend across businesses.

Some of this information is already collected at the bank level, but it may be time for bank regulators to work on measures of deposit stickiness that will then become part of the panel that they use to judge exposure to risk at banks…

… The conventional wisdom seems to be that big banks have gained at the expense of smaller banks, but the data is more ambiguous. I looked at the 641 publicly traded US banks, broken down by market capitalization at the start of 2023 into ten deciles and looked at the change in aggregate market cap within each decile. 

As you can see the biggest percentage declines in market cap are bunched more towards the bigger banks, with the biggest drops occurring in the eighth and ninth deciles of banks, not the smallest banks. After all, the highest profile failures so far in 2023 have been SVB, Signature Bank and First Republic Bank, all banks of significant size.

If my hypothesis about deposit stickiness is right, it is banks with the least sticky deposits that should have seen the biggest declines in market capitalization. My proxies for deposit stickiness are limited, given the data that I have access to, but I used deposit growth over the last five years (2017-2022) as my measure of stickiness (with higher deposit growth translating into less stickiness):

The results are surprisingly decisive, with the biggest market capitalization losses, in percentage terms, in banks that have seen the most growth in deposits in the last five years. To the extent that this is correlated with bank size (smaller banks should be more likely to see deposit growth), it is by no means conclusive evidence, but it is consistent with the argument that the stickiness of deposits is the key to unlocking this crisis.

3. Inside the Delirious Rise of ‘Superfake’ Handbags – Amy X. Wang

My plunge into the world of fantastically realistic counterfeit purses — known as “superfakes” to vexed fashion houses and I.P. lawyers, or “unclockable reps” to their enthusiastic buyers — began a couple of years earlier, in what I might characterize as a spontaneous fit of lunacy. It was early 2021 when, thrown into sensory overload by grisly pandemic headlines, I found my gaze drifting guiltily to an advertisement in the right margin of a news site, where the model Kaia Gerber arched her arms lovingly around a Celine Triomphe — a plain, itty-bitty rectangular prism that in no universe could possibly be worth, as further research informed me, $2,200.

I shut the tab, horrified. Having grown up a first-generation immigrant whose family’s idea of splurging was a monthly dinner at Pizza Hut, I refused to be the type of person who lusted over luxury handbags. I had always understood that these artifacts were not for me, in the way debutante balls or chartered Gulfstreams were not for me. But, days later and still mired in the quicksand of quarantine, I found myself cracking my laptop and Googling “buy Celine Triomphe cheap.” This led me to a Reddit community of replica enthusiasts, who traded details about “trusted sellers” capable of delivering a Chanel 2.55 or Loewe Puzzle or Hermès Birkin that promised to be indistinguishable from the original, and priced at a mere 5 percent or so of the M.S.R.P…

…Untangling the problem of duplication in the fashion industry is like trying to rewrap skeins of yarn. Designer houses spend billions fighting dupes, but even real Prada Cleos and Dior Book Totes are made with machines and templates — raising the question of what, exactly, is unique to an authentic bag. Is it simply a question of who gets to pocket the money? (Hermès recently mounted, and won, a trademark war against “MetaBirkin” NFTs.)…

…I spoke with Kelly, one such person, seeking to peek under the hood of the shadowy business. (“Kelly” is not her real name; I’m referring to her here by the English moniker that she uses on WhatsApp. I contacted more than 30 different superfake-bag-sellers before one agreed to an interview.) Five years ago, Kelly worked in real estate in Shanghai, but she got fed up with trekking to an office every day. Now she works from home in Guangzhou, often hammering out a deal for a Gucci Dionysus or Fendi Baguette on her phone with one hand, wrangling lunch for her 8-year-old daughter with the other. Kelly finds the whole business of luxury bags — the sumptuous leather, razor-straight heat stamps, hand stitches, precocious metal mazes of prancing sangles and clochettes and boucles and fermoirs — “way too fussy,” she tells me in Chinese. But the work-life balance is great. As a sales rep for replicas, Kelly makes up to 30,000 yuan, or about $4,300, a month, though she has heard of A-listers who net up to 200,000 yuan a month — which would work out to roughly $350,000 a year.

On a good day, Kelly can sell more than 30 gleaming Chloés and Yves Saint Laurents, to a client base of mostly American women. “If a bag can be recognized as fake,” she told me, “it’s not a worthwhile purchase for the customer, so I only sell bags that are high-quality but also enticingly affordable — $200 or $300 is the sweet spot.” Kelly keeps about 45 percent of each sale, out of which she pays for shipping, losses and other costs. The rest is wired to a network of manufacturers who divvy up proceeds to pay for overhead, materials and salaries. When a client agrees to order a bag from Kelly, she contacts a manufacturer, which arranges for a Birkin bag to roll out of the warehouse into an unmarked shipping box in a week or so.

In Guangzhou, where a vast majority of the world’s superfakes are thought to originate, experts have identified two main reasons behind the illicit goods’ lightning-fast new speeds: sophistication in bag-making technology and in the bag-makers themselves.

One such innovation in the latter is a disjointed, flat-string, hard-to-track supply chain. When the intellectual-property lawyer Harley Lewin was the subject of a New Yorker profile in 2007, he could often be found busting through hidden cellars on raids around the world. But increasingly, Lewin told me, “I’m sort of the guy in the spy novel who’s called ‘Control’ and sits in a room,” trying to sniff out “the bad guys” from screenshots of texts and D.M.s. Counterfeiting operations are no longer pyramid-shaped hierarchies with ever-higher bosses to roll: “Nowadays it’s a series of blocks, the financier and the designers and the manufacturers, and none of the blocks relate to each other,” Lewin explains. “So if you bust one block, odds are they can replace it in 10 minutes. The person you bust has very little information about who organizes what and where it goes.” Indeed, Kelly, even though she has sold every color variation of Louis Vuitton Neverfull under the sun, only handles bags in person on rare occasions to inspect quality. Sellers don’t stock inventory. They function as the consumer-facing marketing block, holding scant knowledge of how other blocks operate. Kelly just gets daily texts from a liaison at each outlet, letting her know of their output: “The factories won’t even tell us where they are.”

As for how the superfakes are achieving their unprecedented verisimilitude, Lewin, who has observed their factories from the inside, says it’s simply a combination of skillful artisanship and high-quality raw materials. Some superfake manufacturers travel to Italy to source from the same leather markets that the brands do; others buy the real bags to examine every stitch. Chinese authorities have little to no incentive to shut down these operations, given their contributions to local economies, the potential embarrassment to local ministers and the steady fraying of China’s political ties with the Western nations where savvy online buyers clamor for the goods. “They avoid taxes,” Lewin says. “The working conditions are terrible. But all of that goes to turning out a very high-quality fake at very low cost.”…

…Those whose business it is to verify luxury bags insist, at least publicly, that there’s always a “tell” to a superfake. At the RealReal, where designer handbags go through rounds of scrutiny, including X-rays and measuring fonts down to the millimeter, Thompson told me that “sometimes, an item can be too perfect, too exacting, so you’ll look at it and know something is up.” And, he added, touch and smell can be giveaways. Rachel Vaisman, the company’s vice president of merchandising operations, said the company will contact law-enforcement officials if it suspects a consignor is sending in items with the intent to defraud.

But one authenticator I spoke with confesses that it’s not always so clear-cut. The fakes “are getting so good, to the point that it comes down to inside etchings, or nine stitches instead of eight,” he told me. “Sometimes you really have no idea, and it becomes a time-consuming egg hunt, comparing photos on other websites and saying, ‘Does this hardware look like this one?’” (He asked to remain anonymous because he is not permitted to speak on behalf of his company.) He and his colleagues have their theories as to how the superfakes that come across their desks are so jaw-droppingly good: “We suspect it’s someone who maybe works at Chanel or Hermès who takes home real leathers. I think the really, really good ones have to be from people who work for the companies.” And every time a brand switches up its designs, as today’s fast-paced luxury houses are wont to do, authenticators find themselves in the dark again…

…A strange, complicated cloud of emotions engulfed me wherever I carried the bag. I contacted more sellers and bought more replicas, hoping to shake it loose. I toted a (rather fetching) $100 Gucci 1955 Horsebit rep through a vacation across Europe; I’ve worn the Triomphe to celebrity-flooded parties in Manhattan, finding myself preening under the approving, welcome-into-our-fold smiles of wealthy strangers. There is a smug superiority that comes with luxury bags — that’s sort of the point — but to my surprise, I found that this was even more the case with superfakes. Paradoxically, while there’s nothing more quotidian than a fake bag that comes out of a makeshift factory of nameless laborers studying how to replicate someone else’s idea, in another sense, there’s nothing more original.

While a wardrobe might reveal something of the wearer’s personality and emotion, a luxury handbag is a hollow basin, expressing nothing individualistic at all. Instead, a handbag communicates certain ineffable ideas: money, status, the ability to move around in the world. And so, if you believe that fashion is inherently all about artifice — consider wink-wink items like Maison Margiela’s Replica sneaker, or the mind-​boggling profits of LVMH’s mass-produced luxury items — then there is an argument to be made that the superfake handbag, blunt and upfront to the buyer about its trickery, is the most honest, unvarnished item of all.

I asked the writer Judith Thurman, whose sartorial insights I’ve always admired, about the name-brand handbag’s decades-long hold on women. Why do we yearn for very expensive sacks in the first place? Why do some buyers submit to thousand-dollar price hikes and risk bankruptcy for them? “It’s a kind of inclusive exclusiveness,” Thurman told me. “A handbag is a little treat, and it’s the only fashion item that is not sacrificial.” Clothes, with their unforgiving size tags and rigid shapes, can instill a cruel horror or disappointment in their wearers. Bags, meanwhile, dangle no shame, only delight. “There is an intangible sense when you are wearing something precious that makes you feel more precious yourself,” she theorizes. “And we all need — in this unbelievable age of cosmic insecurity — a little boost you can stick over your shoulder that makes you feel a bit more special than if you were wearing something that cost $24.99. It’s mass delusion, but the fashion business is about mass delusion. At what point does a mass delusion become a reality?”

4. Berkshire Hathaway – The World’s Greatest Serial Acquirer of Businesses – Eugene Ng

Warren Buffett and Charlie Munger were previously known to me as two of the greatest investors of all time through Berkshire Hathaway (“Berkshire”). But what became clearly evident to me after reading all 5,300+ pages of the Buffett Partnership Letters, Berkshire Shareholder letters, and AGM transcripts, is that they were not only great business builders, but also fantastic and disciplined risk managers. Through countless acquisitions over decades, Berkshire has become the world’s greatest serial acquirer of businesses.

Serial acquirers are companies that acquire wholly owned smaller companies to grow. After reinvesting, they use the surplus cash flows produced by each acquisition to buy even more companies, repeating the process and compounding shareholder value over a very long time. Including the acquisition of Berkshire itself back in 1964, we reckon Berkshire has acquired at least 80 wholly-owned insurance and non-insurance businesses over the last 57 years, and spent in excess of US$120bn on acquisitions over the last 20 years. Berkshire currently has 67 subsidiaries as of Apr 2023. In 2022, the operating businesses generated US$220bn of revenues and US$27bn of operating earnings before taxes, and the insurance business generated US$164bn of float.

In addition to the surplus cash flows from the operating businesses, Berkshire also uses the float of its insurance companies to invest in partial stakes of publicly listed companies worth US$350bn. This insurance float arises because customers pay premiums upfront, while claims are typically only paid much later. This allows Berkshire to invest far more in higher-yielding common stocks, rather than low-yielding bonds, than most typical insurers do. Coupled with a strong, disciplined underwriting process and prudent risk management and acquisitions, it provided them with an ever-growing insurance float to invest long-term at much higher rates of return than their competitors.

Over 57 years from 1965 to 2023, Berkshire has grown to become the 7th largest company in the US by revenues at US$302bn, and the 2nd largest company in the world by total shareholder equity (including banks) at US$472bn.

Berkshire has also grown its market capitalization to US$722bn (as of 28Apr23), generating ~20% p.a. CAGR shareholder returns for over 57 years from 1965 to 2022, beating the S&P 500’s ~10% p.a. hands down, placing it firmly in the “hall of fame”…

Below is what we think is our best interpretation of Berkshire Hathaway’s flywheel, which combines the disciplined, profitable, and well-run (1) insurance businesses (run by Ajit Jain) and (2) non-insurance operating businesses (run by Greg Abel) with a strong culture, letting solid managers run each business with strong autonomy in a decentralised format.

Warren Buffett and Charlie Munger are responsible for overall oversight and capital allocation, while Todd Combs and Ted Weschler are responsible for investing ~11%, or ~US$34bn, of the overall US$309bn equity investment portfolio under the insurance business.

It is this dual flywheel of Berkshire’s insurance and non-insurance businesses, with the insurance float and the surplus capital from operating profits, that allows Berkshire to keep (1) investing in partial ownership stakes of good companies at fair prices, and (2) acquiring durable, predictably profitable, wholly owned companies with able and honest management at the right price.

5. An Interview with Chip War Author Chris Miller –  Ben Thompson and Chris Miller

To me that was one of the most — I mean there was a lot of interesting parts — but that was one of the most interesting parts of the book was your discussion about the Soviet Union and their attempts to compete in the semiconductor industry. It’s always tough because this is the part where you’ve been immersed in it sort of your entire life, so it’s always hard to summarize. But what’s the big picture history and lesson from Russia, I should say USSR, and its attempts to compete with the US in particular?

CM: The puzzle to me was the following: we knew the Soviets could produce a lot of impressive technology because they did it during the early Cold War. From atomic weapons — which granted they stole some of the designs, but nevertheless, they were the second country in the world to test an atomic bomb — to satellites, they were the first in the world to go into space largely thanks to indigenous innovation, the first person in space, Yuri Gagarin. So in the 1950s the Soviets weren’t seen as technologically backwards, they were seen as, if anything, overtaking the United States, and that made sense because if you had to ask what are some of the key ingredients to technological success of a country, you’d say, well, you probably want a pretty well-educated workforce, Soviets had that. Capital investment, Soviets had that. You want to focus on the industry, Soviets had that. And so the puzzle to me was why, given all these clear ingredients that were present in the Soviet Union, plus the pressure of Cold War competition to produce the next best defense technology, why was it that the Soviets couldn’t produce computing technology basically at all, and the entire Cold War they were copying IBM computers? That was the puzzle I initially started out wanting to answer and there’s a number of different ways you can answer the question.

I think this is super interesting, it’s super relevant. So walk me through them — what was it that was fundamentally different about, to your point, putting a man in space versus building a semiconductor?

CM: I think the common answer in the Western literature is “Well, they were an essentially planned economy, or they were dictatorship or both, and those societies can’t innovate”. I think that just doesn’t fit the historical facts. In fact, they did a whole lot of innovation in certain spheres at certain times, but there’s nothing about dictatorships that make them non-innovative, they innovate for their own reasons. But I think the problems the Soviets face were the following: First they didn’t have a consumer economy, hardly at all.

Why did that matter?

CM: That mattered because from almost the earliest days, the chip industry in the US, the computer industry in the US, grew thanks to sales to civilian markets and sales to consumers. The first chips that were produced were deployed in government systems, NASA and the Defense Department. But by the end of the 1960s, a decade or so after the first chips had been produced, it was civilian sales that were driving the industry. Today it’s 97% of chips produced that go to civilian uses, and so if you don’t have a civilian market, you can’t scale, simple as that.

I think this fits in because if you’re trying to get a man into space, you’re trying to get one man into space one time. Whereas the entire economics of chips and of the tech industry generally is 100% about scale. You have to put such massive investment upfront, and then the cost of goods sold for a chip is basically zero, and so to justify and to get a return on that investment and to provide the space for iteration, you need that massive demand to make it all worth it. If you just try to do a single shot, it’s probably not going to work out.

CM: Yeah, that’s absolutely right. The second thing that I didn’t realize is that I was under the impression when I started that nuclear bombs were hard to make, but computers were easy to make because there were a few nuclear bombs in the world and a lot of computers, and actually it’s the exact opposite. Nuclear bombs are so easy to make, even the North Koreans can do it.

(laughing) I don’t think I have any new North Korean subscribers, so no problem with that statement.

CM: I’m safe, okay. Whereas actually it’s the things that are the most widely produced, like chips, that are the hardest ones to make because you’ve got to drive down the cost, you’ve got to scale down the components on them, and that is the most complex manufacturing we undertake. I hadn’t really thought that through and I think most of us haven’t really thought through that dynamic and as a result, it has us focusing on the wrong types of complexity and the wrong types of technology and we, I think too often, overestimate the complexity in things that are done once and underestimate the complexity involved in scaling…

Tell me about the contrast between the Japanese approach to chips versus the Soviet approach. Why was Japan so much more successful in entering this US-dominated industry relative to the Soviet Union?

CM: Well, the Japanese entered the chip industry not by trying to copy illegally, which the Soviets did, but by licensing technology. They were among the earliest licensees of the transistor after it was first produced, early licensees of the first integrated circuits, and they produced them better. The first chips began to be commercialized in the early ’60s, and just 15 years later, the late ’70s, Japanese firms by all accounts were producing at much higher levels of quality than US firms.

The complaint about dumping was never really quite right. People bought the Japanese chips because they failed much less and performed much better.

CM: That’s absolutely right. You had US CEOs at the time saying, “Well, we’ve got the real technology, we’re the most advanced in terms of this and that criteria”, but actually the technology that mattered again was the scaling. Japanese firms could scale with quality to a much greater degree. But that’s also what did them in, because they didn’t do a good job of managing their capability of scaling with market dynamics and they weren’t guided by profitability or guided by market share as their goal. So Japanese firms took over the market for DRAM chips, the type of memory chip that was the most prominent chip at the time, and never made any money. Kind of shockingly they dominated the market for a decade and hardly any of them ever posted a profit.

Well, I guess to just speak about Japan for a moment, because I think it’s interesting, first, why did South Korea and then also Micron in the US surpass Japan in memory, and second why did Japan never build any strength in logic? They peaked with memory and that was sort of it.

CM: So I think on the second question, Japanese firms did try to move into microprocessors at a time when they were still a niche good in the late ’70s and early ’80s, but they were doing so well in memory, or it seemed like they were doing so well in memory, that it was an Innovator’s Dilemma type situation. They had huge market share in memory, they had just defeated TI and Intel in the DRAM business, so why would you switch your business model to produce this low volume type of chip that seemed pretty niche? Whereas if you were Intel in the early ’80s you had no choice, you’d just been knocked out of your primary market.

It’s very underrated. Everyone wants to talk about that apocryphal, or maybe I guess it was real, meeting with Andy Grove and Gordon Moore where they’re like, “We need to get out of memory”. But it’s under-appreciated that this was not a brilliant flash of insight, this was accepting reality and probably accepting it a couple years too late.

CM: Yep, I think that’s right. I think the other benefit that the US ecosystem had writ large was that it was more responsive to new trends in the PC industry, and just the emergence of the PC itself is something that — could it have happened in Japan? I think you wouldn’t say it couldn’t have happened, but it seems like all the ingredients were much more prevalent in the US. A bigger software design ecosystem, Bill Gates being the critical representative, plus companies that were willing to innovate more rapidly to produce PCs. At the time there were a couple of Japanese firms that were good at productizing new ideas, Sony being the best example, but Sony was the exception, not the rule. What really struck me about the PC industry is that IBM created the first PC, but then they were quickly out-competed by all the clones that emerged, which drove down the cost and drove up the prevalence of PCs.

For someone that started out saying, “I assumed that the story was free markets just being better at innovation and that wasn’t the case”, I don’t know, that sounds like the case that you’re kind of making right here.

CM: (laughing) Yeah, in this case, I think it was. The Japanese did a very good job at scaling, but here is the counterfactual: Suppose that Japanese firms had been disciplined by a need to make a profit, they would’ve focused less exclusively on simple scaling to win market share. They would’ve at an earlier date tried to ask themselves, can we make money in DRAM? Some of them I think would’ve exited DRAM because they didn’t make any money there and tried to do something else. So actually I still go back to the structure of the Japanese corporate and financial system as to why their chip firms just for far too long focused on producing unprofitable chips…

One thing you’ve said about TSMC and ASML is that, “The way to understand them is less about them being manufacturers and more about them being integrators.” So, what do you mean by that?

CM: If you want to turn to ASML, I think they’re the best example of this. They’re a company that on the one hand, manufactures the most complex tools humans have ever made, hands down, and we can dig into them. On the other hand, they’ll openly tell you that their expertise is not in the tools themselves, but in bringing together such a complex supply chain. At first, when people from ASML told me this, I was shocked. I thought they would be bragging about their manufacturing capabilities, but they were more focused on their systems integration and the ability to manage suppliers all over the world. I admit, I started the project not taking the people who manage supply chains all that seriously, but I came to develop a lot more respect for them, because them doing their jobs well is an extraordinarily difficult thing to do and when you’ve got a supply chain that does involve thousands of suppliers, you’ve got to do it really, really well…

My view on what China should do geopolitically speaking if I were giving advice to Xi Jinping is — which I’m not, to be clear, I think that’s obvious — is the U.S. wants to continue to allow China to import tools and technologies as you noted, to build trailing edge chips. I think a big impetus for this is they don’t want to destroy the business of a lot of U.S. tech companies, where 30% of their sales were to China, and so it seems like the rational response for China would be to, and I think we’re seeing indicators this is happening, is to basically try to dominate that market.

In this case, use a willingness to be unprofitable as a weapon and actually do what we accused the Japanese of doing back in the day: flooding the market, driving all other trailing edge capacity out, which is basically TSMC and a bit of GlobalFoundries, but there’s bits and pieces still scattered around. Once you build a foundry, you might as well keep it, and then suddenly the actual chips that are used, to your point, in guided missiles, in cars, and in appliances would be totally dependent on China. That seems like where this is going, does it not?

CM: I think I agree completely, China’s going to build out a ton of capacity. I think there’s some uncertainty as to whether we’re going to have enough demand to meet that capacity build out or not, and I think there’s still uncertainty about what our demand will be for lagging edge chips in ten years’ time. People who are more bullish on demand say, “Look, every year, there’s on average twenty new chips added to a car.” No one knows how long this is going to go for, but it’s gone for a long time, etc.

And the chip that controls a window going up and down never actually has to get faster.

CM: Right, exactly. So, set aside the uncertainty about the demand picture. If China built out all this capacity, will non-Chinese firms go to Chinese foundries? I think five years ago, the answer was certainly yes. Today, it’s a lot less clear. And when you have Michael Dell on the front page of the Financial Times reporting that his customers are asking him to remove Chinese-made components from PC supply chains, that’s not the political environment that I think will send non-Chinese customers racing to take advantage of cheaper foundry capacity in China…

...So what’s your — as someone, again, you’re coming in from sort of a historical perspective, but having dived deeply into this — what do you think about the long-term Chinese prospects as far as basically rebuilding the leading edge capacity? This is a subject of much debate amongst people that are deep in the weeds about it, but as you’ve been able to talk to people all over the place, what’s your takeaway? How far behind are they? Can they even catch up?

CM: First off, what does catch up mean? I think that this is really a key question, because catch up doesn’t mean catching up to 2023 levels of technology in ten years time, then you’re five Moore’s Laws behind. So I think we’ve got to define catching up as reaching 2033 levels of technology in exactly ten years time, just as the rest of the world does. That seems to me like a really tall order, because the trend in the chip industry has not been catch up, it’s been fall behind. Everyone’s been falling behind the leading edge in every single node of the supply chain.

At basically every major lithography transition, another foundry falls off.

CM: Yep, exactly. So the Chinese government’s going to put a lot of money behind it, that’s going to help. There’s the necessity of it that Chinese firms face, that’s going to help. I think the Chinese government’s going to do more to wall off the domestic market, which will give some end market for Chinese firms that will help, at least in the short run, for Chinese chip makers. But at the end of the day, if Chinese firms are selling to 20% of global GDP and TSMC is selling to 80% of global GDP, I think I know who I’d bet on.

So what are the implications of this? I mean, again, as you noted, it doesn’t necessarily make a difference for conventional weapons, if we think about today. Is this where the question of AI systems and stuff comes to bear?

CM: Yeah, I think that’s right and right now, we’re seeing a shortage of GPUs, given all the generative AI boom underway. But I guess there’s a more complex long term question, which is — is compute a real point at which the US can try to constrain China’s AI capabilities? I think we’re seeing the US test out that strategy right now.

What’s your prognostication? I’m going to put you on the spot here.

CM: I think there are people who say, “Well if China can’t get access to the most advanced GPUs, aren’t they just going to build data centers that are four times as large or eight times as large or sixteen times as large with sixteen times as many chips, and therefore scale up that way?” You can’t scale down your transistors, you scale up your data centers, is basically the strategy, and then we have to figure out — what are the inefficiencies involved in scaling up your data center? I’m sure they’re pretty substantial.

Well, this is why it’s interesting, I was actually surprised — what the chip ban really focused on was memory interconnects, or interconnects, which is actually the limiting factor in pursuing that exact strategy.

CM: Yeah. I mean, I think you can’t accuse the US strategy of being incoherent, I think that they put their homework into it. Whether it’s going to work, we’ll see. I’ve got a lot of faith in the Chinese government’s willingness to brute force things when it comes to national security, so I think we should expect them to try really hard. But at some point, I go back to one of the more interesting anecdotes from the Soviet experience was an interview of a weapons designer in the Soviet Union, who was asked to explain why it was that he didn’t use the most advanced integrated circuits in his guidance computer in his missile. And his answer was, “Well, our computing industry, sometimes it works, sometimes it doesn’t. The state’s pretty bureaucratic. It’s just hard to work with, it’s not as easy.” The implication was it’s not as easy as buying from TSMC. So I do think if you get a situation where we’re throwing a lot of sand into the gears of the Chinese computing industry, the Chinese government’s going to respond with lots of cash in response and that’s kind of the race that we’re playing out right now, our sand in the gears versus Chinese government cash.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in ASML and Taiwan Semiconductor Manufacturing Company (TSMC). Holdings are subject to change at any time.

What Causes Stock Prices To Rise?

A company can be valued based on its future cash flows. Dividends, as cash flows to shareholders, should therefore drive stock valuations.

I recently wrote about why dividends are the ultimate driver of stock valuations. Legendary investor Warren Buffett once said: “Intrinsic value can be defined simply as the discounted value of the cash that can be taken out of a business during its remaining life.”

And dividends are ultimately the cash that is taken out of a business over time. As such, I consider the prospect of dividends to be the true driver of stock valuations.

But what if a company will not pay out a dividend in my lifetime? 

Dividends in the future

Even though we may never receive a dividend from a stock, we should still be able to make a gain through stock price appreciation.

Let’s say a company will only start paying out $100 a share in dividends 100 years from now, and that its dividend per share will remain stable from then on. An investor who wants to earn a 10% return will be willing to pay $1000 a share at that time, since a stable $100 annual dividend divided by a 10% required return is $1000.

But it is unlikely that anyone reading this will be alive 100 years from now. That doesn’t mean we can’t still make money from this stock.

In Year 99, an investor who wants to make a 10% return will be willing to pay $909 a share as they can sell it to another investor for $1000 in Year 100. That’s a 10% gain.

Similarly, knowing this, an investor will be willing to pay $826 in Year 98, since another buyer will likely be willing to pay $909 to buy it from them a year later. And on and on it goes.

Coming back to the present, an investor who wants to make a 10% annual return should be willing to pay about $0.07 a share, which is $1000 discounted back over 100 years at 10% a year. Even though this investor will likely never hold the shares for 100 years, in a well-oiled financial system, the investor should be able to sell the stock at a higher price over time.
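For readers who want to check the arithmetic, here is a minimal Python sketch of the example above, working backwards from the Year 100 perpetuity value. The $100 dividend, 10% required return, and 100-year wait are simply the illustrative assumptions used in this article, not figures for any real company.

# Fair price each year for a stock whose first $100 dividend arrives in
# Year 100 and stays flat forever, assuming a 10% required annual return.
REQUIRED_RETURN = 0.10
DIVIDEND = 100.0
FIRST_DIVIDEND_YEAR = 100

# Value in Year 100: a level perpetuity of $100 discounted at 10% = $1,000.
fair_price = {FIRST_DIVIDEND_YEAR: DIVIDEND / REQUIRED_RETURN}

# Each earlier buyer pays next year's price discounted by the required return.
for year in range(FIRST_DIVIDEND_YEAR - 1, -1, -1):
    fair_price[year] = fair_price[year + 1] / (1 + REQUIRED_RETURN)

print(f"Year 100: ${fair_price[100]:,.2f}")  # $1,000.00
print(f"Year 99:  ${fair_price[99]:,.2f}")   # about $909.09
print(f"Year 98:  ${fair_price[98]:,.2f}")   # about $826.45
print(f"Today:    ${fair_price[0]:.4f}")     # about $0.0726, i.e. roughly $0.07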

But be warned

In the above example, I assumed that financial markets work smoothly and that investors’ required rate of return remains constant at 10%. I also assumed that the dividend trajectory of the company is known. But reality is seldom like this.

The required rate of return may change depending on the risk-free rate, impacting what people will pay for the stock at different points in time. In addition, uncertainty about the business may also lead to stock price fluctuations. Furthermore, there may even be mispricings because of misinformation or simply the irrational behaviour of buyers and sellers of the stock. All of these things can lead to wildly fluctuating stock prices.
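As a rough illustration of how much the required rate of return alone matters, the short sketch below reprices the same hypothetical stock from the earlier example under a few different required returns; the figures are illustrative assumptions, not forecasts.

# Today's fair price for the same stock (first $100 dividend in Year 100,
# stable thereafter) under different required rates of return.
DIVIDEND = 100.0
FIRST_DIVIDEND_YEAR = 100

for required_return in (0.08, 0.10, 0.12):
    perpetuity_value = DIVIDEND / required_return  # value of the stock in Year 100
    price_today = perpetuity_value / (1 + required_return) ** FIRST_DIVIDEND_YEAR
    print(f"Required return {required_return:.0%}: about ${price_today:.4f} per share today")

A shift of just two percentage points moves today’s fair price from roughly $0.57 a share (at 8%) to roughly $0.01 a share (at 12%), which is why changes in the discount rate alone can swing prices dramatically when the cash flows lie far in the future.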

So even if you do end up being correct about the company’s future dividend per share, the valuation trajectory you thought the company would follow may end up well off-course for long periods. The market may also demand a rate of return different from yours, leading to the market’s “intrinsic value” of the stock differing from your own.

The picture below is a sketch by me (sorry I’m not an artist) that illustrates what may happen:

The smooth line is what your “intrinsic value” of the company looks like over time. But the zig-zag line is what may actually happen.

Bottom line

To recap, capital gains can be made even if a company doesn’t pay a dividend during our lifetime. But we have to be aware that those capital gains may not come smoothly.

Shareholders, even if they are right about a stock’s future dividend profile, must be able to hold the stock through volatile periods until the stock price eventually rises above, or at least reaches, their estimate of intrinsic value, in order to earn their required rate of return.

You may also have noticed from the chart that occasionally stocks can go above your “intrinsic value” line (whatever rate of return you are using). If you bought in at these times, you are unlikely to make a return that meets your required rate.

To avoid this, we need to buy in at the right valuation and be patient enough to wait for market sentiment to converge with our intrinsic value over time, so that the profit we make meets our expectations. Patience and discipline are, hence, key to investment success. And of course, we also need to predict the dividend trajectory of the company somewhat accurately.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any stocks mentioned. Holdings are subject to change at any time.