What We’re Reading (Week Ending 08 March 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 08 March 2026:

1. Iran: The Day After – Tomas Pueyo

Persia’s Shah used to be aligned with the West: He modernized the country, invited foreign investments, built a lot of infrastructure, improved literacy and healthcare…

The radical Islamists didn’t like this modernization, so they allied with the local Left to gain power, and succeeded in 1979. This means the entire legitimacy of the regime is based on opposing the US, its allies, and its values.

This would not have been a problem if Iran had limited itself to hating the US and Israel. Instead, they’ve threatened to attack and eliminate them for the last 47 years, and they haven’t limited themselves to empty threats. They’ve developed ballistic missile and nuclear weapon programs to be able to obliterate Israel, and maybe attack the US too.

For the last few decades, the US and Israel have tried to manage the situation, but the closer Iran is to getting nuclear weapons, the less they can tolerate it. Until recently, they were forced to tolerate it because Iran was quite strong, with proxies in Palestine, Lebanon, Syria, Iraq, and Yemen. But after October 7th 2023, Israel has systematically eliminated most of them, so it and the US saw an opening last year to weaken Iran and its nuclear program, and took it. But that was just a delay. The truth is they will only be safe when this regime falls.

The problem is that achieving regime change is going to be very difficult…

…The recent strikes have killed the existing Supreme Leader, but there’s a long chain of command to replace him and any other leader killed through strikes. Then, there’s Khamenei’s Bayt, a group of 4,000 close employees who manage Khamenei’s affairs and power, and work as a shadow government mirroring the official one…

…Through this body, Khamenei controlled the BMEE and AQR, huge conglomerates of over 200 companies with interests in real estate, construction, industry, mining, energy, power, food, agriculture, tourism, transportation, IT, media…

Khamenei’s Bayt was also able to infiltrate the military and the IRGC (Islamic Revolutionary Guard Corps), a kind of Praetorian Guard with over 125,000 members sourced from the Basij militia, a bigger group of ~400,000 poor, Shia radical volunteers (and 25 million members!!) who police the country on behalf of the government…

…45% of the Iranian government’s income comes from oil. If the US and Israel prevent Iran from selling its oil, its income will dry up, and it won’t be able to pay salaries. My guess is the Iranian regime will prioritize IRGC, Basij, and military salaries, but even then, losing 50% of your income can’t be easy. Unfortunately, this takes some time to bite, as the government will use other resources to pay its forces for as long as possible, and people can sometimes withstand some time without a salary…

…The vast majority of Iranians are tired of their government.

They are now celebrating the bombings on the streets.

The first consequence is oil. Iran has closed the Strait of Hormuz, many oil pumping stations and refineries have been hit in the area, and oil has stopped flowing. This will put pressure across the world too as oil prices increase…

…Saudi Arabia can ramp up supply, and employ an east-west pipeline that should be able to bypass the strait. It won’t be enough to counter the entire drop in supply, but it might end up benefiting Saudi Arabia through higher oil prices.

Meanwhile, the biggest consumer of Iranian oil is China, but it has historically high oil and gas reserves, so it might be able to withstand the war if it’s short enough…

…Four years ago, China had collected anti-US friends in Russia, Iran and its proxies in Syria, Lebanon, Hamas and the Houthis, Venezuela, Cuba, and a host of satellites considering whether to join them or not. Israel took care of Iran’s acolytes. The US neutralized Venezuela, Cuba is isolated and cut off from oil, Russia is bogged down in Ukraine, and Iran is at risk of falling. Virtually every friend that China has cultivated over the last few years is crumbling.

Not only that, but China’s standing as a provider of technology and military power is completely exposed. If China won’t come to the rescue of its allies, and its weapons can’t stop the US, who will want to side with them?

Then there’s the oil. Venezuela and Iran together accounted for 17% of China’s oil imports.

This is a bad day for China…

…Iran has 90M people, nearly twice South Korea’s population. 42% of them are under 25, and they have a 98% literacy rate. The country birthed one of the oldest civilizations on Earth, the first empire, and has seen a succession of successful ones through the ages. Its diaspora in the world—especially in the US—is educated, rich, and powerful. It could fund and provide the leadership for a renaissance in the country.

But only if the current regime falls.

2. A Munger PA Investment – Joe Raymond

The Alfred C. Munger Foundation (named for Charlie’s father) sold 10,000 shares of Black Hills Corporation (BKH) for $23 each in June 2009, resulting in a short-term gain of 29%…

…A reasonable assumption based on this filing is that Charlie purchased this specific lot of 10,000 shares for $18 apiece in early 2009 and sold in June 2009 around $23.

He could have been buying the stock before that and holding shares after.

The only thing we know with reasonable certainty is that Charlie thought Black Hills was a good buy in 2009 at $18 per share…

…Black Hills is a utility company based in South Dakota.

It was formed in 1941 through a combination of several existing utility companies serving the Black Hills region. The earliest predecessor traces its roots back to 1883…

…Black Hills could be described as a decent and predictable business in the years leading up to Charlie’s purchase. ROE was in the low double digits and book value per share growth (adding back dividends) averaged 11% from 2002 to 2008.

Simple, clean, predictable, decent quality…

…Black Hills earned $105 million in 2008 ($2.75 per share). It paid $1.40 of dividends that year and finished the year with book value per share of $27.19…

…I think the thesis here was pretty simple.

A durable, safe business that earns double digits on equity shouldn’t trade for 66% of book value.

The crashing economy wasn’t going to kill the utility business. People still needed to turn their lights on and fire up the stove…

…BKH’s average price three years later in 2012 was $33.66 per share, good for a return of 98% (25% CAGR) before dividends…
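For readers who want to trace the arithmetic, here is a minimal sketch using only the figures quoted above (the $18 purchase price is the author's estimate from the filing; the 98% return is his, based on the 2012 average price):

```python
# Back-of-envelope checks on the Black Hills figures quoted above.
purchase_price = 18.00        # estimated early-2009 purchase price
book_value_per_share = 27.19  # end-2008 book value per share

# Price-to-book at purchase: ~0.66, i.e. the "66% of book value" above.
print(f"P/B at purchase: {purchase_price / book_value_per_share:.2f}")

# Converting the quoted 98% three-year return into an annualized rate.
total_return = 0.98
cagr = (1 + total_return) ** (1 / 3) - 1
print(f"CAGR: {cagr:.1%}")  # ~25.6%, matching the quoted ~25% CAGR
```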

…The Black Hills case isn’t terribly exciting, but I do find it interesting and useful.

If I had to nail it down to one simple idea it would be this:

Buying an adequately capitalized business that should earn at least a high-single-digit return on its common equity, at a substantial discount to book value, often works very well over short- and medium-term time frames.

3. Anthropic’s AI tool Claude central to U.S. campaign in Iran, amid a bitter feud – Tara Copp, Elizabeth Dwoskin, and Ian Duncan

As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people. The pairing of Maven and Claude has created a tool that is speeding the pace of the campaign, reducing Iran’s ability to counterstrike and turning weeks-long battle planning into real-time operations, said one of the people. The AI tools also evaluate a strike after it is initiated, the person said.

Claude has also been used in countering terror plots and in the raid that captured Venezuelan president Nicolás Maduro. But this is the first time it has been used in major war operations, according to two of the people…

…“It is notable that we’re already at the point where AI has gone from hypothetical to supporting real-world operations being conducted today,” said Paul Scharre, executive vice president at the Center for a New American Security, and who has written about AI in warfare. “The key paradigm shift is that AI enables the U.S. military to develop targeting packages at machine speed rather than human speed.”

The downsides, he said, are “AI gets it wrong. … We need humans to check the output of generative AI when the stakes are life and death.”

The Pentagon began to integrate Anthropic’s Claude chatbot into Maven in late 2024, according to public announcements. The system has been used to generate proposed targets, to track logistics and provide summaries of intelligence coming in from the field. The Trump administration has vastly expanded the use of Maven into many other parts of the military, with over 20,000 military personnel using it as of last May…

…Ben Van Roo, the CEO and cofounder of Legion Intelligence, a defense software startup, said that in his work over the last two and a half years integrating generative AI into software systems at the Department of War, “the baseline use case is chat and advanced search functions — essentially summarizing information.”

It’s not highly integrated into weapons or mission-critical systems, he said. He added that he wasn’t aware of its use in Iran, but wondered how it built on existing software that is already able to prioritize targets.

4. The Coase Conjecture in AI Inference Markets – Soren Larson

In 1972 Coase posed a simple question: If a monopolist owns all the land––assumed to be homogeneous in kind and quality––in the world, at what price does he sell it?…

…Coase’s argument is interesting and simple. Normally a monopolist would set quantity sold where marginal revenue equals marginal cost. For convenience, let’s say marginal cost is zero.

Once the monopolist land owner has sold a bit of land, he sees the remaining land is also still available, but not monetized. Maybe he should sell a bit more––it’d generate pure profit! To do that, however, he’d have to lower the price to the level the remaining demand is willing to pay.

Doing this annoys the original buyers.

The land is now worth less than what they paid. Eventually, however, the market catches on. Candidate buyers know the monopolist can’t resist selling more land (marginal cost of selling is zero!) and so they wait.

While the monopolist technically has no competitors, he ends up with one he didn’t expect––his future self. In situations like this, the market can guess a monopolist’s future behavior, so it holds out waiting for the “future self” monopolist to depress his own prices…
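To make the mechanism concrete, here is a deliberately simplified sketch: a zero-marginal-cost monopolist facing myopic buyers whose valuations are uniform on [0, 1] (assumptions of mine for illustration; Coase's full argument has forward-looking buyers, which collapses prices even faster):

```python
# Each period the monopolist sets the myopically optimal price for
# whoever hasn't bought yet, then can't resist coming back to sell
# to the remaining, lower-valuation buyers.
top = 1.0  # highest valuation still unserved
for period in range(1, 6):
    # Against residual demand uniform on [0, top], revenue p * (top - p)
    # is maximized at p = top / 2.
    price = top / 2
    print(f"period {period}: price = {price:.3f}")
    top = price  # everyone valuing the good above the price has bought
# Prices halve every period and head toward marginal cost (zero).
# Forward-looking buyers anticipate this and wait, so even the opening
# price collapses: the monopolist competes with his future self.
```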

…At first glance, Coase seems to apply directly: the monopolist can’t resist selling more inference, buyers anticipate this, and prices unravel. It could thus appear that frontier labs can’t sustain monopoly prices, because they can’t resist selling more and more inference at what end up being lower prices.

This, of course, is incomplete in that every inference customer can choose to buy inference from cheaper open source models. It turns out the existence of open-source alternatives protects the monopolist’s pricing power by giving customers a reason to exit the frontier market rather than wait for discounts…

…In cases where buyers have an Outside Option––where they can defect from the monopolist’s market and buy some alternative––the Coasian monopolist unraveling doesn’t happen. The monopolist can sustain the monopoly price indefinitely.

Empirically, this appears to be happening in the inference market…

…Effectively, the outside option is a self-selection device that relieves the monopolist from price-sensitive waiters who’d pressure prices downward over time. The monopolist loses some customers but gets to keep pricing power. This is broadly what we see today…

…There are clear extensions to this setting in inference markets. Suppose you’re considering developing new software using AI: for you, waiting for Anthropic to lower prices could prove costly. A competitor who pays full price today could lock in customers before you enter the market. This dynamic is likely what explains today’s inference market structure: buyers would prefer to pay full price or defect to Minimax M2.5 or GLM 4.7 today than wait and let competitors eat their lunch.

The other extension, of course, is that Outside Options keep getting better. Open source models are improving every quarter: A buyer who defects today to a mediocre alternative might have waited for a better one in a quarter––returning us to the original Coase setting…

…Suppose now that the monopolist wins on all counts: open source improvement is slow enough that buyers don’t bother to wait. Open-source capability might even plateau. The Board and Pycia result holds and the monopolist charges its optimal price at equilibrium.

Is our beloved monopolist now safe?

So far we’ve only discussed pricing power, but what about market capture? Even if the monopolist preserves its pricing power, it could be that so much of the market defects to the Outside Option that pricing power is practically irrelevant.

Consider the buyer’s problem. The inference buyer only pays the monopolist’s pricing premium if the frontier model offers enough additional value over the open-source alternative to justify the price. When open source closes the gap, it reduces the collection of buyers for whom the frontier premium is worth the price. These two dynamics compound: a shrinking price (reflecting a lower marginal benefit of frontier vs. the Outside Option) multiplied by a shrinking customer base means the monopolist’s total revenue erodes faster than the capability gap closes.
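To see how the two dynamics compound numerically, here is a toy model (the uniform-valuation and flat open-source-value assumptions are mine for illustration, not from the article):

```python
import numpy as np

# Buyers value the frontier model at v ~ Uniform[0, 1]; a free
# open-source alternative is worth a flat q to every buyer. A buyer
# pays premium p for frontier only if v - p >= q.
def best_premium(q, n=10_000):
    v = np.linspace(0, 1, n)                  # buyer valuations
    candidates = np.linspace(0.01, 1.0, 200)  # possible premiums
    revenue = [(p, p * np.mean(v - p >= q)) for p in candidates]
    return max(revenue, key=lambda t: t[1])   # (premium, revenue per buyer)

for q in [0.0, 0.25, 0.5, 0.75]:
    p, r = best_premium(q)
    print(f"open-source quality {q:.2f}: premium={p:.2f}, revenue={r:.3f}")
# Both the optimal premium and the buying population shrink in
# proportion to the gap (1 - q), so revenue falls with the SQUARE of
# the gap: it erodes faster than the capability gap closes.
```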

Of course, this argument depends on inference buyers actually connecting their buying decisions to the value delivered.

The market may not be doing this today––many buyers prefer to build Tool Shaped Objects instead. In fairness, model capabilities are jagged, and it’s a reasonable strategy for firms to keep buying frontier, irrespective of the underlying value proposition, while the technology matures. On the other hand, as the technology matures and firms begin to connect their inference consumption to value delivered, demand shifts from “just buy the best” to “maximize margins” or “buy what’s worth paying for.” In this world, the monopolist’s value proposition reduces to its incremental value over the Outside Option. And that shrinks even as open-source improves…

…Board and Pycia explain why margins are high: outside options remove the price sensitivity of buyers. High margins are an artifact of the Coase selection mechanism, not evidence of a durable business.

The labs clearly can charge high margins, and are doing so today. That’s not the question. The question is whether they will still be charging high margins in three years.

If open source keeps closing the gap, the answer from Board and Pycia––and from Ronald Coase––is probably not.

5. Biggest AI Prediction & Why I’m Allocating $200,000 to it – ContraTurtle

I categorize the AI stack into six levels:

  • Level Zero: Energy (GE Vernova, Cameco Corp, Constellation Energy, etc.)
  • Level One: Chips (TSMC, Nvidia, AMD, ASML, Broadcom, etc.)
  • Level Two: Infrastructure & Data Centre (Equinix, Arista Networks, Vertiv, Amazon, Google, Microsoft, etc.)
  • Level Three: AI Foundation Model Companies (OpenAI, Anthropic, Google DeepMind, Mistral, etc.)
  • Level Four: AI Software Infrastructure (Amazon Web Services, Google Cloud Services, Microsoft Azure, Palantir, Snowflake, Databricks, etc.) – Enterprise platforms enabling AI deployment, orchestration, and data pipelines
  • Level Five: AI Applications, Apps and Services (Meta, Google, Microsoft, Amazon, ServiceNow, Shopify, Axon, Netflix, etc.) – Companies delivering end user value and capturing economic surplus from AI optimisation

I will be focusing on Level Five in this article because this is where economic validation happens.

You can have:

  • The most advanced GPUs
  • The cheapest energy
  • The largest data centres
  • The most powerful foundation models

None of it matters if end users do not generate ROI that justifies capex deployed upstream.

Level 5 determines whether the entire AI stack earns an adequate return on capital.

Over the long term, the bulk of economic surplus accrues to the layer closest to the customer. Historically in technology cycles, infrastructure enables value creation, but applications capture pricing power.

This layer is still early…

…But there is one use case where AI ROI is already direct, measurable, and immediate – and that is advertising.

Let me explain.

Ads share two structural traits with coding (a use case that has shown the most promise in enterprise):

  • Low cost of failure when the model hallucinates, yet high ROI
  • Built-in verification mechanisms

In coding, hallucinated outputs are caught through testing frameworks. Unit tests, integration tests, and runtime checks validate whether the generated code works. If it fails, it does not ship.

Advertising works similarly.

An advertiser can generate five variations of an AI-created image, headline, or video and deploy them simultaneously. Performance is verified empirically through A/B testing across metrics such as:

  • Click-through rate
  • Conversion rate
  • Return on ad spend

Poor-performing creatives are automatically filtered out by the market. Strong performers scale.

Advertising is therefore a near-perfect commercial application of probabilistic AI.
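A toy sketch of that verification loop (the creative names and conversion rates are made up for illustration):

```python
import random

# Five hypothetical AI-generated ad variants, each with a "true"
# conversion rate the advertiser can't observe directly.
creatives = {"v1": 0.010, "v2": 0.022, "v3": 0.008, "v4": 0.015, "v5": 0.030}

def ab_test(variants, impressions_each=10_000):
    """Serve each variant equally, measure empirically, keep the top 2."""
    measured = {}
    for name, true_rate in variants.items():
        conversions = sum(random.random() < true_rate
                          for _ in range(impressions_each))
        measured[name] = conversions / impressions_each
    cutoff = sorted(measured.values(), reverse=True)[1]
    return {n: r for n, r in measured.items() if r >= cutoff}

# Poor performers are filtered out by the market; winners scale.
print(ab_test(creatives))
```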


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon, ASML, Meta Platforms, Microsoft, Netflix, Shopify, and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 01 March 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 01 March 2026:

1. OpenAI Boosts Revenue Forecasts, Predicts $111 Billion More Cash Burn Through 2030 – Sri Muppidi and Stephanie Palazzolo

As revenues climb, rising computing costs will weigh on OpenAI’s bottom line. Last year, the company burned $8 billion in cash, about $500 million less than it forecast in the summer. However, the company expects to burn $25 billion this year and $57 billion next year, about $30 billion more in total than previously predicted.

The company still expects to turn cash flow positive in 2030, when it expects to generate nearly $40 billion in cash…

…OpenAI has told investors the costs of running its AI models, a process known as inference, quadrupled in 2025. As a result, the company’s adjusted gross margin—defined as revenue minus the costs of inference—fell to 33% from 40% the year prior. That’s lower than the gross margin expectations of 46% it had set for itself for 2025. It’s also below half the 70%-plus gross margins of best-in-class software companies.

OpenAI has lowered its gross margin forecasts for the next five years, as its inference costs increase. In that period, the measure will range between 52% and 67%, according to the forecasts; previously the company had expected margins to hit 70% by 2029…

…OpenAI’s revenue more than tripled last year to $13.1 billion, $100 million more than its prior projection.

The new forecasts show OpenAI now expects revenue to rise to $30 billion this year and about $62 billion next year, slightly higher than prior forecasts, with its ChatGPT consumer business the largest driver…

…Last year, OpenAI spent more than $8 billion on the costs of running its AI models for its users, with roughly $4.5 billion on inference for paying users. Its inference costs are expected to rise to roughly $14 billion this year and $26 billion next year, or about $8 billion more in total than was earlier predicted.

The company expects to spend even more on computing costs to train its models. Last year, OpenAI spent $8.3 billion, about a billion less than it expected from its summer forecast. It plans to increase its training costs to $32 billion this year and $65 billion next year, or about $44 billion more than previously expected. These training costs add up, totaling nearly $440 billion through 2030.
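To make the margin definition concrete, here is the arithmetic implied by the quoted figures (the 2025 inference number is approximate, since the article only says “more than $8 billion”):

```python
# Adjusted gross margin as defined above: (revenue - inference) / revenue.
def adjusted_gross_margin(revenue_b, inference_b):
    return (revenue_b - inference_b) / revenue_b

# 2025: $13.1B revenue at a ~33% margin implies roughly $8.8B of
# inference spend, consistent with "more than $8 billion" above.
implied_inference = 13.1 * (1 - 0.33)
print(f"implied 2025 inference spend: ${implied_inference:.1f}B")

# Forecast check: $30B revenue and ~$14B inference this year gives ~53%,
# inside the 52%-67% range the company now guides to.
print(f"this year: {adjusted_gross_margin(30, 14):.0%}")
```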

2. Software bear case push back (and the real risk that I see) – Drew Cohen

There is a lot of talk of the competitive pressures on SaaS companies, but what about the AI model businesses?

I think the key thing to remember is that the AI models have their own competition and they are all fighting for market share right now.

Partnering with existing incumbents is an easy way for them to win distribution…

…Users of these SaaS companies are already becoming a source of revenue for the AI companies. This greatly reduces the benefit of creating a product AND business support to specifically go after each vertical…

…I think in some specific cases there is a risk of internal IT departments creating their own software, but I don’t think that will be standard practice. We are already seeing that the AI companies themselves all use a variety of software vendors…

…The transition over the past decade has been for companies to outsource server maintenance to the cloud because they can’t run it as efficiently or introduce new features as quickly. It doesn’t make sense for them to run this internally, just as they often outsource facility maintenance. Unless a business gets a benefit from maintaining its own software (which I can’t see), it will want to outsource this…

…I think what AI really does is allow software companies to enter new verticals adjacent to theirs, which increases competition—I don’t think the competition is going to come directly from the AI companies though.

This is similar to the newspaper industry 20 years ago. The increase in competition didn’t come from “the internet”, but rather what the internet enabled, which was many new ways to get news.

The other risk is pricing pressure and the seat model collapsing. I think as long as the value these companies give their customers is as good as, or better than, before, they will be able to manage this transition.

3. A Level Headed Look at State of Software – DB

Business software as an industry is small in China and India because labor is a direct competitor to packaged software. Historically, in these lower-cost labor markets with exceptional technical talent, DIY has been the go-to solution. Most Western company leaders would be shocked to find that technically savvy Asian tech companies are able to build in-house not only their own business applications, but even databases, BI, and infrastructure technology.

Is this the direction the world is headed? When token costs decrease 100x, it’s tempting to think that the math becomes:

Tokens to generate code + 2 SWEs < annual cost of a CRM license
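A back-of-envelope version of that tempting math, with entirely made-up numbers (and, as the author argues next, an incomplete accounting):

```python
# Hypothetical figures purely for illustration.
token_cost_to_generate_crm = 50_000  # one-off LLM spend on code generation
swe_annual_cost = 150_000            # per in-house engineer, per year
crm_license_annual = 100 * 500 * 12  # e.g. 100 seats at $500/month

diy_annual = token_cost_to_generate_crm + 2 * swe_annual_cost
print(f"DIY: ${diy_annual:,} vs. CRM license: ${crm_license_annual:,}")
# DIY "wins" on this arithmetic only if you ignore SLAs, security
# liability, and management bandwidth, which is the author's point.
```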

But in reality, the decision is a trade-off of management bandwidth. If a vendor CRM breaks, a customer can expect an SLA for it to be brought back up. If there is a security vulnerability, that’s the vendor’s responsibility. In fact, the extreme examples of DIY are only found in the most sophisticated technology companies in Asia. I fully expect AI Labs to experiment with DIY everything, but with IT at ~5% of US GDP, I would consider this an edge case…

…I think the barrier to agentic success today is primarily that companies simply don’t know how to implement the tools available. This is an area where AI Labs will find collaboration with traditional software businesses to be in their best interest.

This is a long way of saying AI Labs will be selective about which first-party applications they themselves will go build. But they have a distinct advantage in that they know the billions of questions being prompted each day. Personal health, personal finance, coding, improving writing skills/education, etc. are at the top of that list. And if I were to bet, the focus of first-party apps will be in these areas…

…Yes, agents will be transformational, but I’d bet a good portion of the agents will come from the boring old companies you already know today

Oh wait, there’s more to a business process than code

The reality of a regulated industry is that the value proposition is the sheer volume of dirty work that needs to be done in the background to present a customer with something simple. While it may be true that a payment portal can be generated in hours vs. months now, the moat of a payment company is obtaining bank licenses and putting in place an AML/KYC program with adequate controls for SARs and fraud detection (just ask CZ at Binance). The same can be said for healthcare, telcos, and a variety of industries. Not only is there no value in DIY, the risk of doing so far outweighs the reward…

…Several things can be true at once:

  • Software companies need to be able to adapt, and some will do it exceptionally well while others won’t
  • New companies will be created
  • Pricing may be compressed
  • Most software companies are too bloated

At the end of the day, what the capital market is doing is applying a higher discount rate to the likelihood of the previously forecasted cash flows arriving over the interim 10 years, and to the terminal value after those 10 years.

4. Blue Owl Fouls the Nest for AI Financing – Ken Brown

Private market lender Blue Owl is living through the downturn part. The struggles of the firm, which has been a big funder of the AI build-out, could affect the flow of capital into data center developers and cloud providers that need to raise cash…

…Last year, it made at least $5.6 billion of equity investments into data centers and raised $64 billion in debt for those projects, according to internal figures…

…The firm’s effort to manage rising redemptions in one of its smaller funds backfired and appears to have tainted the whole firm. Private lenders live and die on their access to capital and deal flow, both of which are at risk of drying up for Blue Owl.

The firm’s troubles are significant because it sits at the nexus of two important funding sources for the AI build-out—private capital and individual investors. If worries about Blue Owl spread, some projects will be funded at a higher cost—or might not get funded at all…

…Last year, a $1.6 billion private fund that it runs for small investors was facing redemption requests. The firm decided to address the issue by merging the fund with a $16.5 billion publicly traded fund it also runs.

The problem was, the bigger fund was trading at a 20% discount to the value Blue Owl was placing on its assets. The smaller fund, because it wasn’t publicly traded, was priced at the value of its assets. That meant investors in the smaller fund would see the value of their investments fall by 20% when the deal got done. That didn’t make them happy…
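The markdown those investors faced is just NAV-versus-market-price arithmetic; a minimal sketch with a hypothetical stake size:

```python
# The private fund was priced at the stated value of its assets (NAV);
# the listed fund traded 20% below the NAV Blue Owl placed on its assets.
stake_at_nav = 100_000   # hypothetical investor stake in the private fund
listed_discount = 0.20   # quoted discount on the publicly traded fund

# Swapping NAV-priced units for listed units reprices the stake at the
# market's discounted valuation.
post_merger_value = stake_at_nav * (1 - listed_discount)
print(f"${stake_at_nav:,} at NAV -> ${post_merger_value:,.0f} at market")
```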

…Blue Owl called off the merger, but the damage was done. The deal drew attention to the perennial problem of valuing private assets…

…Fast-forward to last week, when Blue Owl came up with another flawed solution to its problems. It would sell $1.4 billion of assets to three big institutional investors and to an insurance company that it has a deep financial relationship with. That money would fund investor redemptions.

One problem is that when a fund with illiquid holdings sells assets, investors assume it is selling the highest-quality and most liquid ones, meaning what’s left will be harder to sell. That makes further redemptions tougher and gives investors a signal to get out…

…Another issue: Blue Owl selling assets into the insurer, Kuvare Holdings, could indicate that there were no other buyers and that it stuck Kuvare with bad assets…

…That became clear on Friday, when Business Insider reported that Blue Owl had trouble raising funds for a $4 billion data center in Pennsylvania.

The project is relatively speculative as these things go, so there could be other reasons why Blue Owl couldn’t raise the cash. The firm said it has considered outside funding and ultimately didn’t need it.

5. History Rhymes: Large Language Models Off to a Bad Start? – Michael Burry

While mining old newspapers on a quiet Saturday – a hobby of mine – I came upon a story from June 19, 1880, that I found relevant to our modern anxieties about AI.

It is the story of Melville Ballard, who, as a child without language, spied with his eyes a tree stump and asked himself if the first man rose out of it.

This 144-year-old case study – presented at the Smithsonian Institution, no less – provides a potentially devastating critique of today’s Large Language Models and the spending behind them. With a simple human story, it boldly announced that complex thought exists in the silence before words…

…There are actually two stories of interest in that old newspaper. Let’s start with the one in the middle. This is Page 3 of this edition of the New York Times, and I see a story called Thought without Language…

…The story concerns one Professor Samuel Porter, of the National Deaf-Mute College at Kendall Green, who presented a paper at the Smithsonian Institution. The paper’s title: “Is There Thought Without Language? Case of a Deaf Mute.”

At the paper’s first discussion of deaf-mutes and children having no form of mental action that distinguishes them from brutes, well, understanding has changed a lot since 1880, and I was ready to dismiss it.

The case study is of a teacher at the Columbia Institute for the Instruction of the Deaf and Dumb. This particular teacher, Melville Ballard, is also a deaf mute and a graduate of the National Deaf Mute College.

Mr. Ballard says that in his infancy he communicated with his parents and brothers by natural signs or pantomime. His father, believing that observation would help to develop his faculties, frequently took him riding.

He continues that it was during a ride two or three years before he was initiated into the rudiments of written language that he began to ask himself the question, “How came the world into being?” and his curiosity was awakened as to what was the origin of human life, its first appearance, the cause of the existence of earth, sun, moon, and stars. At one time, seeing a large stump, he asked himself the question, “Is it possible that the first man that ever came into the world rose out of that stump? But that stump is only a remnant of a once magnificent tree; and how came that tree? Why, it came only by beginning to grow out of the ground, just like these little trees now coming up;” and he dismissed from his mind as absurd the connection between the origin of man and a decaying old stump…

…One of the presentation’s attendees notes, significantly, how Ballard’s eyes conveyed meaning perfectly, without misunderstanding, above all else.

One of the most interesting features of this meeting was Mr. Ballard, by signs, explaining how his mother informed him that he was going a long way to school, where he would read from a book, write and fold a letter, and send it to her, &c., and also, by pantomime, reciting how a hunter, after killing a squirrel, accidentally shot and killed himself. Mr. Ballard’s signs and gestures, with the expression of the eyes and face, conveyed his meaning perfectly to the audience, and, in the words of a member, the expression of the eye was language which could not be misunderstood.

Let us consider these two statements:

  • “That by which we understand all things must be essentially superior to anything else that is understood by it.”
  • “…in the words of a member, the expression of the eye was language which could not be misunderstood.”

In sum,

  1. Language without the Capacity for Reason fails at Understanding
  2. Only with Capacity for Reason does Language unlock Understanding.
  3. Understanding, fully realized, transcends Language.

By putting language first, LLMs build a primitive form of reason purely through logical inference, but this form of reason has been shown to be flawed and prone to hallucination due to limitations at the many ragged edges of knowledge.

The capacity for reason never existed. Therefore, language cannot scale through reason to understanding.

The professor suggests, in his work with deaf and mute people, he has discovered that a capacity for true reason must exist first, before language, so language can unlock understanding — the product of that capacity for true reason and language.

“The expression of the eye is the language which cannot be misunderstood.”

To wit, expression of the eye is what flawless understanding looks like, without the need for language.

Large Language Models, by putting language first, before the capacity for true reason, can never attain understanding…

…The original approach to AI was to generate a true capacity for reason first, but it was never realized, and the field pivoted to language first because it was easier.

This ‘bad start’ has led to a “parameter trap,” where brute-force language processing powered by zillions of power-hungry chips has become an incredibly ironic bottleneck.

As my conversation with Klarna’s Sebastian Siemiatkowski highlighted, the future lies in compression—leveraging ‘System 2’ reasoning-first to work off the redundancy of information and the relatively finite query sets produced by humans to drastically reduce compute needs.

This new line rejects singularity through language models talking to each other in an infinite mirror as a directionless waste of resources made impossible by lack of a basis in economic realities.

While frontiers like Google’s AlphaGeometry and Meta’s Coconut are finally moving toward this ‘reason-first’ architecture, they are essentially rediscovering what was presented at the Smithsonian 144 years ago: that language is the output of understanding, not the engine of reason…

…I mentioned there was another story of interest, and it is on the same page. It is more relevant to the first story than anyone in the 1880s could have guessed it would be in 2026.

This article is “San Francisco’s Wealth, A Population of Bonanza Speculators.”

This story was written June 1 in San Francisco, and only published in the New York Times on June 19th…

…California was pre-eminently the paradise of the man of small capital. To satisfy the craving for speculation, the peculiar open-board system was adopted, whereby the man who had $50 to invest, by purchasing a share therein, could acquire a small interest in a mine at a dollar a share, or two shares at 50 cents, or any number at varying prices.

A “boom” in certain stocks excited the same gambling fever in San Francisco; what was let go by the bonanza firm was eagerly grasped by the people of San Francisco, and, the “boom” having been accompanied by speculative losses on the part of the people, the “boom” disappeared and stocks fell to their normal condition.

The story’s closing hits hard against the reality of today.

The People of San Francisco seem to have become educated to the idea that they must leap into fortune at once, and, their big bonanza at Virginia City having failed, they do not appear to be willing to exert themselves to hunt for wealth in other directions, such as the development of manufacturing, trade, and agricultural interests. Almost the entire population is imbued with the passion for speculation, and if a new bonanza as big as the one in Nevada were to be discovered either there or near here, stocks would mount again to absurd figures, and San Francisco would again pass through the period of flush times to again suffer as she has during the past two years.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google) and Meta Platforms (parent of Facebook). Holdings are subject to change at any time.

What We’re Reading (Week Ending 22 February 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve been regularly sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since the audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 22 February 2026:

1. Google Is Exploring Ways to Use Its Financial Might to Take On Nvidia – Raffaele Huang, Kate Clark, and Berber Jin

The company’s chips are gaining wider adoption for AI workloads, including with startups such as Anthropic, but Google is dealing with myriad challenges as it seeks to grow. The issues include bottlenecks at manufacturing partners and limited interest from cloud-computing rivals that are among the largest buyers of Nvidia processors, according to people familiar with the matter.

To expand its potential market, Google is increasing its financial support to a network of data-center partners that can provide computing power to a broader swath of customers, people familiar with its plans said.

The company is in talks to invest around $100 million in cloud-computing startup Fluidstack, part of a deal that values it at around $7.5 billion, people familiar with the discussions said. Fluidstack is one of a growing number of so-called “neocloud” companies that offer computing services to AI companies and others…

…Google has also held discussions about expanding its financial commitments to other data-center partners that could lead to additional TPU demand, people familiar with the talks said. Google has backstopped financing for projects involving Hut 8, Cipher Mining and TeraWulf, which are former crypto-mining companies that are now developing data centers. Cipher Mining declined to comment. Hut 8 and TeraWulf didn’t respond to requests for comment.

Some managers at Google’s cloud-computing division recently refreshed a longstanding internal debate about restructuring the TPU team into a stand-alone unit, people familiar with those discussions said. Such a plan could potentially allow Google to expand its opportunities to invest, including with outside capital.

One challenge for any potential stand-alone unit is that Google’s cloud business relies heavily on Nvidia chips, some of the people said…

…In 2018, Google started selling access to TPUs through its cloud services. The company has traditionally signed up TPU users through its cloud-computing unit, but it is also selling the TPU chips directly to external customers, according to industry research group SemiAnalysis…

…However, interest from major cloud-service providers appears to be tepid, partly because they consider Google a competitor, according to industry participants. Amazon Web Services, Amazon.com’s cloud unit, has also developed its own chips for AI.

2. 10 Years Building Vertical Software: My Perspective on the Selloff – Nicolas Bustamante

Vertical software is software built for a specific industry. Bloomberg for finance. LexisNexis for legal. Epic for healthcare. Procore for construction. Veeva for life sciences, etc.

These companies share a defining characteristic: they charge a lot and customers rarely leave. FactSet charges $15,000+ per user per year. Bloomberg Terminal costs $25,000 per seat. LexisNexis charges law firms thousands per month. And retention rates hover around 95%.

I would say that there are ten distinct moats. LLMs are attacking some of them while leaving others intact…

…Knowledge workers pay to not relearn a workflow they’ve spent a decade mastering. The interface IS a big part of the value prop…

…LLMs collapse all proprietary interfaces into one Chat…

…Vertical software encodes how an industry actually works. A legal research platform doesn’t just store case law. It encodes citational networks, Shepardize signals, headnote taxonomies, and the specific way a litigation associate builds a brief.

This business logic took years to build. It reflects thousands of conversations with domain experts. When I built Doctrine, the hardest part wasn’t the technology. It was understanding how lawyers actually work: how they research case law, how they draft documents, how they build a litigation strategy from intake to trial. Encoding that understanding into working software was a huge part of what made vertical software valuable—and defensible.

LLMs turn all of this into a markdown file…

…A massive portion of vertical software’s value proposition was making hard-to-access data easy to query. FactSet makes SEC filings searchable. LexisNexis makes case law searchable. These are genuine services. SEC filings are technically public, but try reading a 200-page 10-K in raw HTML. The structure is inconsistent across companies. The accounting terminology is dense. Extracting the actual numbers you need requires parsing nested tables, following footnote references, reconciling restated figures.

Before LLMs, accessing this public data required specialized software and significant engineering scaffolding. Companies like FactSet built thousands of parsers, one for each filing type, each company’s idiosyncratic formatting. Armies of engineers maintained these parsers as formats changed. The code to turn a raw SEC filing into queryable data was a genuine competitive advantage…

…LLMs make this trivial. Frontier models already know how to parse SEC filings from their training data. They understand the structure of a 10-K, where to find revenue recognition policies, how to reconcile GAAP and non-GAAP figures. You don’t need to build a parser. The model IS the parser. Feed it a 10-K and it can answer any question about it. Feed it the entire corpus of federal case law and it can find relevant precedent…

…At Doctrine, hiring was brutal. We didn’t just need good engineers. We needed engineers who could understand legal reasoning: how precedent works, how jurisdictions interact, what grounds for appeal to the supreme court look like. These people barely existed. So we built our own. Every week, we held internal lectures where lawyers taught engineers how the legal system actually worked. It took months before a new engineer was productive. The talent scarcity was a genuine barrier, not just for us, but for anyone trying to compete with us.

At Fintool, we don’t do any of that. Our domain experts (portfolio managers, analysts) write their methodology directly into markdown skill files. They don’t need to learn Python. They don’t need to understand APIs. They write in plain English what a good DCF analysis looks like, and the LLM executes it. The engineering is handled by the model. The domain expertise, which was always the abundant resource, can now become software directly without the engineering bottleneck.

LLMs make the engineering trivially accessible, which means the scarce resource (domain expertise) is suddenly abundant in its ability to become software. This is why the barrier to entry collapses so dramatically…

…Vertical software companies expand by bundling adjacent capabilities. Bloomberg started with market data, then added messaging, news, analytics, trading, and compliance. Each new module increases switching costs because customers now depend on the entire ecosystem, not just one product. S&P Global’s acquisition of IHS Markit for $44B was exactly this strategy. The bundle becomes the moat…

…LLM agents break the bundling moat because the agent IS the bundle…

…Some vertical software companies own or license data that doesn’t exist anywhere else. Bloomberg collects real-time pricing data from trading desks worldwide. S&P Global owns credit ratings and proprietary analytics. Dun & Bradstreet maintains business credit files on 500M+ entities. This data was collected over decades, often through exclusive relationships. You can’t just scrape it. You can’t recreate it.

If your data genuinely cannot be replicated, LLMs make it MORE valuable, not less…

…The test is simple: Can this data be obtained, licensed, or synthesized by someone else? If no, the moat holds. If yes, you’re in trouble…

…The irony is that LLMs accelerate the bifurcation. Companies with proprietary data win bigger. Companies without it lose everything…

…HIPAA doesn’t care about LLMs. FDA certification doesn’t get easier because GPT-5 exists. SOX compliance requirements don’t change because Anthropic released a new plugin…

…In fact, regulatory requirements may slow LLM adoption in exactly the verticals where compliance lock-in is strongest. A hospital can’t replace Epic with an LLM agent because the LLM agent isn’t HIPAA certified, doesn’t have the required audit trails, and hasn’t been validated by the FDA for clinical decision support…

…Some vertical software becomes more valuable as more industry participants use it. Bloomberg’s messaging function (IB chat) is the de facto communication layer for Wall Street. If every counterparty uses Bloomberg, you have to use Bloomberg. Not because of the data. Because of the network.

LLMs don’t break network effects. If anything, they might make communication networks more valuable. The information flowing through these networks becomes training data, context, signal…

…Some vertical software sits directly in the money flow. Payment processing for restaurants. Loan origination for banks. Claims processing for insurance companies. When you’re embedded in the transaction, switching means interrupting revenue. Nobody does that voluntarily.

If your software processes payments, originates loans, or settles trades, an LLM doesn’t disintermediate you. It might sit on top of you as a better interface, but the rails themselves remain essential…

…LLMs don’t directly threaten system of record status today. But agents are quietly building their own.

Here’s what’s happening: AI agents don’t just query existing systems. They read your SharePoint, your Outlook, your Slack. They collect data on the user. They write detailed memory files that persist across sessions. And when they perform key actions, they store that context. Over time, the agent accumulates a richer, more complete picture of a user’s work than any single system of record.

The agent’s memory becomes the new source of truth. Not because anyone planned it, but because the agent is the one layer that sees everything. Salesforce sees your CRM data. Outlook sees your emails. SharePoint sees your documents. The agent sees all three, and remembers…

…The real threat isn’t the LLM itself. It’s a pincer movement that vertical software incumbents didn’t see coming.

From below, hundreds of AI-native startups are entering every vertical. When building a credible financial data product required 200 engineers and $50M in data licensing, markets naturally consolidated to 3-4 players. When it requires 10 engineers and frontier model APIs, the market fragments violently. Competition goes from 3 to 300…

…From above, horizontal platforms are going deep into vertical territory for the first time. Microsoft Copilot inside Excel now does AI-powered DCF modeling and financial statement parsing. Copilot inside Word does contract review and case law research. The horizontal tool becomes vertical through AI, not through engineering…

…For any vertical software company, ask three questions:

1. Is the data proprietary? If yes, the moat holds. If no, the accessibility layer is collapsing.

2. Is there regulatory lock-in? If yes, LLMs don’t change the switching cost equation. If no, switching costs are primarily interface-driven and dissolving.

3. Is the software embedded in the transaction? If yes, LLMs sit on top of you, not instead of you. If no, you’re replaceable.

Zero “yes” answers: high risk. One: medium risk. Two or three: you’re probably fine.
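As a compact restatement of the author's test, a minimal scoring sketch (the three questions and thresholds are his; the function name is mine):

```python
def vertical_software_risk(proprietary_data: bool,
                           regulatory_lock_in: bool,
                           embedded_in_transaction: bool) -> str:
    """Score a vertical software company on the article's three questions."""
    yes_count = sum([proprietary_data, regulatory_lock_in,
                     embedded_in_transaction])
    if yes_count == 0:
        return "high risk"
    if yes_count == 1:
        return "medium risk"
    return "probably fine"  # two or three "yes" answers

# e.g. a credit-ratings business: proprietary data, regulated,
# but not embedded in the money flow.
print(vertical_software_risk(True, True, False))  # probably fine
```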

3. Rebuttal to Nicolas – Unemployed Capital Allocator

I used to work for a relatively large long only shop.

We switched from FactSet to Bloomberg + CapIQ.

We spent approximately 0 seconds discussing the UI change…

…Where does learned UI really matter? Tools with tons of degrees of freedom, and where actions per minute actually do matter. Professional workflow tools. Modelling software. Video editing software. Ones where knowing the shortcut is a decent part of the job.

A text box isn’t replacing this.

The idea is quite alluring – to those that don’t know the UI. Look! You can just tell it to do something and … it does it!

Until you need to do it multiple times. Then you start to go – man, I wish there was a quick way for me to send this prompt, to do this exact thing I want it to do. Oh and remember all the info I’m supposed to provide so that I get back exactly what I want. Maybe I can map it to a button and a keyboard shortcu…

Oh wait – that’s UI.

Text is amazing because it’s universal. Text is also absolutely horrible because it has infinite degrees of freedom and introduces another level of abstraction. This is not what you want when you need to do a lot of specific things, quickly.

Oh and btw – these ‘legacy providers’ with pesky, hard to learn UI and custom codes? They can very easily tack on a text box to help new users – or power users that are doing a new workflow. While providing the flexibility of getting shit done when you need to…

…There’s zero chance that a complex web of markdown files is going to replace business logic entirely.

The reason is quite simple. You do not want to introduce a layer of unpredictability and degrees of freedom into your core business logic. This is the stuff of nightmares even at simple levels. When you introduce complexity and interdependency, it’s a straight line to system failure and bankruptcy…

…I am not sure why an agent would choose one vendor for alerts functionality and another for watchlist and 3rd for news – or how it would even go about doing this – or why this would save money. Maybe these will all be new providers? Maybe the model will just vibe code point solutions as needed? Maybe there will be perfect interoperability between all the modules? Or maybe LLM will learn to translate them all perfectly? I don’t know…

…SoRs exist as the core, singular database of truth that the whole org agrees is the truth.

Why are we splitting this across thousands of markdown files???? With no way to audit, reconcile, track … basically all the things we need a SoR to do????

4. The Golden Age of Software – Unemployed Capital Allocator

There’s a classic CS exercise: write instructions for making a PB&J sandwich, then watch someone follow them literally. “Put peanut butter on the bread” — and they place the sealed jar on top of the loaf. The lesson: every instruction you write is full of assumptions the other person doesn’t share.

This is what’s happening every time you prompt an LLM. You say “build me a user dashboard” and the model fills in hundreds of implicit assumptions about the world that you never specified. And here’s the thing: it’s really good at this. Good enough that the code runs, the demo looks great, and you feel like a genius. But those decisions are educated guesses. The model built you a PB&J. It doesn’t know that you’re allergic to peanuts.

When you’re vibe coding a demo or a small CRUD app, none of this matters. You’re on the happy path, everything works, nobody cares about code quality. It’s beautiful. But enterprise software in the real world is about every path but the happy one — a world where failure on one of those paths means losses that dwarf annual costs…

…So what happens when the market gets carpet-bombed with new products and DIY builds — in a market where customers ask “who else uses this?” as a standard question?

Decision fatigue. Procurement asking, “Who even are these guys?”

In a world where production becomes free, the existing distribution relationship becomes the chokehold. And this is what every incumbent has. Yes — this is the tired old distribution vs. product debate. But I’d argue the current moment makes it more true than it’s ever been, precisely because the supply explosion makes trust, brand, and existing relationships much more valuable…

…While existing relationships hold the line, incumbents also get to play offence.

Your development team now has a new source of leverage. Properly harnessed, everything from research to product creation to debugging and maintenance gets faster. “Where is this logic?” stops being a week-long archaeology expedition. You simply do more with the same team.

In addition, the value ceiling of software today is dramatically higher than it was two years ago. Stuff that was “too expensive,” “too custom,” or “not worth the engineering time” suddenly becomes shippable. LLMs and VLMs have unlocked capabilities that were science projects two years ago…

…What about agents taking over corporate workflows and becoming a key user of software products? Doesn’t that leave a lot of products open to disintermediation?

I have three pushbacks.

First — a lot of workflow shifting to agents is not the same as all workflow shifting to agents. The gap between those two things is enormous, and the bear case tends to hand-wave right past it.

Second — agentic workflow is still a pipeline. And when you have a working production pipeline, you don’t rip out a key component to save a couple thousand bucks. But this isn’t just an inertia argument — it’s a structural one. The agent replacing that component needs to match the accumulated production knowledge baked into the existing solution: every edge case, every integration quirk, every failure mode discovered over years of real-world use. That’s not a matter of writing code. It’s a matter of replicating hard-won context that doesn’t exist in any training set. The idea that agents will vibe code an alternative for a critical piece of a high-speed production system isn’t just unlikely because of switching costs — it’s unlikely because the agent literally doesn’t know what it doesn’t know.

Third — non-humans using software is not a new thing. There’s a whole class of software that is mostly consumed by other software, and these still make amazing businesses. The identity of the user changing from human to agent doesn’t inherently destroy the value of the product.

5. How will OpenAI compete? – Ben Evans

“Jakub and Mark set the research direction for the long run. Then after months of work, something incredible emerges and I get a researcher pinging me saying: ‘I have something pretty cool. How are you going to use it in chat? How are you going to use it for our enterprise products?’”

– Fidji Simo, head of Product at OpenAI, 2026

“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it”

– Steve Jobs, 1997

It seems to me that OpenAI has four fundamental strategic questions.

First, the business as we see it today doesn’t have a strong, clear competitive lead. It doesn’t have a unique technology or product. The models have a very large user base, but very narrow engagement and stickiness, and no network effect or any other winner-takes-all effect so far that provides a clear path to turning that user base into something broader and durable. Nor does OpenAI have consumer products on top of the models themselves that have product-market fit. 

Second, the experience, product, value capture and strategic leverage in AI will all change an enormous amount in the next couple of years as the market develops. Big aggressive incumbents and thousands of entrepreneurs are trying to create new features, experiences and business models, and in the process are trying to turn foundation models themselves into commodity infrastructure sold at marginal cost. Having kicked off the LLM boom, OpenAI now has to invent a whole other set of new things as well, or at least fend off, co-opt and absorb the thousands of other people who are trying to do that.

Third, while much of this applies to everyone else in the field as well, OpenAI, like Anthropic, has to ‘cross the chasm’ across the ‘messy middle’ (insert your favourite startup book title here) without existing products that can act as distribution and make all of this a feature, and to compete in one of the most capital-intensive industries in history without cashflows from existing businesses to lean on. Of course, companies that do have all of that need to be able to disrupt themselves, but we’re well past the point that people said Google couldn’t do AI.

The fourth problem is expressed in the quotes I used above…

…There are something like half a dozen organisations that are currently shipping competitive frontier models, all with pretty-much equivalent capabilities. Every few weeks they leapfrog each other…

…There is no equivalent of the network effects seen at everything from Windows to Google Search to iOS to Instagram, where market share was self-reinforcing and no amount of money and effort was enough for someone else to break in or catch up…

This could change if there was a breakthrough that enabled a network effect, most obviously continuous learning, but we can’t plan for that happening…

…The one place where OpenAI does have a clear lead today is in the user base: it has 800-900m users. The trouble is, these are only ‘weekly active’ users: the vast majority, even of people who already know what this is and how to use it, have not made it a daily habit. Only 5% of ChatGPT users are paying, and even US teens are much more likely to use this a few times a week or less than they are to use it multiple times a day. The data that OpenAI released in its ‘2025 wrapped’ promotion tells us that 80% of users sent fewer than 1,000 ‘messages’ in 2025. We don’t know how that changed over the year (it probably grew) but at face value that’s an average of less than three prompts per day, and many fewer individual chats. Usage is a mile wide but an inch deep…

…OpenAI’s ad project is partly just about covering the cost of serving the 90% or more of users who don’t pay (and capturing an early lead with advertisers and early learning in how this might work), but more strategically, it’s also about making it possible to give those users the latest and most powerful (i.e. expensive) models, in the hope that this will deepen their engagement. Fidji Simo says here that “diffusion and scale is the most important thing.” That might work (though it also might drive them to pay, or drive them to Gemini). But it’s not self-evident that if someone can’t think of anything to do with ChatGPT today or this week, that will change if you give them a better model. It might, but it’s at least equally likely that they’re stuck on the blank screen problem, or that the chatbot itself just isn’t the right product and experience for their use-cases no matter how good the model is.

In the meantime, when you have an undifferentiated product, early leads in adoption tend not to be durable, and competition tends to shift to brand and distribution. We can see this today in the rapid market share gains for Gemini and Meta AI: the products look much the same to the typical user (though people in tech wrote off Llama 4 as a fiasco, Meta’s numbers seem to be good), and Google and Meta have distribution to leverage. Conversely, Anthropic’s Claude models are regularly at the top of the benchmarks but it has no consumer strategy or product (Claude Cowork asks you to install Git!) and close to zero consumer awareness…

…So: you don’t know how you can make your core technology better than anyone else’s. You have a big user base but one that has limited engagement and seems really fragile. The key incumbents have more or less matched your technology and are leveraging their product and distribution advantages to come after the market. And, it looks like a lot of the value and leverage will come from new experiences that haven’t been invented yet, and you can’t invent all of those yourself. What do you do?

For a lot of last year, it felt like OpenAI’s answer was “everything, all at once, yesterday”. An app platform! No, another app platform! A browser! A social video app! Jony Ive! Medical research! Advertising! More stuff I’ve forgotten!  And, of course, trillions of dollars of capex announcements, or at least capex aspirations…

…As we all know, OpenAI has been running around trying to join the club, claiming a few months ago to have $1.4tr and 30 gigawatts of compute commitment for the future (with no timeline), while it reported 1.9 gigawatts in use at the end of 2025…

…But, again, does that get you anything more than a seat at that table? TSMC isn’t just an oligopolist – it has a de facto monopoly on cutting edge chips – but that gives it little to no leverage or value-capture further up the stack. People built Windows apps, web services and iPhone apps – they don’t build TSMC apps or Intel apps.

Developers had to build for Windows because it had almost all the users, and users had to buy Windows PCs because it had almost all the developers (a network effect!). But if you invent a brilliant new app or product or service using generative AI, or add it as a feature to an existing product, you use the APIs to call a foundation model running in the cloud and the users don’t know or care what model you used. No-one using Snap cares if it runs on AWS or GCP. When you buy an enterprise SaaS product you don’t care if it uses AWS or Azure. And if I do a Google Search and the first match is a product that’s running on Google Cloud, I would never know…

…As I’ve written this essay, I’ve returned again and again to terms like platform, ecosystem, leverage and network effect. These terms get used a lot in tech, but they have pretty vague meanings. Google Cloud, Apple’s App Store, Amazon Marketplace, and even TikTok are all ‘platforms’ but they’re all very different.

Maybe the word I’m really looking for is power. When I was at university, a long time ago now, my medieval history professor, Roger Lovatt, told me that power is the ability to make people do something that they don’t want to do, and that’s really the question here. Does OpenAI have the ability to get consumers, developers and enterprises to use its systems more than anybody else, regardless of what the system itself actually does?…

…Foundation models are certainly multipliers: massive amounts of new stuff will be built with them. But do you have a reason why everyone has to use your thing, even though your competitors have built the same thing? And are there reasons why your thing will always be better than the competition no matter how much money and effort they throw at it? That’s how the entire consumer tech industry has worked for all of our lives. If not, then the only thing you have is execution, every single day. Executing better than everyone else is certainly an aspiration, and some companies have managed it over extended periods and even persuaded themselves that they’ve institutionalised this, but it’s not a strategy.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google and Google Cloud), Amazon (parent of AWS), Apple, Meta Platforms, Microsoft (parent of Azure), Salesforce, and TSMC. Holdings are subject to change at any time.

What We’re Reading (Week Ending 15 February 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 15 February 2026:

1. Before the market declared SaaS dead, it should have tested Anthropic’s new tools first. We did – Jim Wagner

Lawyers are not early adopters by temperament, and they don’t grade on a curve. A tool that reviews a contract and misses a material protection doesn’t get classified as “promising but incomplete.” It risks being shelved. Permanently. The standard is binary: either the tool is reliable enough that I can build a workflow around it, or it isn’t. There is no middle ground where a legal team says “it caught seven out of ten critical issues, so let’s use it for now.”

This is especially true in regulated environments — clinical trials, financial services, healthcare — where a missed clause isn’t an aesthetic problem. It’s a liability exposure, a regulatory finding, or a damaged institutional relationship. The question isn’t whether AI can review contracts. It can. The question is whether it can do so at the threshold required for a professional to rely on it…

…A clinical trial agreement is a different animal. It’s longer, more technically complex, and touches regulatory frameworks — HIPAA, FDA reporting obligations, IRB oversight, 21 CFR Part 54 financial disclosure — that require genuine domain expertise. The provisions interact with each other in ways that matter: a change to monitoring visit procedures can impact confidentiality obligations; a publication review period needs to account for patent deferral timelines; a subject injury provision needs to include a safe harbor for protocol deviations made to protect patient safety.

Once again, we gave Claude the identical playbook TCN uses — one specifically structured for AI consumption, with clear logic and well-defined positions — and ran both systems against the same clinical trial agreement.

The gap didn’t narrow. It widened.

TCN made 101 insertions of required protective language and 62 targeted deletions — 163 substantive changes in total. Claude made 7 insertions and 4 deletions. Tellingly, Claude’s changes were largely find-and-replace-level revisions: substituting “immediately” with “promptly,” replacing “sole” with “reasonable,” increasing an insurance figure, and adding pandemic language to a force majeure clause. These are real edits. They are also the edits a first-year associate would make in the first twenty minutes of review…

…These results are not a reflection of Claude’s quality as a language model. Claude is an extraordinarily capable general-purpose AI, and we use it daily in our own work. The gap is a reflection of architecture and ambition.

Claude’s legal plugin reads an entire agreement and an entire playbook, then attempts to produce all of its analysis and redlines in a single pass. This is analogous to asking a lawyer to read a thirty-page contract and a fifty-topic playbook simultaneously, then dictate every markup from memory in one sitting. Issues inevitably get lost — not because the lawyer lacks ability, but because the task exceeds what any single-pass process can reliably accomplish.

A purpose-built system works differently. Each playbook position is matched against the agreement independently and analyzed in a dedicated step with only the relevant clause text and guidance in front of it. Nothing competes for attention. Every position in the playbook is programmatically guaranteed to be evaluated. The system doesn’t need to “remember” to check a provision — it cannot skip one.

This also explains why the gap widened on the longer, more complex clinical trial agreement. The more provisions, the more playbook positions, and the more regulatory context a single-pass system must hold in working memory simultaneously, the more it drops. A purpose-built pipeline scales linearly. A single-pass approach degrades…
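To make the author’s contrast concrete, here is a minimal Python sketch of the two architectures as described above – the single-pass plugin versus the per-position pipeline. This is our illustration, not TCN’s or Anthropic’s actual code; `call_llm` and `extract_clause` are stand-in stubs for a model call and a clause-retrieval step:

```python
# Illustrative sketch (ours) of the architectural difference described above.

def call_llm(prompt: str) -> str:
    """Stub for a model call; a real system would invoke an LLM API here."""
    return f"[analysis of {len(prompt)} chars of context]"

def extract_clause(agreement: str, position: str) -> str:
    """Stub: a real pipeline would retrieve only the clause relevant to `position`."""
    return agreement[:500]

def single_pass_review(agreement: str, playbook: dict[str, str]) -> str:
    # Single pass: the whole agreement and every playbook position share one
    # context window, so positions compete for attention and can get dropped.
    prompt = agreement + "\n\n" + "\n".join(f"{k}: {v}" for k, v in playbook.items())
    return call_llm(prompt)

def pipeline_review(agreement: str, playbook: dict[str, str]) -> dict[str, str]:
    # Purpose-built pipeline: one dedicated step per playbook position. The
    # loop guarantees every position is evaluated; nothing relies on the
    # model "remembering" to check a provision.
    return {
        position: call_llm(f"Clause: {extract_clause(agreement, position)}\nGuidance: {guidance}")
        for position, guidance in playbook.items()
    }
```

The pipeline costs more model calls, but each call sees a small, fixed amount of context regardless of contract length – which is why, as the author notes, it scales where a single pass degrades.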

…The stock market’s reaction treated Anthropic’s announcement as if a general-purpose model with a vertical plugin is architecturally equivalent to purpose-built vertical software. It isn’t — and the evidence is now available for anyone willing to run an actual test.

But there’s a more fundamental point. Nothing Anthropic announced addresses multi-document congruence, multi-party collaboration, or institutional workflow orchestration. A Claude user reviewing a clinical trial agreement operates in a single chat window with a single document. The protocol, consent form, budget, and coverage analysis — all of which must be internally consistent with the contract — exist nowhere in that workflow. Imagine five users with five separate skills in five disconnected chat windows, each trying to keep their work coordinated, cross-checked, and accurate. There is no shared data model. No audit trail. No collaboration layer. No mechanism to ensure that a change to the protocol ripples correctly through the budget, the consent form, and the contract.

The natural counterargument is that agentic AI frameworks — autonomous agents that chain tasks, manage state, and coordinate across documents — will close this gap. They will have an impact; we use them ourselves and take that seriously. But agentic frameworks don’t arrive pre-built with plug-and-play domain solutions. They are tools, not answers. An agent orchestrating clinical trial study startup still needs a deep contextual understanding of the subject matter, the stakeholder requirements, and the interconnectedness of every document and every party involved. It needs to know that a change to a protocol’s schedule of events must ripple through the budget, the consent form, and the coverage analysis — and it needs to know how. That’s not something you install. It’s something you build — substantial work that relies on deep expertise in both the subject matter and AI implementation, refined across thousands of agreements. The same architectural principles that separate a plugin from a platform will separate a generic agent from a team of purpose-built ones.

2. As AI enters the operating room, reports arise of botched surgeries and misidentified body parts – Jaimi Dowdell, Steve Stecklow, Chad Terhune and Rachael Levy

In 2021, a unit of healthcare giant Johnson & Johnson announced “a leap forward”: It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses. Acclarent said the software for its TruDi Navigation System would now use a machine-learning algorithm to assist ear, nose and throat specialists in surgeries.

The device had already been on the market for about three years. Until then, the U.S. Food and Drug Administration had received unconfirmed reports of seven instances in which the device malfunctioned and another report of a patient injury. Since AI was added to the device, the FDA has received unconfirmed reports of at least 100 malfunctions and adverse events.

At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations…

…In May 2023, Dean was using TruDi in another sinuplasty operation when patient Donna Fernihough’s carotid artery allegedly “blew.” Blood “was spraying all over” – even landing on an Acclarent representative who was observing the surgery, according to a lawsuit Fernihough filed in U.S. District Court in Fort Worth against Acclarent and several manufacturers. One of Fernihough’s carotid arteries was damaged. She suffered a stroke the day of the surgery, according to her suit.

Acclarent “knew or should have known that the purported artificial intelligence caused or exacerbated the tendency of the integrated navigation system product to be inconsistent, inaccurate, and unreliable,” the suit alleges.

Acclarent has denied the allegations in both suits, which are ongoing, according to court filings. The company says it did not design or manufacture the TruDi system but only distributed it, according to court filings. Acclarent’s owner, Integra LifeSciences, told Reuters there’s no evidence of a link between the AI technology and any alleged injuries…

…Reuters found that at least 1,401 of the reports filed to the FDA between 2021 and October 2025 concern medical devices that are on an FDA list of 1,357 products that use AI. The agency says the list isn’t comprehensive. Of those reports, at least 115 mention problems with software, algorithms or programming.

One FDA report in June 2025 alleged that AI software used for prenatal ultrasounds was misidentifying fetal body parts. Called Sonio Detect, it uses machine learning techniques to help analyze fetal images.

“Sonio detect software ai algorithm is faulty and wrongly labels fetal structures and associates them with the wrong body parts,” stated the report, which does not say that any patient was harmed. Sonio Detect is owned by Samsung Medison, a unit of Samsung Electronics. Samsung Medison said the FDA report about Sonio Detect “does not indicate any safety issue, nor has the FDA requested any action from Sonio.”…

…The FDA requires clinical trials for new drugs, but medical devices face different screening. Most AI-enabled devices coming to market aren’t required to be tested on patients, according to FDA rules. Instead, makers satisfy FDA rules by citing previously authorized devices that had no AI-related capabilities, says Dr. Alexander Everhart, an instructor at Washington University’s medical school in St. Louis and an expert on medical device regulation.

Positioning new devices as updates on existing ones is a long-established practice, but Everhart says AI brings new uncertainty to the status quo.

“I think the FDA’s traditional approach to regulating medical devices is not up to the task of ensuring AI-enabled technologies are safe and effective,” Everhart told Reuters. “We’re relying on manufacturers to do a good job at putting products out. I don’t know that what’s in place at the FDA represents meaningful guardrails.”

3. Clouded Judgement 2.13.26 – Build vs Buy – Jamin Ball

The cost of creating software is going to zero. The risk isn’t that someone will vibe code an internal CRM replacement…The risk is that 10 companies could now create a new CRM, from the ground up, built with a new end user in mind (agents vs people), with a business model for the AI world (consumption / usage vs seats), and now all of a sudden the market is flooded with offerings and the legacy space commoditizes.

This, to me, is the real risk. Software broadly commoditizes, with a new crop of software / value emerging. A big constraint to the development of software is engineering resources. Before the cloud, a constraint was how quickly you could stand up racks of servers to support user growth. In the cloud era that was commoditized, and engineering resources became the constraining factor (how quickly you could develop software). With AI, that constraining resource (engineering velocity) is going away.

So what happens from here…The world is about to be flooded with software. Companies that can’t innovate and capture this next S-Curve of innovation will slowly fade into irrelevance. They will be valued as companies in a post-growth industry, and receive a post-growth valuation multiple (see ya revenue multiples…). For those who can, a new vector of growth lies ahead of them…

…If we bring this back to the “is software dead” conversation, many are pointing to the recent Q4 earnings reports (we’re in the middle of earnings season right now) as “evidence” that AI isn’t eating software. For the most part, earnings have been good! Retention figures don’t seem to show any sign of cracking. However, I found an awesome graphic floating around X this week (copied below). It showed an index of newspaper companies’ stock performance and earnings over time (starting in 2002). What you’ll see is that the voting machine of the market saw the disruption coming from the internet, and started to discount the newspaper stocks right away. From 2002 to 2009 those stocks basically went down in a straight line. However, if you look at earnings estimates for that same set of companies, they actually grew for about 5 straight years! During that time, the stocks continued to drop. It wasn’t until 2007 that the earnings really started to get disrupted. Earnings then fell off a cliff. All of this to say – don’t take too much comfort in the short-term quarterly results 🙂 Disruption generally takes a bit longer.

4. Earnings Drive Stocks – Matt Cerminaro

Below I’m showing you the net income share vs the market cap share of each sector within the S&P 500 since 2005…

…Each color represents a sector. Net income share is on the left and market cap share is on the right.

Let’s start on the left.

See how the Technology Sector’s net income share has grown over time? It’s the light blue shade at the bottom of the chart.

Now look at the chart on the right.

That same light blue shade rising over time is the market cap share of Tech growing concurrently with the net income share.

Energy, the orange shade, used to command a larger share of the S&P 500’s overall net income, but it has shrunk over time.

Its market cap share has done the same.

5. AI and the Economics of the Human Touch – Adam Ozimek

The player piano, or pianola, was invented by Edwin Votey in 1895. At first it was a stand-alone machine that would be pushed up against an existing piano, like the one shown below.

Within a few years, player pianos could be built into the pianos themselves. The machines “read” music that was encoded onto rolls of paper. The notes were represented as holes in the paper that directed pneumatic airflow, which then pushed down the levers that depressed the piano keys.

The only role for humans to play in the functioning of a player piano was to pump the pneumatic foot pedals to keep the piano playing. No need for a skilled human piano player.

And yet, despite the technology to fully automate the job having been invented more than a century ago, people still make a living playing the piano today.

The job is not just limited to piano players performing in ticketed concert events, which of course are quite common. Hotels, bars, and restaurants continue to hire live piano players to provide background music as if it was 1894, the year before the invention of the pianola, which itself is hardly ever used anymore.

Listeners simply prefer music from a piano player rather than a player piano…

…In 2007, a restaurant entrepreneur named Jack Baum was teaching an executive MBA program at Southern Methodist University. He challenged the class to come up with a way to help restaurant customers pay their bill faster than simply waiting for the server to bring the check. Three students arrived at such a compelling answer that the four of them turned it into a company called Ziosk.

Ziosk’s tabletop ordering system provides customers with a tablet that allows them to order, pay, play games, enter coupons, and much else. Thus was born the ability to automate away the job of waiter.

The tablet debuted at 125 Chili’s locations in 2013, and today they are in thousands of restaurants. Ordering devices like this are much more commonplace today, including QR codes that allow customers to order from their own smartphones.

On paper, the job of waiter has been fully automated for over a decade. And yet, today there remain 1.9 million waiters across the US. It’s true that this number has dipped recently, and is slightly below the historical peak. Under the pressure of automation, the BLS forecasts that it will further decline within the next decade… by 1 percent. Is that the worst that full automation can do to this job?…

…Consider first that even some restaurants that have implemented automation nevertheless have wait staff. At Olive Garden, you can order and pay from a provided tablet at any point, but you still have a waiter who greets you, offers to take your order if you don’t want to use the tablet, and checks in on you throughout the meal. If you wait long enough, they will even bring the check. That is a strong signal that the waiter is adding value above and beyond automation…

…If productivity surges from AI, the United States will become a far richer country per capita. It’s not clear whether this will translate into much faster income growth for the median workers. In recent decades, after all, median wage growth has lagged mean wage growth — likely reflecting the trend that overall productivity growth has exceeded the growth in productivity of the typical worker.

Median wage growth has been positive, so it is not true that the typical worker fails to benefit from faster productivity growth. But the benefit for the typical worker is not proportional to the economy-wide growth in productivity, raising the spectre that future productivity growth could be even less proportional.

The result would be rising income inequality — which can straightforwardly be offset with policies that redistribute income. Redistribution might be expensive, but the same AI-driven economic growth that generated the rising inequality would also create the fiscal space needed to offset it. In short, spreading income around is a political challenge, not a policy or economic challenge.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

Company Notes Series (#13): SR Bancorp

Editor’s note: This is the latest edition in the “Company Notes Series”, where we periodically share our notes on companies we’ve studied in the recent past but currently have no vested interest in (we may invest in or sell shares in the companies mentioned at any time). The notes are raw and not updated, and the “as of” date for the data is given at the start of the notes. The first 12 editions in the series can be found here, here, here, here, here, here, here, here, here, here, here, and here. Please share your thoughts on the series through the “Contact Us” page; your feedback will determine if we continue with it. Thanks in advance!

Start of notes for SR Bancorp

Data as of 2025-04-24

General background on SR Bancorp

  • SR Bancorp (ticker symbol “SRBK”) is the holding company for Somerset Regal Bank.
  • Somerset Regal Bank was established in 1887 as The Bound Brook Building and Loan Association; it became Somerset Regal Bank in 2023.
  • Somerset Regal Bank conducted a standard conversion that was completed in September 2023; trading of SR Bancorp shares on the NASDAQ exchange started on 20 September 2023. When the IPO was completed, SR Bancorp had 9,507,930 shares outstanding.
  • A day prior to the conversion, SR Bancorp acquired Regal Bancorp and its subsidiary, Regal Bank. The Somerset Regal Bank of today is thus the combination of Somerset Savings Bank and Regal Bank.
  • SR Bancorp’s branches are under either the Somerset Savings Bank banner or the Regal Bank banner; all the branches are in the northeastern part of the state of New Jersey in the USA.
Figure 1; Source: IPO prospectus
  • SR Bancorp engages primarily in the lending of fixed-rate and adjustable-rate commercial real estate and residential mortgage loans to individuals. Within commercial real estate loans, most of them are in multi-family loans, which are still related to residential real estate (see Figure 2). Loan-to-value ratios for the loans are acceptable: generally no more than 75% for commercial loans, 80% for multifamily loans and 80% for residential loans (residential mortgage loans granted in excess of the 80% loan-to-value ratio criterion generally require private mortgage insurance). Nearly all of SR Bancorp’s loan portfolio is in New Jersey.
Figure 2; Source: SR Bancorp FY2025 Q2 10-Q

Investing information on SR Bancorp

  • SR Bancorp is a thrift conversion – see here for how to invest in thrifts
  • As of 31 December 2024, SR Bancorp had total assets of US$1.065 billion and shareholders’ equity of US$0.198 billion, giving a total equity to assets ratio of an excellent 18.6%. SR Bancorp’s total assets include securities held-to-maturity at amortized cost of US$148.8 million as of 31 December 2024; these securities have a marked-to-market value of US$122.6 million. If SR Bancorp’s shareholders’ equity is adjusted for the marked-to-market value, it would be US$0.172 billion, which would give a total equity to assets ratio of a still-robust 16%.
  • As of 24 April 2025, SR Bancorp has a stock price of US$13.23. Its latest financials (for the 3 months ended 31 December 2024) show adjusted tangible shareholders’ equity (adjusted for the mark-to-market value of securities and for intangible assets) of US$0.144 billion, and a share count of 9,255,948, giving an adjusted tangible book value per share of US$15.56 and thus a price-to-tangible book (PTB) ratio of 0.85. If tangible shareholders’ equity without the mark-to-market adjustment were used, the tangible book value per share would be US$18.45 and the PTB ratio would be even better at 0.72.
  • On 20 September 2024, SR Bancorp adopted a programme to repurchase up to 950,793 shares, which was around 10% of its outstanding share count at the time. Under the programme, SR Bancorp’s management had bought back 347,067 shares as of 31 December 2024, at an average price of US$11.29 each. Considering SR Bancorp’s low PTB ratio, the buybacks are accretive to shareholder value. Moreover, the adoption of the repurchase programme happened exactly on the first anniversary of the thrift’s IPO, which is the earliest date on which a converted thrift can start repurchasing shares; this is a sign that management understands capital allocation and is trying to do the right things for shareholders.
  • SR Bancorp has no non-performing assets as of 31 December 2024. Non-performing assets were 0.00% and 0.03% of total assets in FY2024 (fiscal year ended 30 June 2024) and FY2023. This points to well-run lending practices.
  • SR Bancorp’s annualised return on average equity in the first half of FY2025 was a decent (for a thrift!) 2.47%.
  • SR Bancorp’s three senior-most leaders are:
    • William Taylor, CEO of SR Bancorp and Somerset Regal Bank, and Chairman of Somerset Regal Bank; Taylor has been CEO since 2013, and Chairman since 2018; Taylor is already 67
    • Christopher Pribula, President and COO of SR Bancorp and Somerset Regal Bank; Pribula has been COO since 2013; Pribula is already 60
    • David Orbach, Executive Chair of SR Bancorp and Executive Vice Chair of Somerset Regal Bank; Orbach had been Executive Chairman of Regal Bancorp since its formation and of Regal Bank since 2011; Orbach is only 51
  • The compensation of Taylor, Pribula, and Orbach is reasonable, as shown in Figure 3 below. As of 12 February 2024, Taylor, Pribula, and Orbach control 49,269 shares, 30,166 shares, and 133,919 shares respectively; based on SR Bancorp’s share price of US$13.23 as of 24 April 2025, the values of their stakes are US$0.652 million, US$0.399 million, and US$1.77 million, respectively. For Orbach, who has the most shares among the leadership team, his equity value significantly outstrips his annual compensation.
Figure 3; Source: SR Bancorp FY2024 proxy filing
  • Taylor, Pribula, and Orbach have compensation plans that include change in control provisions. In the event that SR Bancorp or Somerset Regal Bank is acquired and the trio’s employment ends, they are each entitled to a severance payment that is equal to 3x the sum of (1) their highest base salary in the three years before their termination, and (2) their average annual total incentive bonus for the three years before their termination. In addition, the terminated executive would also receive a lump sum payment equal to the value of the cost of 36 months of health care.
  • Putting everything together, it appears that SR Bancorp is a thrift with (1) a low valuation, (2) a management team that understands capital allocation, (3) well-run lending operations, (4) a management team with reasonable capability in running a profitable banking operation, and (5) a management team with reasonable compensation and some incentive to sell the bank. SR Bancorp’s standard conversion was completed in September 2023, so the earliest it can sell itself will be September 2026. The ages of Taylor and Pribula suggest that they would be very open to selling SR Bancorp, but Orbach is still relatively young, so Orbach’s age could be a “risk” of SR Bancorp choosing to remain independent – the saving grace is that Orbach’s equity value significantly outstrips his annual compensation, as mentioned earlier.
  • Assume that SR Bancorp (a) has a return on equity of 2.5% each year, (b) has a P/TB ratio that consistently hovers at 0.7, (c) uses up its repurchase programme by April 2026, and subsequently buys back 5% of its outstanding shares annually, and (d) gets acquired at a P/TB ratio of 1.4 eventually. Under such a scenario, the returns we could theoretically earn are shown in Table 1; a rough sketch of the arithmetic follows the table below.
Table 1
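Table 1 itself is not reproduced in these notes, so here is a minimal sketch of the scenario arithmetic under assumptions (a) to (d) above. It is our reconstruction rather than the actual table, it starts from the US$13.23 price and US$15.56 adjusted tangible book value per share mentioned earlier, and it simplifies by applying the 5% annual buyback from year one:

```python
# Rough reconstruction (ours, not the actual Table 1) of the scenario math.
# Assumptions per the bullet above: 2.5% ROE, buybacks of 5% of shares a year
# executed at 0.7x tangible book, and an eventual sale at 1.4x tangible book.

price, bvps = 13.23, 15.56                   # purchase price; adjusted tangible book value/share
roe, f, r, exit_ptb = 0.025, 0.05, 0.7, 1.4  # ROE, buyback fraction, buyback P/TB, exit P/TB

for year in range(1, 6):
    bvps *= 1 + roe                          # retained earnings grow book value
    # Buying back fraction f of shares at r x BVPS is accretive when r < 1:
    # BVPS_new = BVPS * (1 - f*r) / (1 - f)
    bvps *= (1 - f * r) / (1 - f)
    exit_value = exit_ptb * bvps             # acquisition at 1.4x tangible book
    print(f"sale after year {year}: ~US${exit_value:.2f}/share, ~{exit_value / price:.2f}x cost")
```

Under this sketch, an acquisition after three years would return roughly 1.9x cost; the real Table 1 may differ in timing and detail.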

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What We’re Reading (Week Ending 08 February 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 08 February 2026:

1. Software Is Dead. Long Live Software – Eugene Ng

SaaS software stocks have declined significantly since Oct 2025 amid broader concerns that software is being disrupted, displaced, and replaced by AI. Companies will use AI to redesign and unbundle their workflows over time, and the markets are effectively pricing in a software apocalypse.

The selloff has been almost indiscriminate, and the market is overly pessimistic…

…Software is a digital tool. It does not make sense to keep reinventing tools (e.g., a calculator or a hammer). If there are new tasks that have not yet been automated and can now be automated with software, now is the best time. Software is a TAM accelerator, and companies can create new and more products in shorter time frames.

The future appears to be agentic, with agents constituting the new digital workforce for humans, working for us and with other agents on exploratory, low-value, and repetitive tasks, thereby allowing us to focus on higher-value creative and strategic tasks.

The fact that everyone has a pen or a keyboard does not mean that we will have a rush of great writers, authors, or coders. The best work will still be done by the select minority, not the vast majority. Writing code is easy. Shipping a basic V1 is just 1% of the work. 99% of building enterprise software is about writing code that actually works and keeps working, maintaining it, iterating on it, securing it, and scaling it, and that is where the real difficulties lie. Vibe coding might be incredible for prototypes, internal tools, and new products, but it is not replacing a proven tool.

It is the same with AI. It does not mean that, if one can code faster with AI assistants, one can write great code or develop a great product. It still requires deep understanding, intent, judgment, and taste. And that’s where the bottleneck lies. Try getting a first-year coder to “vibe-code” and build a massive CRM database, and you will soon realise that it is not as easy as it sounds. Automation scales whatever structure already exists. Agents tend to work best when intent is explicit and stable, and struggle when it is implicit and judgment-intensive.

SaaS is heterogeneous, not homogeneous. One cannot simply be lazy and lump everything into a single category of thought. The idea that enterprises will dump all software to “vibe-code” their own software with AI agents is wildly optimistic. Larger, more complex SaaS platforms with substantial codebases, deep workflows, extensive API connectors/regulatory licenses, strong network effects, and extensive hardware infrastructure are likely to be more insulated.

Deterministic systems, where precision is critical and non-negotiable 100% of the time, are likely to be more insulated, as “close enough” is simply unacceptable. Probabilistic systems, conversely, tend to tolerate some errors and accept good-enough performance, and are primarily focused on pattern recognition, content generation, basic automation, and simple decision-making. If an LLM can replicate your probabilistic product with 90% of the quality at 10% of the cost, you are likely not to have a sound business model any longer. Even having a great UI or UX won’t save you.

High-value, mission-critical, must-have software is likely to be more insulated than low-value, non-mission-critical, good-to-have software. Functions such as cybersecurity, payments, and infrastructure are likely to remain robust. Because when these go down, the business stops. Customers should continue to be willing to pay premium prices for quality and peace of mind, remain highly sticky, and rarely switch because the cost of failure is too high. They tend to have high gross retention (customers don’t leave), high net retention (customers spend more over time), and are willing to pay more as their business grows.

2. The Utilities Analyst Who Says The Data Center Demand Story Doesn’t Add Up (Transcript here) – Tracy Alloway, Joe Weisenthal, and Andy DeVries

Tracy: Interesting. One of the reasons we wanted to talk to you is because you have that contrarian take on the data center built out, and we wrote it up in the Odd Lots newsletter, which everyone should subscribe to. It got a lot of attention. Your analysis, interestingly, is just based on some pretty simple math. So why don’t you, just to start out with, why don’t you walk us through the calculations that you’re actually making to try to analyze how much capacity the utilities are taking on to actually power data centers?

Andy: As you said, it’s pretty simple math here. So data centers now are consuming around 45 GW of power. And you can switch between capacity and throughput – I’m going to stick with capacity. So 45 GW of power. And then there’s lots and lots of third party estimates for where they’re going to be in 2030, and they are centered around this, 90 GW, 95 GW. So you need to add 50 GW. For 2035, there’s a lot fewer estimates. You come around 160 GW. These estimates, they’re all over the place, they come from sell-side banks, they come from consultants, they come from everyone. BNEF has one. They’re I think one of the best out there.

Joe: Thank you.

Andy: We use them a lot. So that’s on the demand side on where you’re going to come out on these. Then you look at the supply – and everyone talks about the demand right – but then you look at the supply and all these tech bros are too cool to actually look at the supply and do utility analysis. Who wants to be a utility analyst? You were making fun of us before. So you look at the supply and these utilities are tracking all these data centers connecting to the grid because they’ve got to do a lot of work. Spend a lot of money on transmission, distribution, new substations, transformers, it’s a lot of work. But it boosts their earnings growth so they’re happy to talk about this. You look at where they’re at and where they see things coming, they’ve got around 140 GW of near-term supply. Kudos to the utilities, they break out what’s firm, committed, signed, contracted, versus pipeline behind it. Because there’s a lot of double, triple, quadruple counting. If you’re going to build a data center in the Southeast, you’re going to tell Duke, you’re going to tell Southern, you’re going to tell Dominion, you’re going to build one. So that’s the pipeline potential. But looking just at the firm, committed, whatever they want to call it, around 140 GW.

Now you got to PUE adjust that. When you connect a data center to the grid, you’ve got lights, you’ve got cooling. Those third party estimates I gave you are just for raw compute.

Tracy: Why did you split those out though? All data centers are going to need to be cooled down, right? What’s the point of splitting it out?

Andy: I’m not splitting it up. I’m just adjusting it downward, because the third-party estimates are just compute. So you’re connecting to the grid, you’re going to ask for the lights, the cooling and everything. I want to go apples to apples versus the third party.

Joe: What does PUE stand for? 

Andy: Power usage effectiveness. So they’re at 140 GW. So that power is down to 110 GW on an apples-to-apples basis. Just to go back, you only need 50 GW on the demand side between now and 2030. The utilities are working on connecting 110 GW, so the utilities are already working on connecting almost as much as you need by 2035. Again, just to make sure we are on the same page: third-party estimates say 45 GW for data centers now, going to 95 GW. That’s 50 GW. Utilities are working on 110 GW. They don’t give timing for that. Some of it’s going to be past 2030. What I’m trying to say is there is a lot of supply of data centers coming and it’s very unclear if there’s going to be demand for this…
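Andy’s arithmetic compresses into a few lines. The sketch below is ours, and the implied PUE of roughly 1.27 is our inference from his 140 GW-to-110 GW adjustment – the transcript doesn’t state the exact figure:

```python
# Back-of-envelope version (ours) of the supply-and-demand math above.

demand_now, demand_2030, demand_2035 = 45, 95, 160   # GW of raw compute (third-party estimates)
firm_grid_supply = 140                               # GW the utilities call firm/committed

pue = 140 / 110                                      # ~1.27 grid watts per compute watt (inferred)
compute_equivalent_supply = firm_grid_supply / pue   # ~110 GW, apples to apples with demand

print(f"extra compute needed by 2030: {demand_2030 - demand_now} GW")            # 50 GW
print(f"extra compute needed by 2035: {demand_2035 - demand_now} GW")            # 115 GW
print(f"firm supply already in the works: ~{compute_equivalent_supply:.0f} GW")  # ~110 GW
```

On these numbers, the firm pipeline alone covers the 2030 demand estimate twice over and nearly all of the 2035 estimate, which is the core of the oversupply argument.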

…Tracy: The wild card to me seems to be the demand forecasts. We’re already seeing those change pretty wildly. I know you mentioned Bloomberg NEF – they’ve raised their forecast, because of the data center buildout. They’ve raised their forecast of how much energy is actually needed. How much confidence do you have in those demand numbers, and how could they change over time?

Andy: Moderate confidence. Look at where we are right now. OpenAI built all of ChatGPT using 2 GW. All the big tech hyperscalers haven’t given their 2025 volumes yet, but if you take their 2024 volumes and then double it – and this is output, so I’m going to transfer it back to capacity – and you assume a 60% capacity factor, all the hyperscalers combined are around 15 GW. That’s got to be over half the data center demand. To talk about 95 GW – it’s a staggering number. Then you get more advances in Nvidia chip efficiency – obviously Jevons’ Paradox kicks in, you’ve had numerous guests talk about that – it’s just a lot of power.

Tracy: Can you just remind us 1 GW is enough to power what? I like these comparisons.

Andy: A million homes. It depends if you’re in Florida or the northeast. But generally speaking, that’s where you’re at…

…Andy: But then you don’t need as many new power plants as everyone’s saying. Constellation’s CEO made this point on a call the other day. He said, “Use the Texas market.” He said, “87 GW peak market, you could add 10 GW to Texas tomorrow, which would be the equivalent of sending every single Nvidia chip for an entire year to Texas and running them 24/7. That’s 10 GW. You could run it right now, existing grid, existing plants for all but 40-50 hours a year.” We stress tested it. There are some coal plants that could ramp up capacity factor. There’s plenty of gas plants that can. I don’t know if it’s 40 hours, 100 hours, 140 hours, but it makes more sense to pay someone else not to run their chemical company, the refinery company, for 40-50 hours a year, rather than have the utilities go out and spend $10 billion connecting faraway wind farms. That’s the argument. We’ve come in the middle of it, but there is plenty of existing capacity on the grid that could ramp up to meet it. Then other guests on Odd Lots have pointed out that the peak demand of the grid is 850 GW. The overall size of the grid is around 1,200 GW, and then you’re adding 50 GW a year of solar, and then you’re going to start adding 20 GW of gas. We’re going to handle it. I’m not really worried about any brownouts or anything.

3. Incentives > Intelligence: The Real Barrier(s) to Agentic AI – Abdullah Al-Rezwan

Such a “disingenuous yet clever” strategy offers a good glimpse of the barriers to agentic AI’s adoption. While most of us focus too much on the technical capabilities of AI, we may still be underestimating the challenges related to the (lack of) incentives of incumbents, as well as the legal frameworks needed for agentic AIs to flourish. “Ghosts of Electricity” had a very good piece explicitly laying out a couple of real headaches:

“we highlight two main obstacles that stand in the way of AI agents becoming true digital partners. The first has to do with the design of the internet itself–the interface of nearly every website was meticulously optimized for humans. But what works for humans does not necessarily work for AI agents. Until AI can truly emulate every aspect of a human being, we will likely need to design a parallel internet for agentic commerce to work. But there’s reasons to suspect that this will not happen soon: some firms have little to gain, and potentially much to lose, from investing and facilitating a machine-readable web. This leads us to the second obstacle, which is even simpler: many use-cases for AI agents are illegal, or at least legally ambiguous. The rights around AI agents need to be clarified and developed in order for agents to participate meaningfully in economic transactions and interactions.”

In the piece, they substantiated these headaches with a couple of examples. Some excerpts below:

“Let’s say you tell your favorite AI tool (ChatGPT Atlas, Perplexity Comet, Claude, Gemini Antigravity) to purchase a concert ticket for you or to shop on Amazon. Take seat selection. The agent reaches the seat map and gets stuck because it can’t tell what’s actually available or what counts as a “good” choice. The map isn’t a simple list: seats change color when you hover, prices only appear after clicking, and availability updates every second as other people buy tickets. While the agent pauses to figure out what to do, the seat disappears, the page refreshes, and it loses its place. Every pause, waiting for pages to load, retrying after errors, handing control back to you, adds friction. What takes a human a few minutes to do turns into a brittle, ten-minute ordeal”

4. The Slow Singularity – Abdullah Al-Rezwan

To understand why the future might be sluggish, the authors first had to decode the past. In a methodological twist that fits the subject matter perfectly, they employed OpenAI’s Deep Research to dig through economic history and construct a dataset of 150 essential tasks over the last century. This analysis revealed a counterintuitive “Zero Productivity Paradox”: switching a task from labor to capital contributes zero to Total Factor Productivity (TFP) growth at the exact moment it happens. This is because firms switch exactly when the costs are equal. The growth comes entirely from what happens after the switch: the task is now performed by a machine that improves exponentially faster than a human.

They estimate that while machine productivity on automated tasks grows at a blistering 5% annually, human task efficiency grows at a meager 0.5% and in some sectors, human efficiency appears to be declining. To prove how vital this dynamic is, they calculated a “frozen” counterfactual: if we had stopped automating new tasks in 1950, but allowed computers to keep getting faster at the things they were already doing, US economic growth would have essentially flatlined for the last 70 years…

…The same logic explains why the AI “singularity” is likely to be a slow burn rather than an explosion. The economy operates on a “weak link” principle. Production requires a chain of complementary tasks; you need high-speed coding, but you also need management, legal compliance, physical logistics etc. Because these tasks are interlinked, the economy is constrained by its slowest components. Even if AI automates cognitive tasks with infinite speed, total output remains bottlenecked by the essential tasks that still require slow-improving human labor.
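A toy model makes the “weak link” point concrete. The sketch below is our reconstruction, not the authors’ actual model: output requires an automated task and a human task in fixed proportions, so even with machine productivity compounding at 5% a year, aggregate growth converges to the human task’s 0.5%:

```python
# Toy "weak link" model (our reconstruction, not the authors'): output is the
# minimum of the two task productivities, so the slower-improving human task
# binds no matter how fast the machine task compounds.

machine_g, human_g = 0.05, 0.005   # annual productivity growth rates cited in the piece
machine = human = 1.0              # both tasks start at the same productivity

for year in range(1, 51):
    machine *= 1 + machine_g
    human *= 1 + human_g
    output = min(machine, human)   # Leontief-style complementarity: the slowest task binds
    if year % 10 == 0:
        print(f"year {year:2d}: machine {machine:6.2f}, human {human:4.2f}, output {output:4.2f}")
```

After 50 years the machine task is more than 11 times as productive, yet output has grown only about 28% – the bottleneck, not the frontier, sets the growth rate.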

5. The Hidden Book Value of Community Banks: Why Call Reports Matter More Than Public Financials – Dirt Cheap Banks

Call Reports exist for safety and soundness, not for investors. They are not designed to be friendly, summarized, or marketed. They are designed to tell regulators whether a bank can survive stress, fund itself, and absorb losses. That is exactly why they are so valuable.

The first thing to understand is structure. When you buy stock in a small community bank, you are almost always buying the holding company, not the bank itself. The holding company often has no real operations. It owns one asset, the bank. It might have a little cash, maybe some legal expenses, sometimes holding company debt, but that is it. The bank owns the loans, the deposits, the securities, the real estate, and the earnings power…

…Public financial statements typically show the holding company only, and often only once a year…

…Call Reports are different. They are filed quarterly by the bank itself. They show the full balance sheet, income statement, and capital position of the operating bank. If the bank earns money and retains it, equity goes up in the Call Report immediately, whether or not a dividend is paid to the parent. If securities move and AOCI changes, you see it. If credit costs rise, you see it. If loan growth accelerates, you see it.

When people ask which book value is the real one, the answer from decades of bank investing is simple. The bank level equity in the Call Report is the economic book value. That is what generates earnings. That is what a buyer would pay for in a sale. That is what regulators protect. The parent level equity is just an accounting wrapper…

…West Shore Bank Corporation $WSSH is a textbook case of how public financials can materially misstate economic reality for small community banks, and why Call Reports create an information advantage…

…At December 31, 2024, the consolidated balance sheet shows:

Total stockholders’ equity of approximately $48.2 million.

This is the number scraped by data aggregators. It is the number displayed on OTC Markets. It is the number most investors implicitly anchor to when thinking about book value.

With a current market capitalization of roughly $45 million, West Shore appears to be trading at or near book value based on these public financials. To a casual observer, the stock looks fairly valued. There is no obvious discount screaming off the page…

…In the Call Report, under Total bank equity capital, the number is dramatically higher.

As of the most recent Call Report dated 9/30/2025, total bank equity capital is approximately $73 million.

This is the capital base regulators use to determine whether the bank is well capitalized. It reflects retained earnings, balance sheet growth, and changes in AOCI on a quarterly basis.

Nothing magical happened between these two documents. There was no recapitalization. No asset sale. No accounting maneuver.

The difference exists because the two statements are answering different questions.

The annual report answers:

What does the holding company’s GAAP equity look like at year end?

The Call Report answers:

How much capital does the operating bank have today?

Those are not the same question, and in small community banks, the answers often diverge significantly over time.

Using the same $45 million market capitalization:

  • Based on public financials, West Shore appears to trade at roughly 0.9x to 1.0x book value
  • Based on Call Report data, West Shore is trading at approximately 0.6x bank-level book value

That is the entire disconnect.
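The disconnect is one line of arithmetic. A quick check using the figures quoted above:

```python
# P/B two ways, using the numbers quoted above (US$ millions).
market_cap = 45.0      # current market capitalization
holdco_equity = 48.2   # consolidated stockholders' equity, 31 Dec 2024
bank_equity = 73.0     # total bank equity capital, 30 Sep 2025 Call Report

print(f"P/B on public financials: {market_cap / holdco_equity:.2f}x")  # ~0.93x
print(f"P/B on Call Report data:  {market_cap / bank_equity:.2f}x")    # ~0.62x
```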


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Gemini) and Amazon. Holdings are subject to change at any time.

What We’re Reading (Week Ending 01 February 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership-audience for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general. 

Here are the articles for the week ending 01 February 2026:

1. Anthropic Lowers Gross Margin Projection as Revenue Skyrockets – Sri Muppidi

Anthropic last month projected it would generate a 40% gross profit margin from selling AI to businesses and application developers in 2025, according to two people with knowledge of its financials. That margin was 10 percentage points lower than its earlier optimistic expectations, though it’s still a big improvement from the year before…

…If Anthropic also counted inference costs for Claude chatbot users that don’t pay for a subscription, its gross margin would be about 38%, or a few percentage points lower than for paid users, based on The Information’s analysis…

…Anthropic has previously projected gross margins above 70% by 2027, and OpenAI has projected gross margins of at least 70% by 2029, which would put them closer to the gross margins of publicly traded software and cloud firms. But both AI developers also spend a tremendous amount on renting servers to develop new models—training costs, which don’t factor into gross margins—making it more difficult to turn a net profit than it is for traditional software firms.

The inference costs are in addition to costs from training the models. Anthropic last month expected its costs for training its AI models for 2025 to be roughly $4.1 billion, up roughly 5% from its summer projections. OpenAI, meanwhile, expected to spend $9.4 billion on compute for training its AI models last year.

2. A business that scales with the value of intelligence – Sarah Friar

We launched ChatGPT as a research preview to understand what would happen if we put frontier intelligence directly in people’s hands…

…As ChatGPT became a tool people rely on every day to get real work done, we followed a simple and enduring principle: our business model should scale with the value intelligence delivers…

…Looking back on the past three years, our ability to serve customers—as measured by revenue—directly tracks available compute. Compute grew 3X year over year, or 9.5X from 2023 to 2025: 0.2 GW in 2023, 0.6 GW in 2024, and ~1.9 GW in 2025. Revenue followed the same curve, growing 3X year over year, or 10X from 2023 to 2025: $2B ARR in 2023, $6B in 2024, and $20B+ in 2025. This is never-before-seen growth at such scale. And we firmly believe that more compute in these periods would have led to faster customer adoption and monetization.
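(The growth multiples quoted above are easy to sanity-check; here’s a minimal Python sketch using only the yearly figures from the excerpt.)

```python
# Yearly compute (GW) and revenue (ARR, $B) figures quoted in the excerpt.
compute_gw = {2023: 0.2, 2024: 0.6, 2025: 1.9}
revenue_bn = {2023: 2, 2024: 6, 2025: 20}

print(f"Compute, 2023 -> 2025: {compute_gw[2025] / compute_gw[2023]:.1f}x")  # 9.5x
print(f"Revenue, 2023 -> 2025: {revenue_bn[2025] / revenue_bn[2023]:.0f}x")  # 10x
```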

3. 50x in 5 Years – Joe Raymond

I discovered Cable Information Systems (CIS) in a page of one-liner descriptions of companies in the OTC edition of Moody’s Manual.

It was trading for a dollar per share…

…Believe it or not, Cable Information Systems had 50,000 subscribers in 1980 which placed them in the top 10 U.S. cable companies.

The company had about 1 million shares outstanding which were inactively trading at a dollar a share in the pink sheets.

At the time, it was said that cable subscribers were going to be worth $1,000 each to an operator of cable services. Thus, it became apparent that Cable Information with 1 million shares outstanding was worth $50 million although it was selling at a market value of only $1 million.

A second way of valuing a cable company was to apply the then-going multiple to cash flow, deduct debt, and divide by outstanding shares. Doing that I also came up with $50 a share.

So, using the two ways of valuing a cable company at the time, I found a $1 stock worth $50.
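(Both valuation methods are one-line arithmetic; here’s a minimal Python sketch. The subscriber method uses the figures from the article, while the cash flow, multiple, and debt inputs for the second method are hypothetical placeholders, since the article doesn’t disclose them.)

```python
# Method 1: per-subscriber value (figures from the article).
subscribers = 50_000
value_per_subscriber = 1_000       # the going rate per cable subscriber at the time
shares_outstanding = 1_000_000
per_share_1 = subscribers * value_per_subscriber / shares_outstanding
print(f"Subscriber method: ${per_share_1:.0f} per share")   # $50

# Method 2: cash-flow multiple, net of debt. The article gives none of these
# inputs, so the numbers below are hypothetical, chosen only to show the
# mechanics; the author also arrived at ~$50 this way.
cash_flow_multiple = 10
annual_cash_flow = 5_500_000
debt = 5_000_000
per_share_2 = (cash_flow_multiple * annual_cash_flow - debt) / shares_outstanding
print(f"Cash-flow method:  ${per_share_2:.0f} per share")   # $50 with these inputs
```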

I asked Peter if he knew of anyone that cared to sell shares, and he told me that some of the employees were shareholders and, from time to time, some of them were interested in selling. I asked him if he would give them my name and number and he said gladly, they’d be happy to know of me.

Over a period of months, some of these employees called and asked if I would buy their shares. I said yes, I am glad to pay the current market price of approximately $1 per share.

Before buying, I told any caller offering shares to me, “Look, I want to make clear to you that I’m buying because I think the shares are worth a heck of a lot more than a dollar and if I were you, I would not be eager to sell.”

As employees, I wanted them to know I felt strongly it was not a good idea to sell. After questioning them and explaining why they should not sell, some people still sold me their shares…

…Late in the year, 1981, Peter telephoned me to tell me that he was selling out at $48 in cash to John Malone, who was the biggest cable operator in the United States.

My first reaction was, “Wow, two dollars short of what we had calculated it was worth.”

But Peter told me that there were two dollars being put into escrow that would probably be paid to shareholders as well, bringing the total consideration to $50…

…Here’s what Larry was looking at in Moody’s Manual back in 1977:

Sales were growing double digits and accelerating. Margins were expanding.

The stock traded between $0.38 and $1.00 in 1977. The normalized P/E ratio was 1x on the low end and 3x on the high end.

There was some debt, as was common with fast growing cable companies at the time. The EV/EBITDA at $1 per share was 5x.

4. My Interview With Andy Jassy: OpenAI, Trump, Power and the Future of AWS – Jessica E. Lessin and Andy Jassy

Andy Jassy: I think that we’re excited about agentic commerce. I think that it has the chance to make it easier for customers to find what they want. If you know what you want, it’s pretty hard to find a better experience than popping onto Amazon and searching and finding it.

But the one place still where physical retail has some advantages, in my opinion, is the ability to go in, not know what you want, ask questions, refine those questions, have somebody point you to different things. And I think agents are going to help customers with that type of discovery. And it’s part of why we’ve invested so much in Rufus, which is our shopping assistant, which has really gotten quite good.

And I think that over time that we will work with other third-party agents as well. I think today the experience hasn’t been great yet. You know, I think that a lot of these third-party agents, they don’t have your buying history, they don’t have what you like, a lot of the information about pricing and the product is off.

But over time, I do believe that will get better. I also think there needs to be the right value exchange between the agents and between the retailers themselves, but I am optimistic that those will work out. We’re having conversations with lots of people and I’m very bullish on agentic commerce…

…Jassy: As you know, the chips are such an important part of the performance and the cost structure for people running technology infrastructure. We learned in the CPU side of the business, we had this deep relationship with Intel, which we still do. But when you have a significant leader, it’s not always their priority to take price performance down for customers.

And one thing we learn about customers over and over and over again is they want better price performance. And so we built Graviton, our own custom CPU silicon, which is about 40% more price performance than the leading other x86 processors. And that has been really great for our customers and business.

And about 90% of our top 1,000 customers now use Graviton in a very significant way. And we just saw this same movie happening in the AI space. And we have a very deep partnership with Nvidia, and we will for as long as I can foresee. But customers badly want better price performance. And so that’s why we built Trainium.

Our Trainium2 chip has been fully subscribed. Anthropic runs hundreds of thousands of Trainium2 chips as they’re training their next model of Claude on top of it. It’s a multi-billion dollar business. And we just released Trainium3 which is our next version of chip, which is 40% more price performant than Trainium2.

And Trainium2 was about 30% to 40% more price performant than the other leading GPUs out there. If you want to allow customers to be able to use AI as expansively as they want, you must take the cost of inference down. And the chip is a big piece of it…

…[Jassy:] I think we’re just in this stage right now where there is so much demand. And, you know, we’re not at this point, we’re not just trying to guess whether there’s demand. We have so much demand. I think the industry would tell you as a whole, there is still not enough capacity, even though it’s gotten better than it was 18 months ago, we could still be growing faster if we had more capacity…

…[Jassy:] We’re in this really interesting stage of AI adoption, in my opinion. It’s very bar-belled.

You have a lot of use by the AI labs who are consuming gobs and gobs of compute right now, and maybe a runaway app or two like ChatGPT. Then the other side of the barbell are enterprises who are really using AI for cost avoidance or productivity. Customer service, business process automation, things like that.

But the middle of that barbell are all the enterprise workloads in production that are not using inference yet. That will. We’re still at this relatively early stage. I believe that the middle part of the barbell is going to be the largest absolute segment. And I think when enterprises get to deploying their production apps using inference and AI, they’re going to want those applications to run close to the rest of their other applications and where their data is.

And the largest amount of that data, by a fair bit, resides in AWS. And so we’re making it easier and easier for customers to be able to run their core workloads with their AI workloads.

5. An Early Buffett Partnership Investment – Joe Raymond

The first investment Buffett disclosed in his partnership letters was Commonwealth Trust in 1958…

…Buffett started buying Commonwealth at $50 and thought it was worth $125…

…Warren was paying 5x earnings and 80% of book value. Seems like a good deal for a bank earning 20% on equity.

The second is the nature of the bank.

Commonwealth Trust had $50 million of deposits and only $20 million of loans, most of which were residential mortgages. It also had $21 million of government securities.

The asset mix appeared highly conservative, at least from a credit perspective.

While the assets looked solid, there was little equity in the business ($2 million of equity on $53 million of assets). You don’t see this sort of leverage today, but it was common practice amongst small thrifts in the ’50s…

…A sharp increase in reserves, coinciding with rising interest rates, caused a big hit in 1954. This was magnified by the fact that Commonwealth’s equity was only 4% of assets going into the year. Book value per share fell 34%.

By the time Buffett was buying in 1957, interest rates were moderating, reserves were healthy, and earnings and equity were about to resume their growth…

…Warren didn’t hold long.

He sold his shares for $80 apiece about a year after buying them.

This was a 25% premium over the prevailing market price at the time and represented a profit of 57% for the partnerships…

…Buffett said the buyer at $80 could expect to do well over time and that he was selling to recycle the proceeds into a better opportunity (Sanborn Map)…

…About a year after Buffett sold, Commonwealth Trust merged with Hudson County National Bank (HCNB). It was a share-for-share deal, and the combined bank kept the Hudson County name…

…Over the next eight years, HCNB grew its book value from $135 to $183 per share (4% CAGR) and paid $57 per share of dividends. The average stock price in 1968 was $228 (1.25x book value).

So, the buyer from Buffett at $80 in 1958 had $228 by 1968 plus $58 of dividends.

Including dividends, the total annual return was in the mid-teens…
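(A quick check of that “mid-teens” figure in Python. Lumping the dividends in at the end, as this sketch does, slightly understates the true return, since the dividends actually arrived throughout the decade.)

```python
# Buyer pays $80 in 1958; by 1968 the stock is worth $228 and ~$58 of
# cumulative dividends have been received. Treating the dividends as if
# paid at the end gives a conservative approximation of the annual return.
start_price, end_price, dividends, years = 80, 228, 58, 10
cagr = ((end_price + dividends) / start_price) ** (1 / years) - 1
print(f"Approximate annual return: {cagr:.1%}")  # ~13.6%, i.e. mid-teens
```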

…This is a good example of successful value investing.

Corporate performance was mediocre, but big follies were avoided. Equity grew slowly and dividends were paid.

A cheap entry price and average exit price produced a mid-teens IRR over more than a decade.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Amazon (parent of AWS). Holdings are subject to change at any time.

What We’re Reading (Week Ending 18 January 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 18 January 2026:

1. “The Compute Theory of Everything” – Abdullah Al-Rezwan

Albanie referred to two seminal essays by Hans Moravec: “The Role of Raw Power in Intelligence” (1976) and “When will computer hardware match the human brain?” (1998).

I glanced through the first essay, but read the second one. I was moved just by reading the abstract of the paper:

“This paper describes how the performance of AI machines tends to improve at the same pace that AI researchers get access to faster hardware. The processing power and memory capacity necessary to match general intellectual performance of the human brain are estimated. Based on extrapolation of past trends and on examination of technologies under development, it is predicted that the required hardware will be available in cheap machines in the 2020s.”…

…Despite acknowledging valid reasons to harbor skepticism, Moravec relied on his simple observations on computing:

“Computers doubled in capacity every two years after the war, a pace that became an industry given: companies that wished to grow sought to exceed it, companies that failed to keep up lost business. In the 1980s the doubling time contracted to 18 months, and computer performance in the late 1990s seems to be doubling every 12 months…

…At the present rate, computers suitable for humanlike robots will appear in the 2020s. Can the pace be sustained for another three decades? The graph shows no sign of abatement. If anything, it hints that further contractions in time scale are in store. But, one often encounters thoughtful articles by knowledgeable people in the semiconductor industry giving detailed reasons why the decades of phenomenal growth must soon come to an end.”

2. Venezuelan Historical Primer: Friend, Foe, Vassal – Collapse Intelligence Agency

Before the US Shale Revolution (Fracking) in ~2010, the consensus view among energy majors was that US domestic light sweet oil was dying.

It was thought the world had burned all the easy, high-quality oil. Future reserves were geographically concentrated in the Middle East or were “Trash Grade” (Canadian Bitumen, Venezuelan Extra-Heavy, Mexican Maya).

US Refiners (Valero, Chevron, LyondellBasell) decided that to stay profitable, they had to spend billions upgrading their facilities to process the “Trash Grade” oil that nobody else wanted. They built massive Delayed Cokers and Hydrocrackers.

By building machines that could eat $10/barrel sludge and turn it into $50/barrel gasoline, they guaranteed massive margins that simple refineries in Europe couldn’t touch…

…Meanwhile, Gulf Coast refiners weren’t building just for Venezuela; they were building for the neighborhood.

In the 90s, Mexico’s massive Cantarell Field was pumping huge volumes of “Maya” crude (heavy/sour).

Venezuela had Orinoco (extra heavy/sour).

The logic was that the Gulf of Mexico basin was destined to be the global hub for processing heavy oil. Refiners poured tens of billions of dollars into capital expenditures (CapEx) to optimize specifically for this metallurgic sludge…

…When fracking exploded in 2010, the US flooded the market with Light Sweet Crude (LTO).

The US refiners looked at all this light oil and realized, “We can’t use it efficiently.”

If you put Light Oil into a refinery built for Heavy Sludge, you run the equipment inefficiently. You under-utilize the coker units (billions in wasted sunk costs).

The US exports its own high-quality light oil to Asia/Europe (who have simple refineries) and must import heavy oil to satisfy the diet of the Gulf Coast processing complex.

The capacity exists because Venezuela effectively paid to build it (via Citgo) and US executives in the 90s bet the house that heavy oil was the only game in town. Formerly permissive national economic policies supercharged the technological development.

The recent US military operation isn’t just about seizing new resources; it’s about feeding a starving industrial monster that was specifically designed to eat only what Venezuela produces. And that industrial monster must feed the US economy because now the shale party is about to end. The US administration knows this. They have made a 100% rational decision to force a bloody showdown with Venezuela to fund US energy needs.

3. The AI revolution is here. Will the economy survive the transition? – Michael Burry, Dwarkesh Patel, Patrick McKenzie, and Jack Clark

Jack: Yes, something we say often to policymakers at Anthropic is “This is the worst it will ever be!” and it’s really hard to convey to them just how important that ends up being. The other thing which is unintuitive is how quickly capabilities improve—one current example is how many people are currently playing with Opus 4.5 in Claude Code and saying some variation of “Wow, this stuff is so much better than it was before.” If you last played with LLMs in November, you’re now wildly miscalibrated about the frontier…

…Dwarkesh: The million-dollar question is whether the METR productivity study (which showed that developers working in codebases they understood well were roughly 20% slower at merging pull requests when using coding tools) or human-equivalent time horizons on self-contained coding tasks (which are already in the many-hours range and doubling every four to seven months) is a better measure of how much speedup researchers and engineers at labs are actually getting. I don’t have direct experience here, but I’d guess it’s closer to the former, given that there isn’t a great feedback verification loop and the criteria are open-ended (maintainability, taste, etc.).

Jack: Agreed, this is a crucial question—and the data is conflicting and sparse. For example, we did a survey of developers at Anthropic and saw a self-reported 50% productivity boost from the 60% of those surveyed who used Claude in their work. But then things like the METR study would seem to contradict that. We need better data and, specifically, instrumentation for developers inside and outside the AI labs to see what is going on. To zoom out a bit, the massive and unprecedented uptake of coding tools does suggest people are seeing some major subjective benefit from using them—it would be very unintuitive if an increasing percentage of developers were enthusiastically making themselves less productive…

…Michael: Do you think the podium will keep rotating? From what I’m hearing, Google is winning among developers from both AWS and Microsoft. And it seems the “search inertia” has been purged at the company.

Dwarkesh: Interesting. Seems more competitive than ever to me. The Twitter vibes are great for both Opus 4.5 and Gemini 3.5 Pro. No opinion on which company will win, but it definitely doesn’t seem settled.

Jack: Seems more competitive than ever to me, also!…

…Jack: Coding has a nice property of being relatively “closed loop”—you use an LLM to generate or tweak code, which you then validate and push into production. It really took the arrival of a broader set of tools for LLMs to take on this “closed loop” property in domains outside of coding—for instance, the creation of web search capabilities and the arrival of stuff like Model Context Protocol (MCP) connectivity has allowed LLMs to massively expand their “closed loop” utility beyond coding.

As an example, I’ve been doing research on the cost curves of various things recently (e.g. dollars of mass to orbit, or dollars per watt from solar), and it’s the kind of thing you could research with LLMs prior to these tools, but it had immense amounts of friction and forced you to go back and forth between the LLM and everything else. Now that friction has been taken away, you’re seeing greater uptake. Therefore, I expect we’re about to see what happened to coders happen to knowledge workers more broadly—and this feels like it should show up in a diffuse but broad way across areas like science research, the law, academia, consultancy, and other domains.

Michael: At the end of the day, AI has to be purchased by someone. Someone out there pays for a good or service. That is GDP. And that spending grows at GDP rates, 2% to 4%—with perhaps some uplift for companies with pricing power, which doesn’t seem likely in the future of AI.

Economies don’t have magically expanding pies. They have arithmetically constrained pies. Nothing fancy. The entire software pie—SaaS software running all kinds of corporate and creative functions—is less than $1 trillion. This is why I keep coming back to the infrastructure-to-application ratio—Nvidia selling $400 billion of chips for less than $100 billion in end-user AI product revenue.

AI has to grow productivity and create new categories of spending that don’t cannibalize other categories. This is all very hard to do. Will AI grow productivity enough? That is debatable. The capital expenditure spending cycle is faith-based and FOMO-based. No one is pointing to numbers that work. Yet.

There is a much simpler narrative out there that AI will make everything so much better that spending will explode. It is more likely to take spending in. If AI replaces a $500 seat license with a $50 one, that is great for productivity but is deflationary for productivity spend. And that productivity gained is likely to be shared by all competitors…

…Michael: At some point, this spending on the AI buildout has to have a return on investment higher than the cost of that investment, or there is just no economic value added. If a company is bigger because it borrowed a lot more or spent all its cash flow on something low-return, that is not an attractive quality to an investor, and the multiple will fall. There are many non-tech companies printing cash with no real prospects for growth beyond buying it, and they trade at about 8x earnings…

…Michael: Well, value accrues, historically, in all industries, to those with a durable competitive advantage manifesting as either pricing power or an untouchable cost or distribution advantage.

It is not clear that the spending here will lead to that.

Warren Buffett owned a department store in the late 1960s. When the department store across the street put an escalator in, he had to, too. In the end, neither benefited from that expensive project. No durable margin improvement or cost improvement, and both were in the same exact spot. That is how most AI implementation will play out.

This is why trillions of dollars of spending with no clear path to utilization by the real economy is so concerning. Most will not benefit, because their competitors will benefit to the same extent, and neither will have a competitive advantage because of it.

I think the market is most wrong about the two poster children for AI: Nvidia and Palantir. These are two of the luckiest companies. They adapted well, but they are lucky because when this all started, neither had designed a product for AI. But they are getting used as such.

Nvidia’s advantage is not durable. SLMs and ASICs are the future for most use cases in AI. They will be backward-compatible with CUDA [Nvidia’s parallel computing platform and programming model] if at all necessary. Nvidia is the power-hungry, dirty solution holding the fort until the competition comes in with a completely different approach…

…Jack: The main thing I worry about is whether people succeed at “building AI that builds AI”—fully closing the loop on AI R&D (sometimes called recursively self-improving AI). To be clear, I assign essentially zero likelihood to there being recursively self-improving AI systems on the planet in January 2026, but we do see extremely early signs of AI getting better at doing components of AI research, ranging from kernel development to autonomously fine-tuning open-weight models…

…Michael: If I had the ear of senior policymakers, I would ask them to take a trillion dollars (since trillions just get thrown around like millions now) and bypass all the protests and regulations and dot the whole country with small nuclear reactors, while also building a brand-new, state-of-the-art grid for everyone. Do this as soon as possible and secure it all from attack with the latest physical and cybersecurity; maybe even create a special Nuclear Defense Force that protects each facility, funded federally.

This is the only hope of getting enough power to keep up with China, and it is the only hope we have as a country to grow enough to ultimately pay off our debt and guarantee long-term security, by not letting power be a limiting factor on our innovation.

4. Is Venezuela’s Oil Worth the Hassle? – Tomas Pueyo

This depends on how much oil can be extracted from Venezuela. Today, it’s ~1.1M barrels per day.

A barrel of oil is currently worth about $60.

But Venezuela’s oil is worse quality than most, so it sells for cheaper, ~$8 less as of today, or $52…

…But how much does it cost to extract a barrel of Orinoco oil and transport it and treat it to be sellable?

So of these $52, about $23 are hard costs, and each barrel yields around $29 in profit…
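(Putting the per-barrel economics together with current output gives a rough sense of the annual prize. A back-of-envelope Python sketch, assuming output and margins stay flat:)

```python
# Per-barrel economics and output quoted above.
barrels_per_day = 1.1e6        # current Venezuelan output
price_per_barrel = 52          # ~$60 benchmark minus the ~$8 quality discount
hard_costs_per_barrel = 23     # extraction, dilution, transport, treatment
profit_per_barrel = price_per_barrel - hard_costs_per_barrel  # ~$29

annual_profit = barrels_per_day * profit_per_barrel * 365
print(f"~${annual_profit / 1e9:.1f}B of profit per year at current output")  # ~$11.6B
```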

…The oil [in the Orinoco Valley] is extremely dense (heavier than water), extremely viscous (like pitch or molasses) and extremely dirty (over 5% sulfur and masses of metals like vanadium). The only deposit like this elsewhere in the world is Canada’s Athabasca oil sands.

To extract the oil, you have to first pump large amounts of steam into the formation, to melt the hydrocarbons, then use electrical pumps at the surface or in the bottom of the well, up to a kilometer deep, to lift it to the surface. Once there, the “oil” is far too viscous to transport by pipeline or ship, and far too heavy and dirty for most refineries to tackle. So it is diluted by mixing with a much lighter crude oil, or the “condensate” liquids from a gas field, or refined naphtha (a solvent which you can buy as “white spirit” in UK DIY stores). The resulting diluted crude oil (DCO) is exported as Merey blend. This is still one of the heaviest, dirtiest crude oils in the world (16 API, 3.5% sulfur, high acidity and metals content), but it flows just well enough to be transported if kept warm, and some of the world’s more complex refineries can handle it, and make transport fuels from it, although usually alongside other lighter crudes…

…The two best estimates suggest it would take tens of billions to maintain the existing infrastructure, and tens of billions more to go beyond that.

5. A Few Things I’m Pretty Sure About – Morgan Housel

I think the majority of society’s problems are downstream of housing affordability. The median age of first-time homebuyers went from 29 in 1981 to 40 today. But the shock this causes is so much deeper than housing. When young people are shut out of the life-defining step of having their own place, they’re less likely to get married, less likely to have kids, have worse mental health, and – my theory – more likely to have extreme political views, because when you don’t feel financially invested in your community you’re less likely to care about the consequences of bad policy…

…There’s a long history of Americans cycling through how they feel about government and how politicians treat each other.

The 1930s were unbelievably vicious. There was a well organized plot to overthrow Franklin Roosevelt and replace him with a Marine general named Smedley Butler, who would effectively become dictator. The Great Depression made Americans lose so much faith in government that the prevailing view was, “hey, might as well give this a shot.”

It would have sounded preposterous if someone told you in the 1930s that by the 1950s more than 70% of Americans said they trusted the government to do the right thing almost all the time. But that’s what happened.

And it would have sounded preposterous in the 1950s if you told Americans within 20 years trust would collapse amid the Vietnam War and Watergate.

It would have sounded preposterous if you told Americans in the 1970s that within 20 years trust and faith in government would have surged amid 1990s prosperity and balanced budgets.

And equally absurd if you told Americans in the 1990s that we’d be where we are today.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google), Amazon (parent of AWS), and Microsoft. Holdings are subject to change at any time.

What We’re Reading (Week Ending 11 January 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 11 January 2026:

1. Ten things about Venezuela: on oil, geopolitics and drugs – Michael Cembalest

Venezuela is not a large part of the global oil production picture, at least not right now. The impact on global oil markets from the US invasion/arrest of Maduro should be minor…

…The US is still highly reliant on petroleum: it accounts for 90% of transport energy consumption (the remainder is mostly natural gas and biomass) and ~33% of industrial production (mostly high-temperature heat and industrial feedstocks). The amounts of oil used for residential and commercial heating are pretty negligible, at less than 10% of the respective totals…

…The oil intensity of GDP is gradually declining in most of the world.  At some point, this ratio may drop low enough that disruptions in oil supplies will be less of an issue for growth and consumer spending…

…While US oil production is tilted towards light oil, US refining capacity is more evenly split among light, medium, and heavier grades. Note how Venezuela’s normalized heavy and medium oil production aligns better with US refining gaps…

…Venezuela also possesses largely untapped reserves of critical minerals like coltan (niobium-tantalum), rare earth elements (REEs), nickel, gold, bauxite, and iron ore. The Orinoco Mining Arc, which spans 111,843 sq km, contains documented deposits of coltan (tantalum ore), cassiterite (tin ore), rare earth elements, bauxite, gold, and lithium. Coltan is used for manufacturing tantalum capacitors used in advanced electronic systems, including military communications equipment, missile guidance computers, and radar systems. Rare earth elements enable permanent magnets required for precision-guided munitions, aircraft actuators, and electromagnetic systems. Cassiterite provides tin for solder in electronics assembly, including defense systems, while bauxite feeds aluminum production for aerospace applications…

…Iran and Venezuela have exchanged oil, gold and infrastructure assistance using Iran’s Islamic Revolutionary Guard Corps and Hezbollah-linked front companies for money laundering and sanctions evasion…

…Over 120 Russian troops reportedly operate in Venezuela and lead the “Equator Task Force.” Russian advisers provide training across multiple domains including infantry, drone operations, special forces, military intelligence, signals intelligence, armor, aircraft, artillery, and domestic surveillance.

China has extensive ties with Venezuela; note the disproportionate amount of Chinese loans to Venezuela vs other Latin American countries (most of these loans were originated over a decade ago).  China’s military connections with Venezuela involve arms sales (missiles, jets, naval vessels), defense cooperation and strategic support; it’s not clear what the benefit has been for Venezuela, at least based on last week.

2. Steam, Steel, and Infinite Minds – Ivan Zhao

My co-founder Simon is what we call a 10× programmer, but he rarely writes code these days. Walk by his desk and you’ll see him orchestrating three or four AI coding agents at once, and they don’t just type faster, they think, which together makes him a 30-40× engineer. He queues tasks before lunch or bed, letting them work while he’s away. He’s become a manager of infinite minds…

…With AI agents, someone like Simon has graduated from riding a bicycle to driving a car.

When will other types of knowledge workers get cars? Two problems must be solved.

First, context fragmentation. For coding, tools and context tend to live in one place: the IDE, the repo, the terminal. But general knowledge work is scattered across dozens of tools. Imagine an AI agent trying to draft a product brief: it needs to pull from Slack threads, a strategy doc, last quarter’s metrics in a dashboard, and institutional memory that lives only in someone’s head. Today, humans are the glue, stitching all that together with copy-paste and switching between browser tabs. Until that context is consolidated, agents will stay stuck in narrow use-cases.

The second missing ingredient is verifiability. Code has a magical property: you can verify it with tests and errors. Model makers use this to train AI to get better at coding (e.g. reinforcement learning). But how do you verify if a project is managed well, or if a strategy memo is any good? We haven’t yet found ways to improve models for general knowledge work. So humans still need to be in the loop to supervise, guide, and show what good looks like…

…Before steel, buildings in the 19th century had a limit of six or seven floors. Iron was strong but brittle and heavy; add more floors, and the structure collapsed under its own weight. Steel changed everything. It’s strong yet malleable. Frames could be lighter, walls thinner, and suddenly buildings could rise dozens of stories. New kinds of buildings became possible.

AI is steel for organizations. It has the potential to maintain context across workflows and surface decisions when needed without the noise. Human communication no longer has to be the load-bearing wall. The weekly two-hour alignment meeting becomes a five-minute async review. The executive decision that required three levels of approval might soon happen in minutes. Companies can scale, truly scale, without the degradation we’ve accepted as inevitable…

… At the beginning of the Industrial Revolution, early textile factories sat next to rivers and streams and were powered by waterwheels. When the steam engine arrived, factory owners initially swapped waterwheels for steam engines and kept everything else the same. Productivity gains were modest.

The real breakthrough came when factory owners realized they could decouple from water entirely. They built larger mills closer to workers, ports, and raw materials, and they redesigned their factories around steam engines. (Later, when electricity came online, owners decentralized further, replacing the central power shaft with smaller engines placed around the factory for different machines.) Productivity exploded, and the Second Industrial Revolution really took off.

We’re still in the “swap out the waterwheel” phase. AI chatbots bolted onto existing tools. We haven’t reimagined what organizations look like when the old constraints dissolve and your company can run on infinite minds that work while you sleep.

3. Our Approach to the Future – Hirotaka Shimizu

Venture companies seeking to go public typically expand by increasing sales through their hard-earned business models. Once sales exceed the break-even point, they begin to generate profits. During this process, they develop the organizational structures, governance frameworks, and compliance systems required of listed companies, steadily advancing toward an IPO. Only a limited number of these companies, under favorable conditions, ultimately succeed in going public.

Yet many of those that do achieve an IPO, often after significant struggles and setbacks, find their growth peak around the time of listing. According to the Ministry of Economy, Trade and Industry’s March 2024 report, “Research on How Startups Can Continue to Grow after Listing,” market capitalization growth typically peaks in the first year after listing and then declines uniformly from the second year onward. In fact, although the Tokyo Stock Exchange (TSE) Growth Market is intended to function as a gateway to higher-tier markets, only about one quarter of listed companies successfully make such a transition. Most are unable to achieve their anticipated growth trajectory and remain on the Growth Market. This is why the public sometimes jokes that the IPO is “the final goal” of venture companies. To address this issue, the TSE reportedly plans to revise its continued listing criteria for the Growth Market by requiring companies listed for five years or more to have a market capitalization of at least ¥10 billion, thereby encouraging stronger post-listing growth.

Now then, why does growth come to a halt? There must be a reason. In my view, many venture owners concentrate too much of their attention and energy on their hard-earned business models. Yet all business models, even highly unique ones, have a shelf life. Every business inevitably moves from a growth phase to a maturity phase, and eventually to a decline phase. Companies that push aggressively during the growth phase and succeed in going public often discover that the differentiation they once created has diminished by the time they reach maturity. Their once-unique business models are imitated by competitors, or they unavoidably face intensified competition from companies with adjacent business models. As a result, they find themselves in a red ocean. In addition, the company growth cycle itself is shortening as information and technology continue to advance. This trend is particularly evident among venture companies in B2B marketing and technology domains. Once they face such situations, developing a new business model becomes increasingly difficult. Furthermore, listed companies are required to disclose financial information on a quarterly basis. In my view, this requirement can also discourage new investment, given the potential impact on share prices. I suspect that the current framework functions as a kind of “trap” into which many companies that manage to go public eventually fall.

To avoid this outcome, companies must continually conceive and pursue new business models while their existing models are still in a growth phase. However, most business managers fail to direct their attention to this imperative. In my view, this is because they lack long-term, ambitious goals. If managers were to set long-term goals, they would recognize that such goals cannot be achieved through a single business model and would therefore feel a natural imperative to develop the next one. Companies should, in my opinion, pursue growth driven by long-term goals, such as missions, visions, principles, aspirations, and ambitions, rather than relying on business models. I believe that the continued pursuit of these goals ultimately enables sustained corporate growth.

4. Peace and prosperity in Venezuela will come from democracy, not oil – Ricardo Hausmann

But then, concern: just hours after the raid President Donald Trump declared that he would now “run” Venezuela. He talked much about oil but not at all about democracy other than to dismiss María Corina Machado, Nobel peace laureate and leader of the democratic opposition…

…Instead, Mr Trump made clear, America will work with the dictator’s own vice-president. He spoke as if he owned the country and its assets. Venezuelans will be recipients of his benevolence, not agents of their destiny.

Removing a dictator—especially if leaving his henchmen and -women in charge—is not the same as rebuilding a country. And there is much to rebuild. When Mr Maduro came to power in 2013, Venezuelans were four times richer than they are today. A disaster followed: the largest economic contraction ever recorded in peacetime, triggering the departure of 8m Venezuelans. Brutality, repression and corruption accompanied the catastrophe.

At its heart was a systematic dismantling of rights: property rights, independent courts and free elections. Speaking out became a crime. As rights vanished, so did security, investment, trust and the power to imagine. People stopped planning for the future because the future no longer belonged to them.

The lesson is simple: prosperity does not come from oil, decrees or even benevolent rulers, but from rights. Rights create private property and security. They allow people to invest, innovate and dream. Restore rights, and society can recover.

Venezuelans now need neither revenge nor Trumpian improvisation, but a return to freedom and peace. The technology for that has already been invented: democracy, which is not just about voting but is a system for aggregating preferences while protecting liberties. Democracy aligns political authority with social consent and is the formula for sustained prosperity. Venezuela enjoyed it for much of the latter part of the 20th century. 

5. Trump’s Enormous C-Length Win over China – Collapse Intelligence Agency

When we talk about “Oil,” we are using a lazy bucket term. In reality, a barrel of oil is a soup of thousands of different molecules. Each geographic barrel is a unique fingerprint.

“C-Length” refers to the number of Carbon atoms chained together in a single molecule.

This is the fundamental biophysics of the economy. The length of the carbon chain determines State of Matter (Gas vs. Liquid vs. Solid) and Energy Density (how much work it can do).

Short Chains (C1–C4): Gases. They float away.

Medium Chains (C5–C12): Thin Liquids (Gasoline). They evaporate quickly.

Long Chains (C13–C20): Oily Liquids (Diesel/Jet). The “Goldilocks” zone for heavy work.

Very Long Chains (C50+): Solids (Asphalt).

The US/Venezuela/China trade war is essentially a fight over C20+ chains…

…To run a modern economy, you need a specific ratio of products: roughly 40% Gasoline, 30% Diesel, 10% Jet, 20% Industrial/Asphalt. This matches the general demand pattern of the economy.

But nature never gives you that exact ratio in the ground.

Scenario A: Refining Light Oil (US Shale – Mostly C5-C10)

You have too much Gasoline/Naphtha.

To make Diesel (C16), you have to chemically glue molecules together.

Biophysics: It is energetically difficult and expensive to “Oligomerize” (fuse) small chains into big ones. You cannot efficiently run an industrial economy on shale oil alone because you can’t make enough Diesel/Jet fuel without massive waste.

Scenario B: Refining Heavy Oil (Venezuelan Orinoco – Rich in C20-C100)

You have huge long chains.

The Coker: You heat them up and “Chop” them. A C50 chain can be snapped into three C16 chains (Diesel).

Biophysics: It is thermodynamically efficient to “Crack” (break) a long chain into specific smaller pieces. This is why US Coking Refineries are the “Golden Key.” They take the cheapest feedstock (C50+ sludge) and turn it into the most valuable product (C16 Diesel)…
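(A toy carbon balance makes the asymmetry concrete: chopping a long chain into diesel-range pieces is simple division, while gluing short chains together takes several fusion steps per diesel molecule. The Python sketch below is illustrative bookkeeping only, not real refinery chemistry.)

```python
# Toy carbon accounting for reaching diesel-range molecules (~C16).
DIESEL_C = 16

def crack(chain_length: int) -> tuple[int, int]:
    """Chop one long chain into diesel-range pieces; return (pieces, leftover carbons)."""
    return chain_length // DIESEL_C, chain_length % DIESEL_C

def fusions_needed(chain_length: int) -> int:
    """How many short chains must be glued together to reach one diesel-range molecule."""
    return -(-DIESEL_C // chain_length)  # ceiling division

pieces, leftover = crack(50)
print(f"Cracking one C50: {pieces} diesel molecules + C{leftover} light ends")   # 3 + C2
print(f"Gluing C5 shale fractions: {fusions_needed(5)} molecules per diesel molecule")  # 4
```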

…The US possesses the “Holy Grail” of refining: Single-Site Deep Conversion.

US Advantage: A barrel of Orinoco sludge enters a Texas refinery and leaves as 80% High-Value Diesel/Jet and 20% solid Petcoke. It is processed in one location, efficiently.

Russian Flaw – The Mazut Glut: Russia cannot fully refine its own heavy barrels. Its refineries lack the depth of US “Coking” capacity.

Russia is forced to export massive volumes of Mazut (M-100)—a cheap, low-value heavy fuel oil—because they can’t crack it into diesel domestically. They have to ship this half-refined trash to buyers who can finish the job.

China: “Teapot” refineries in Shandong have effectively become the “Trash Cans” for the Eastern Bloc. They import Russian Mazut and Venezuelan Bitumen blend to crack it into diesel and asphalt.

The Eastern Bloc relies on shipping half-refined residue between countries to achieve what Texas does inside a single fence line. That creates a massive Thermodynamic Friction (shipping fuel oil is heavy and dirty) that the US avoids…

…Iranian Oil (Soroosh/Nowruz) and Russian Mazut: Heavy, but optimized for fuels (Energy).

Venezuelan Oil (Merey 16) and Canadian Tar Sands: The global gold standard for high-yield Bitumen (Asphalt).

China consumes massive amounts of asphalt for its ceaseless road/infrastructure construction. Losing Venezuelan supply implies a structural shortage of road-paving material.

With Venezuela (Orinoco) gone to the US, and Canada (Tar Sands) logically aligned with the US (despite mercantile friction), China has only one source left for heavy, complex oil: Iran.

The Bottleneck: This forces China into a single-point dependency. If the US/Israel acts against Iranian export terminals (Kharg Island), the Eastern Bloc has minimal access to the heavy oil required for their specific refinery configurations.

Russia can’t help: Russia produces “Urals” (Medium Sour); its true heavy oils are limited in production and export.

Canada, via the TMX pipeline, supplies China about 200,000 bpd of crude, and that volume is already spoken for. TMX’s total capacity is 800,000–900,000 bpd, and the pipeline is MAXED out; other consumers have contractual claims on the remaining schedules, so China can’t get any more.

You can’t pave a road with Iranian Soroosh/Russian Heavy efficiently; you get less asphalt and more waste…

…By seizing Venezuelan Orinoco heavy oil, the US also effectively secures the highest-value feedstock for its specialized machine, forcing China to run its “Teapot” refineries on inferior or politically volatile alternatives. This heavy oil sludge can be more easily cracked into lower forms as needed for desired usage.

Heavy oils give the US optionality in refining. It is more efficient to “chop” than it is to “glue.”

The US will very likely install governance and corporate structures that are subservient to its national needs. It can begin to squeeze the Eastern Bloc slowly by reducing exports of Merey 16. Or it can simply increase prices. China was able to buy this sanctioned oil at a discount.

Now the US controls this oil supply. Its categorization is “Clean.” So China pays fair market prices to continue its infrastructure construction.

The same way that China uses REE controls.

We can estimate that China currently relies on Venezuelan bitumen for roughly 50% of its asphalt production needs.

Depending on the mood of the US administration, this is about to get very expensive or outright disappear from China’s procurement.

Whether by design or coincidence, the US now has a very real wartime advantage against China.

It’s likely the US does not recognize this fully. They just wanted China OUT.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any companies mentioned. Holdings are subject to change at any time.

Company Notes Series (#12): Descartes Systems Group

Editor’s note: This is the latest edition in the “Company Notes Series”, where we periodically share our notes on companies we’ve studied in the recent past but currently have no vested interest in (we may invest in or sell shares in the companies mentioned at any time). The notes are raw and not updated, and the “as of” date for the data is given at the start of the notes. The first 11 editions in the series can be found here, here, here, here, here, here, here, here, here, here, and here. Please share your thoughts on the series through the “Contact Us” page; your feedback will determine if we continue with it. Thanks in advance!

Start of notes for Descartes Systems Group

Data as of 2023-02-21

Background

  • Founded in 1981
  • Listed in 1999, dual listing on NASDAQ (NASDAQ: DSGX) and Toronto Stock Exchange (TSX: DSG)
  • Headquartered in Ontario, Canada
  • Over 1500 employees
  • Reports financials in US$

Business

  • The problem Descartes is trying to solve:
    • “We believe logistics-intensive organizations are seeking to reduce operating costs, differentiate themselves, improve margins, and better serve customers. Global trade and transportation processes are often manual and complex to manage. This is a consequence of the growing number of business partners participating in companies’ global supply chains and a lack of standardized business processes.
    • Additionally, global sourcing, logistics outsourcing, imposition of additional customs and regulatory requirements and the increased rate of change in day-to-day business requirements are adding to the overall complexities that companies face in planning and executing in their supply chains. Whether a shipment is delayed at the border, a customer changes an order or a breakdown occurs on the road, there are increasingly more issues that can significantly impact the execution of fulfillment schedules and associated costs.
    • The rise of e-commerce has heightened these challenges for many suppliers, with end-customers increasingly demanding narrower order-to-fulfillment periods, lower prices and greater flexibility in scheduling and rescheduling deliveries. End customers also want real-time updates on delivery status, adding considerable burden to supply chain management as process efficiency is balanced with affordable service.
    • In this market, the movement and sharing of data between parties involved in the logistics process is equally important to the physical movement of goods. Manual, fragmented and distributed logistics solutions are often proving inadequate to address the needs of operators. Connecting manufacturers and suppliers to carriers on an individual, one-off basis is too costly, complex and risky for organizations dealing with many trading partners. Further, many of these solutions do not provide the flexibility required to efficiently accommodate varied processes for organizations to remain competitive. We believe this presents an opportunity for logistics technology providers to unite this highly fragmented community and help customers improve efficiencies in their operations.”
  • Provides software for logistics and supply chain management business processes; helps customers to streamline their logistics processes and save costs. Customers use Descartes’ software to “route, schedule, track and measure delivery resources; plan, allocate and execute shipments; rate, audit and pay transportation invoices; access and analyze global trade data; research and perform trade tariff and duty calculations; file customs and security documents for imports and exports; and complete numerous other logistics processes by participating in a large, collaborative multi-modal logistics community.” In other words, Descartes helps customers manage their shipments end-to-end, including researching global trade information, booking of shipments, tracking of shipments, regulatory compliance filings, settlement and audit, etc. Descartes offers many software applications that are modular and interoperable.
  • The company has historically lost, over a 1-year period, 4% to 6% of aggregate annualised recurring revenue 
  • Customers include logistics companies (3P logistics providers, freight forwarders, and custom brokers), transportation companies (air/land/ocean), and distribution-intensive companies where logistics is critical in their own product or service offering (direct-to-consumer e-commerce companies for example); these customers include Delta Air, CMA CGM, FedEx, DHL, Home Depot, WayFair, Coca-Cola, Toyota, Fresenius. 
  • Has a mostly SaaS subscription model but also has a few clients on perpetual licenses – worth noting that some of the revenue earned by Descartes from its software is tied to the volume of shipments being processed.
  • Descartes’ tailwinds: Can benefit from the rise of e-commerce and greater demand for logistics
  • Created a Global Logistics Network (GLN) – a state-of-the-art messaging network – of trading partners that customers can use. This GLN is the moat behind Descartes as it is the foundation of the company’s technology platform, managing the real-time flow of data and documents that track and control the movement of inventory, assets, and people. Customers can use the GLN to access and collaborate with a wide range of trading partners.
  • In the first 9 months of 2022, the USA was 63% of total revenue, EMEA 26%, Canada 7%, and Asia Pac 4%

Sales strategy

  • Sales in North America and Europe are through direct sales
  • Use channel partners in APAC, India, LATAM, and Africa. Channel partners include distributors, alliance partners, and value-added resellers
  • Has a “United by Design” alliance with numerous companies so that Descartes’ software is interoperable with many other service providers (this is another moat in my view)

Customer Stats

  • 25,000+ customers worldwide, from 160+ countries
  • On an annual basis, Descartes now tracks >575 million shipments in real time and processes >18.6 billion messages

Growth strategy

  • Acquisitions are a key factor in Descartes’ historical growth (see “Financial Results” below); the acquisition strategy is focused on “complementary technologies, industry consolidation and close adjacencies across logistics”
  • Made 31 acquisitions for a total sum of $1.04 billion since 2014. This is more than its total free cash flow generated, so it funded some acquisitions through secondary public offerings in July 2014 (in FY2015) and June 2019 (in FY2020). But Descartes has recently been building back its cash position; as of October 2022, it has net cash of US$229 million (cash minus capital leases)
  • Likely will accelerate acquisitions again?

Financial Results

  • Fiscal year ends on 31 Jan
  • Revenue compounded at 15.7% per year (FY2010 – FY2022)
  • FCF compounded at 22%
  • FCF margin has grown from 22% to 36%, aided by some working capital (WC) changes. But even excluding WC changes, FCF has compounded at 20.7%
  • Cash conversion is 231% of net income. Cash conversion ratio is super high for two reasons: (1) Net income impacted by amortisation of intangible assets which is not a cash expense but quite significant on the income statement; (2) Some SBC
  • FCF per share has compounded at 16.5% (FY2010 – FY2022) which accounts for the dilution from the two secondary offerings made in the period
  • Company is now net cash positive at US$229M
  • There is some dilution from SBC (stock-based compensation) but it is minimal and well-controlled (weighted average diluted share count up 3.6% from FY2010 to FY2022)

FY2023 Q3 Results

  • New financial year starts on 1 Feb
  • Q3 FY23 revenue was US$121.5 million, up 12% year-on-year but down QoQ due to forex
  • 91% of revenue is service revenue and 8% is professional fees
  • License only makes up 1%
  • CFO was US$50.9 million, up 18%, and 42% of revenue
  • Year-to-date revenue was up 16% and CFO up 8%
  • Has US$237 million in cash and US$350 million of credit facilities (which can be expanded to US$500 million upon lenders’ approval), so there’s ability to leverage up the balance sheet which could be good for shareholders

Management

  • CEO Edward J. Ryan (54). Been CEO since November 2013; was previously Chief Commercial Officer (2011-2013). Joined Descartes in 2000.
  • President and COO J. Scott Pagan (49). Been COO since November 2013. Joined Descartes in 2000.
  • CFO Allan Brett (55). Been CFO since joining Descartes in May 2014.
  • Hard to tell exactly how many shares they own, because the data reported is only the market value of shares held, plus the in-the-money value of unexercised but vested options. Nonetheless, as of 29 April 2022, based on a share price of C$80.14 (the weighted-average price for the 5 days prior to 29 April 2022) – the price is C$101.29 as of 21 Feb 2023 – the value of Descartes shares controlled by Ryan, Pagan, and Brett is US$34.0 million, US$35.0 million, and US$14.0 million, respectively. That is a decent amount of skin in the game.

Compensation of Management

  • Compensation consists of 3 components: (1) Base salary and benefits, (2) Short-term incentives, and (3) Long-term incentives.
  • Base salary for FY2022 was US$500,000 for Ryan, US$350,000 for Pagan, and US$350,000 for Brett.
  • Short-term incentive for FY2022 was capped at US$750,000 for Ryan, US$446,250 for Pagan, and US$367,500 for Brett, and was based on Descartes’ adjusted EBITDA, revenue, and OCF as a % of adjusted EBITDA. Descartes had to meet FY2022 targets of 10% growth in adjusted EBITDA (actual was 31%), 9% growth in revenue (actual was 22%), and OCF as a % of adjusted EBITDA of 80-50% (actual was 95%). All 3 executives were paid the maximum short-term incentive.
  • Long-term incentive for FY2022 consists of:
    • PSU grants which vest at the end of a three-year performance period
    • RSU grants which vest over a period of three fiscal years; and
    • stock options that vest over a period of three fiscal years. 
  • The actual PSUs received by the executives range from 0% to 200% of the granted target PSUs, depending on Descartes’ total shareholder return relative to a Comparator Group over a 3-year period. If Descartes ranks below the 30th percentile, the actual PSUs distributed will be 0%; if Descartes ranks at the 90th percentile or higher, the actual PSUs distributed will be 200% (see the sketch after this list). On the date of the grant, the target PSUs were worth US$2 million, the RSUs were worth US$1.4 million, and the stock options were worth US$0.6 million. The Comparator Group includes Enghouse Systems, Kinaxis, WiseTech Global, Aspen Technology, Ebix, QAD, and more.
  • Sensible compensation structure, since it emphasises long-term stock price returns. Dollar amounts are reasonable (though on the high side), since the total compensation of each executive in FY2022 is still a single-digit percentage of net income and FCF.
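
To make the payout mechanics concrete, here is a small Python sketch of the PSU curve described above. Only the two endpoints come from these notes (below the 30th percentile pays 0%; at or above the 90th pays 200%); the straight-line interpolation between them is an illustrative assumption, not a detail taken from Descartes’ disclosures.

```python
def psu_payout(tsr_percentile: float) -> float:
    """Payout as a multiple of target PSUs, given Descartes' total-
    shareholder-return percentile versus the Comparator Group over the
    3-year period. Endpoints are from the notes; the ramp is assumed."""
    if tsr_percentile < 30:
        return 0.0   # below the 30th percentile: nothing vests
    if tsr_percentile >= 90:
        return 2.0   # at or above the 90th percentile: 200% of target
    # Assumed straight line between the two disclosed endpoints:
    return 2.0 * (tsr_percentile - 30) / (90 - 30)

for pct in (25, 30, 60, 90):
    print(f"{pct}th percentile -> {psu_payout(pct):.0%} of target PSUs")
```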

Valuation

  • US$6.4 billion market cap (as of 21 February 2023) and trailing FCF of US$182 million
  • ~35x price-to-FCF (P/FCF) ratio
  • EV of US$6.1 billion, so an EV-to-FCF ratio of ~34 (arithmetic in the sketch after this list)
  • Doesn’t pay a dividend, nor does it buy back shares, so no cash is being returned to shareholders yet
  • But there appear to be good capital allocators at the helm so far, judging from the growth of the business through acquisitions
  • Pricey, given that all the cash flows need to be reinvested into acquisitions to drive growth
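
The multiples above are simple division; here is a quick sketch using the figures quoted in these notes, as a rough check rather than a valuation model.

```python
# Figures as quoted in the notes (as of 21 February 2023), in US$ millions.
# The EV is taken from the notes (market cap less net cash) rather than
# re-derived here.

market_cap = 6_400.0
enterprise_value = 6_100.0
trailing_fcf = 182.0

print(f"P/FCF:  {market_cap / trailing_fcf:.1f}x")        # ~35x
print(f"EV/FCF: {enterprise_value / trailing_fcf:.1f}x")  # ~34x
```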

Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have no vested interest in any company mentioned. Holdings are subject to change at any time.