What We’re Reading (Week Ending 03 May 2026)

The best articles we’ve read in recent times on a wide range of topics, including investing, business, and the world in general.

We’ve constantly been sharing a list of our recent reads in our weekly emails for The Good Investors.

Do subscribe for our weekly updates through the orange box in the blog (it’s on the side if you’re using a computer, and all the way at the bottom if you’re using mobile) – it’s free!

But since our readership for The Good Investors is wider than our subscriber base, we think sharing the reading list regularly on the blog itself can benefit even more people. The articles we share touch on a wide range of topics, including investing, business, and the world in general.

Here are the articles for the week ending 03 May 2026:

1. Oracle’s Deluge of AI Debt Pushes Wall Street to the Limit – Peter Rudegeair and Berber Jin

Banks including JPMorgan Chase struggled for months to spread the risk of billions of dollars in loans they made to build data centers leased to Oracle in Texas and Wisconsin, people familiar with the matter said. Many financial institutions that would ordinarily buy those loans face restrictions on how much exposure they can have to a single counterparty, and the sheer size of these debt packages pushed them to the limit with Oracle. As a result, bank balance sheets got clogged, constraining the financing prospects of future projects tied to Oracle and OpenAI.

For example, lenders balked at financing the expansion of a data-center complex in Abilene, Texas, if Oracle were the tenant, according to people familiar with the matter. That led the developer, Crusoe, to lease it to Microsoft instead…

…Lenders grew more comfortable with Oracle-related projects after the company said it would raise all the money it needed for 2026 by issuing roughly $50 billion in stock and bonds. Oracle said in a post on X last week that each data center it is developing for OpenAI is moving forward on time.

But even after it raises that amount, Oracle still has additional cash funding needs of $100 billion or more for 2027 and the first half of 2028, according to Morgan Stanley credit analysts. “We’ve pondered how [Oracle’s] considerable funding needs over the next three years may test the depths of different fixed-income markets,” the analysts wrote in February…

…Oracle, though, is in a comparatively weaker financial position than big tech rivals. It has a lower investment-grade credit rating, more debt and is burning cash. Much of its future revenue is tied to a money-losing startup that is facing growing competitive pressure. The cost of protecting Oracle’s bonds against a potential default via credit-default swaps roughly quadrupled between late September and late March, though it has fallen slightly since then…

…Much of the borrowing tied to the OpenAI megacontract was done by projects involving data center developers working with Oracle. The debt was structured as short-term construction loans meant to be syndicated among a group of banks and other institutions. Oracle is the tenant and OpenAI is the subtenant on the deals, but the debt doesn’t sit on Oracle’s balance sheet.

2. OpenAI Misses Key Revenue, User Targets in High-Stakes Sprint Toward IPO – Berber Jin

Chief Financial Officer Sarah Friar has told other company leaders that she is worried the company might not be able to pay for future computing contracts if revenue doesn’t grow fast enough, according to people familiar with the matter. 

Board directors have also more closely examined the company’s data-center deals in recent months and questioned Chief Executive Sam Altman’s efforts to secure even more computing power despite the business slowdown, the people said…

…OpenAI missed an internal goal of reaching one billion weekly active users for ChatGPT by the end of last year, according to people familiar with the goals. The company still hasn’t announced that milestone, unnerving some investors. It also missed its yearly revenue target for ChatGPT after Google’s Gemini saw massive growth late last year and ate into OpenAI’s market share, the people said. The company has also struggled with defection rates among subscribers, according to people familiar with those figures.

OpenAI missed multiple monthly revenue targets earlier this year after losing ground to Anthropic in the coding and enterprise markets, people familiar with its finances said.

3. If AI is so great, why isn’t it working? – Vas M.

AI is working for one group of people right now, at scale, because it’s the group that relies the least on business logic: software engineers. The biggest winner from 18 months of AI improvement, by miles, has been engineers writing code in Cursor, Claude Code, Codex, etc. Some stats for you if for some reason you still don’t believe in agentic engineering:

  • GitHub’s 2024 study clocked Copilot users at 55% faster on real tasks. 1 hour 11 minutes vs 2 hours 41 minutes on the same work.
  • Anthropic ran an internal study in August 2025 across 132 engineers and 100,000 real Claude conversations. AI cuts developer task completion time by roughly 80%.
  • Sundar Pichai said at the start of 2026 that 75% of new code at Google is AI-generated and engineer-approved. That number was 30% in April 2025.

Yes, the tools still overpromise on the hard stuff: security review, complex distributed systems, novel debugging. Caveat very real and noted. But the bread-and-butter productivity gain on shipping code is the biggest jump engineering has had since the IDE…

…So why does AI work for engineers and not for any of these? What’s different about engineers? As a former software engineer, I can tell you that engineering work has four properties that basically no other enterprise function has. Yes, there are nuances, but these are directionally correct; please relax in the comments.

  • It’s bounded. A function takes inputs and returns outputs. The scope of “fix this bug” lives inside a file or a module. The dependencies are explicit and importable.
  • It’s checkable. Compilers tell you in milliseconds whether the code parses. Tests tell you whether it works. Type systems catch entire classes of error before runtime. Feedback loop: seconds.
  • The substrate is structured. Code lives in files, in version control, with a deterministic build pipeline underneath. Same input, same output. You can replay any state.
  • The output is verifiable. A pull request is a discrete artifact. A reviewer can look at the diff in 10 minutes and say yes or no.
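Those four properties are easy to see in miniature. Here’s a small, hypothetical sketch (the function and its name are illustrative, not from the article): the work is bounded to one function, and the feedback loop is a test that runs in milliseconds.

```python
# A bounded unit of work: explicit inputs and outputs, no hidden state.
def normalize_scores(scores: list[float]) -> list[float]:
    """Scale scores so the maximum becomes 1.0 (empty input stays empty)."""
    if not scores:
        return []
    peak = max(scores)
    if peak == 0:
        return [0.0 for _ in scores]
    return [s / peak for s in scores]

# Checkable: the feedback loop is a test that runs in milliseconds,
# and the resulting diff is a discrete artifact a reviewer can verify.
assert normalize_scores([2.0, 4.0]) == [0.5, 1.0]
assert normalize_scores([]) == []
assert normalize_scores([0.0, 0.0]) == [0.0, 0.0]
```

None of the ops functions described below have an equivalent of that last block: there is no compiler or test suite for “the close was clean.”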

When you point a capable AI at work that’s bounded, checkable, structured, and verifiable, the leverage is enormous. Cursor and Claude Code are the proof. And if we’re being honest, the biggest reason is that the AI labs (OpenAI, Anthropic, Cursor) poured every single ounce of resources they had into figuring out software engineering. If they can make their own engineers better, they can make the models better, faster, and achieve the ever-elusive “AGI”, which will then make every other task on the planet (Finance, Sales, Operations, Marketing, etc) much easier downstream.

But contrast software engineering with a finance close.

Finance involves AP, AR, intercompany reconciliations, FX, accruals, journal entries, and exception handling that spans NetSuite, Concur, three banks, two ERPs from acquisitions, a custom intake form, and a Slack channel where the controller flags “weird stuff she sees.” The “process” is documented in an SOP that doesn’t match what actually happens. The output is “the close was clean,” which takes two senior accountants two days to verify.

Sales ops involves a CRM, an outbound tool, a calendar, a notes platform, an enrichment vendor, an attribution tool, and a Slack channel where the AE is asking the CRO whether to discount this deal. None of those systems share state cleanly. The process for qualifying a lead is different across reps, even on the same team.

This is what every ops function looks like in every company Varick has ever audited. None of it is bounded, checkable, structured, or verifiable the way code is. And trying to wrangle generic AI into these functions, which are incredibly specific to your company and its processes, is a fool’s errand.

Pointing an LLM at this work gives you negative ROI. The operator was doing the work in 30 minutes. Now they’re doing the work in 30 minutes plus another 30 minutes correcting the AI’s mistakes. Most, if not all, vendors’ “AI for [department]” products follow the same arc: a nice flashy demo showing how great it works for startups, then a big Series A, then quietly killed after it fails to work for enterprise…

…OK, so what does the 5% that ships and stays in production consistently do that makes them so good?

1. They audit before they build. Four weeks (often longer) of mapping the actual workflow before anyone touches a model. The audit produces a digital twin: a live map of how work moves through the org, where the conformance gaps are, what’s pattern-matchable, and what genuinely needs human judgment. The document itself matters less than the alignment it forces between the AI team and the operators. Make sure everyone is aligned on what the bottlenecks are, what the optimal state should be, and what is going to be done to fix it.

2. They decompose the work until most of it is deterministic. LLM goes ONLY where judgment is absolutely required, while plain code goes everywhere else. Most production systems we ship at Varick end up as 5-10 deterministic steps with maybe one or two model calls in specific places. Boring in production is genuinely the goal, and is how we’ve seen the most success.

3. They build a single orchestration layer that sits on top of the existing software stack. At Varick, we call this the single pane of glass. Finance, sales, ops, and engineering agents all live on the same platform, share the same context, and can talk to each other when they need to. Every new use case lands as configuration on top of the platform. In turn, sprawl is dead on arrival.

4. They stay model-agnostic. Abstractions get built at the task level, not at the model level. Each step routes to the best-fit model at any given moment. When OpenAI deprecates a model or Anthropic ships something dramatically better, the routing layer absorbs the change and your workflow keeps running without anyone noticing.
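Task-level abstraction can be as simple as a routing table that workflow code never sees past. This is a minimal sketch under assumed names (the model identifiers and tasks are placeholders, not anything the article specifies):

```python
# Sketch of task-level routing: each task maps to whichever model
# currently fits best. Task and model names are illustrative.

ROUTES = {
    "summarize_ticket": "model-fast-v2",
    "draft_contract_clause": "model-strong-v1",
}

def route(task: str) -> str:
    """Return the model currently assigned to a task."""
    return ROUTES[task]

def reroute(task: str, new_model: str) -> None:
    """Swap models without touching any workflow code."""
    ROUTES[task] = new_model

# When a provider deprecates a model or ships something better,
# only the table changes; callers of route() never notice.
reroute("summarize_ticket", "model-fast-v3")
```

The workflow calls `route("summarize_ticket")` and stays oblivious to which vendor is behind it, which is what lets the routing layer absorb deprecations.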

5. They treat the deployment as continuously evolving infrastructure. There is a real team responsible for ongoing tuning, retiring agents that aren’t earning their keep anymore, and shipping improvements every quarter. The deployments that pay off over five years are the ones that get tuned every quarter if not every month, not the ones declared “done” at go-live. You have to get over this fact if you want to succeed with AI. 

4. Software Is Eating the World (But Actually This Time) – Siddharth Ramakrishnan

In 2011, software ate the world. At least that’s what Marc Andreessen told us. But if that’s true, then why does the Bay Area still exist? If software really ate everything, wouldn’t we all have moved to New York or Miami by now?

Well, let’s look at what software actually ate: banks got apps, retail got websites, hospitals got EHR systems, and taxis got dispatched with a few taps instead of a phone call at 2am when you maybe don’t remember exactly where you are.

Software ate the interfaces, but the actual work? That mostly stayed human.

A customer calls about a billing dispute and software routes the call, pulls up the account screen, and then logs the resolution afterward. But here a person is still the one listening, figuring out whether the refund policy applies here, deciding what to do, and actually talking to the customer. A loan officer reviewing an application gets the credit score surfaced by software and the documents pulled up on screen, but they’re the one reading those documents and making the judgment call. For 15 years, software has been really good at the plumbing while humans kept doing the actual work.

Now, AI can actually do the work! A customer service call is becoming an agent loop where the system handles speech recognition, looks up the account via API, pulls the relevant policy, reasons about whether the customer qualifies, triggers the refund, and responds with text-to-speech. An insurance claim is becoming document intake followed by coverage checks, fraud flags, reserve calculations, and settlement workflows, all running as code. A coding task is already 30 rounds of reading files, editing code, running tests, and revising with no human involved at all…

…I think most people dramatically underestimate how much inference these converted workflows actually consume, because they’re picturing one model, one call, one response, and some hallucinations along the way, but the reality is very different.

Take a voice support agent handling something simple but real, like rescheduling a medical appointment. To the customer, it feels like one conversation. Under the hood, it is a small autonomous system running continuously. As the caller speaks, a speech recognition model transcribes audio in real time. An orchestration model then reasons over the transcript, pulls the patient record, checks scheduling constraints, looks up provider availability, decides what to ask next, and calls the relevant tools. Once it has enough information, it synthesizes the result into a response, and a text-to-speech model turns that back into natural audio. In parallel, other models may be monitoring sentiment, checking compliance, or deciding whether the call should be escalated.

The system is doing all the work itself: listening, retrieving, deciding, tool-calling, verifying, and responding in a loop. An 8-minute call might contain only ~3k tokens of raw transcript, but the orchestration layer can easily consume ~40k tokens once you account for repeated reasoning over the growing conversation, retrieved context, and tool outputs, on top of continuous ASR and TTS inference running for the duration of the call. “One AI phone call” is really a multi-model inference stack operating continuously…
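The gap between ~3k transcript tokens and ~40k orchestration tokens falls out of re-reading a growing context on every turn. A rough back-of-envelope, where the per-turn figures are our own illustrative assumptions (the article gives only the totals):

```python
# Back-of-envelope: why orchestration tokens dwarf the raw transcript.
# Each reasoning turn re-reads the whole conversation so far plus
# retrieved records and tool outputs. Per-turn figures are assumptions.

turns = 16                 # reasoning/tool-calling rounds in an 8-minute call
transcript_per_turn = 180  # ~3k transcript tokens spread across the call
overhead_per_turn = 600    # system prompt, patient record, tool outputs

total = 0
context = 0
for _ in range(turns):
    context += transcript_per_turn        # conversation keeps growing
    total += context + overhead_per_turn  # model re-reads it every turn

print(total)  # 34080: ~34k orchestration tokens vs ~2.9k of raw transcript
```

Because the context is re-read each turn, orchestration consumption grows roughly quadratically with conversation length, which is why totals land an order of magnitude above the transcript itself.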

…In customer support, a basic FAQ bot in 2023 might have consumed around 3,500 tokens for a ticket, better retrieval pushed that higher, then tool use and reasoning pushed it higher again, and now full voice support stacks are higher still. Coding follows the same pattern, just more violently: what used to be tens of thousands of tokens for a bounded coding task has become hundreds of thousands or even well over a million as agents became capable enough to handle real debugging, refactoring, and multi-file work. Each useful task now justifies much more inference than it did a year or two ago, because the model can actually finish the job.

This is a subtle version of Jevons paradox. The sticker price per token has actually been rising for frontier models, not falling. But the value per million tokens has gone up much faster: a frontier model today can complete a workflow in one coherent session that would have required dozens of brittle attempts a year ago, or simply could not have been done. Effective cost per useful outcome is dropping even as nominal cost per token climbs. And that dynamic is what opens up entirely new categories: complex insurance claims, broad code refactors, long-running research tasks, multi-step back-office processes. These were not meaningfully part of the inference market two years ago because the models could not stay coherent long enough to do them.
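The “rising sticker price, falling effective cost” claim is easy to check with toy numbers. These figures are purely illustrative assumptions, not from the article:

```python
# Illustrative arithmetic (all numbers assumed): token price can rise
# while cost per completed outcome falls.

# A year ago: cheap tokens, but many brittle attempts per finished task.
old_price_per_m = 3.00        # $ per million tokens
old_tokens_per_attempt = 50_000
old_attempts = 12             # retries before one coherent result
old_cost = old_price_per_m * old_tokens_per_attempt * old_attempts / 1_000_000

# Today: pricier frontier tokens, one coherent session finishes the job.
new_price_per_m = 10.00
new_tokens_per_attempt = 120_000
new_attempts = 1
new_cost = new_price_per_m * new_tokens_per_attempt * new_attempts / 1_000_000

print(old_cost, new_cost)  # 1.8 vs 1.2: price per token tripled,
                           # cost per useful outcome fell by a third
```

Under these assumptions the nominal price per token more than triples while the effective cost per finished workflow drops, which is the Jevons-style dynamic the author describes.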

The aggregate numbers suggest this is already happening. OpenAI’s API is processing more than 15B tokens per minute as of April 2026, up from 6B half a year earlier. Google went from 9.7T tokens per month to 480T in a year, about 50x growth. OpenAI says reasoning token consumption per enterprise organization grew 320x year over year. Anthropic’s latest reported annualized revenue of $30B (up from $10B to start the year…) speaks for itself, especially given the main driver is Claude Code and their API…

…As models commoditize, the durable application companies will be the ones that see the real work: the tool calls, retries, escalations, corrections, and edge cases that never show up in a benchmark. That is where the system learns how a specific workflow actually runs, and where proprietary context starts to accumulate. Over time, the advantage is not just access to a model. It is knowing how this insurer handles claims, how this hospital works denials, how this codebase breaks, how this finance team closes. The apps that capture that messy operational data will be the ones that improve fastest and defend their position longest.

5. Nike and the Arithmetic of Durability – Andrew Chou

As of April 2026, Nike stock sat below US$45 – a market capitalisation of US$68 billion, its lowest level in over a decade, and a fall of more than 75% from the US$280 billion the company commanded at its 2021 peak.

How does what was once considered one of the widest consumer brand moats in the world, built over half a century, erode over the course of a few short years?

A good starting point is January 2020, when John Donahoe took over as Nike’s new CEO. The board wanted a digital-first operator, and Donahoe had the résumé – ServiceNow, eBay, and Bain – even if he was one of the few leaders in Nike’s history not to have risen through its operating ranks…

…Under Donahoe, Nike began systematically pulling back from these wholesale relationships. The logic was straightforward: move more volume through direct channels, control the brand experience, and capture more margin.

By September 2021, Nike had exited roughly half its retail partners. Big names like Foot Locker, Zappos, Dillard’s, and Big 5 Sporting Goods saw their allocation of the most sought-after models shrink in favour of Nike’s directly owned stores. Gross profit margins expanded immediately.

The vacated shelf space that followed was quickly and eagerly filled by competitors. Adidas, New Balance, Puma, Hoka, On, Brooks, and Salomon—brands that had suddenly found themselves with prime real estate in the stores Nike had walked away from…

…That same model of deep, sport-specific immersion was eventually replicated across basketball, football, tennis, and dozens of other categories. Teams embedded in each discipline accumulated years of insight about athletes, usage patterns, and the fine distinctions that matter in performance products. This kind of expertise accumulates slowly—through proximity to athletes, coaches, biomechanics, and the subtle demands of each sport.

Under Donahoe, Nike restructured around a simpler model: Men’s, Women’s, and Kids. The rationale was familiar—less duplication, cleaner accountability, more consistency across segments—and the resulting redundancies left the org chart looking tidier on paper. Overhead expenses came down immediately.

What it also did was dissolve the sport-by-sport expertise and institutional knowledge accumulated over decades. Product lines that had once been shaped by deep category knowledge were now filtered through broader consumer-demographic lenses…

…Nike has long been famous for marketing that built meaning before it chased sales. The ability to turn a product into a cultural moment was arguably Nike’s most valuable and least replicable asset.

The Banned Air Jordan story is perhaps the purest illustration. In 1984, Michael Jordan wore black-and-red sneakers that violated the NBA’s uniform rules. The league threatened fines. Nike’s response was not to comply—it was to lean in. The company shot a television commercial showing the shoes blacked out by censorship bars, declaring that the league had thrown them out of the game but could not stop you from wearing them. That single ad helped sell 50,000 pairs almost immediately…

…Under the new model, marketing spend shifted from broad, culture-shaping storytelling into programmatic digital advertising designed to drive traffic to Nike’s own e-commerce channels. Performance marketing has direct, measurable KPIs – but by its nature, it harvests existing demand rather than creating it.

Anyone can pay for web traffic, but doing so does not build a competitive advantage. Just ask the direct-to-consumer startups built on performance marketing in the 2010s that failed to sell to a large incumbent with real distribution before the music stopped…

…Nike shares climbed from around $100 when Donahoe took over to an all-time high of $179 in November 2021 – a company valued at roughly $280 billion. The “transformation” was working.

But these gains came from somewhere. They were, in effect, the monetisation of business value painstakingly built over decades: the distribution footprint Knight and his team had cultivated since the 1960s; the product expertise and institutional knowledge that Bowerman’s culture had embedded across dozens of categories; the brand equity that campaigns like the Banned Air Jordan and Just Do It had compounded over generations.

Most business decisions sit on a spectrum between maximising long-term net present value and maximising short-term accounting profit. When the asset being spent is the moat itself, the spending does not show up as a cost. Each of Nike’s three shifts boosted reported profitability immediately and reduced the long-run NPV of the franchise meaningfully. The trajectory of the income statement and the moat moved in opposite directions – but only the income statement was visible quarter to quarter.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently have a vested interest in Alphabet (parent of Google). Holdings are subject to change at any time.
