7 Investing Mistakes to Avoid 

Investing is a negative art. It’s more important to avoid mistakes than it is to find ways to win.

From what I see, most investors are on the lookout for ways to win in the stock market. But that may be the wrong focus, as economist Erik Falkenstein writes:

“In expert tennis, 80% of the points are won, while in amateur tennis, 80% are lost. The same is true for wrestling, chess, and investing: Beginners should focus on avoiding mistakes, experts on making great moves.”

In keeping with the spirit of Falkenstein’s thinking, here are some big investing blunders to avoid.

1. Not realising how common volatility is even with the stock market’s biggest long-term winners

From 1971 to 1980, the American retailer Walmart produced breathtaking business growth. Table 1 below shows the near-30x increase in Walmart’s revenue and the 1,600% jump in earnings per share in that period. Unfortunately, this exceptional growth did not translate into a good short-term return for Walmart’s stock.

Based on the earliest data I could find, Walmart’s stock price fell by three-quarters from less than US$0.04 in late-August 1972 to around US$0.01 by December 1974 – in comparison, the US stock market, represented by the S&P 500, was down by ‘only’ 40%. 

Table 1; Source: Walmart annual reports

But by the end of 1979, Walmart’s stock price was above US$0.08, more than double what it was in late-August 1972. Still, the 2x-plus increase in Walmart’s stock price paled in comparison with the huge increase in earnings per share the company had generated.

This is where the passage of time helped – as more years passed, the weighing machine clicked into gear (I’m borrowing from Ben Graham’s brilliant analogy of the stock market being a voting machine in the short run but a weighing machine in the long run). At the end of 1989, Walmart’s stock price was around US$3.70, representing an annualised growth rate in the region of 32% from August 1972; from 1971 to 1989, Walmart’s revenue and earnings per share grew by 41% and 38% per year, respectively. Even by the end of 1982, Walmart’s stock price was already US$0.48, up more than 10 times from where it was in late-August 1972.
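For a rough sense of where that annualised figure comes from, here’s a back-of-the-envelope calculation using the approximate prices above and a holding period of roughly 17.3 years from late-August 1972 to the end of 1989:

\[
\left(\frac{3.70}{0.04}\right)^{1/17.3} - 1 \approx 30\% \text{ per year}
\]

A starting price slightly below US$0.04, as noted earlier, nudges the figure towards the 32% cited.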

Volatility is a common thing in the stock market. It does not necessarily mean that anything is broken.

2. Mixing investing with economics

China’s GDP (gross domestic product) grew by an astonishing 13.3% annually from US$427 billion in 1992 to US$18 trillion in 2022. But a dollar invested in the MSCI China Index – a collection of large and mid-sized companies in the country – in late-1992 would have still been roughly a dollar as of October 2022, as shown in Figure 1. 

Put another way, Chinese stocks stayed flat for 30 years despite a massive macroeconomic tailwind (the 13.3% annualised growth in GDP). 
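As a quick check on that growth rate, here’s the arithmetic using the GDP figures above over the 30-year span:

\[
\left(\frac{18{,}000}{427}\right)^{1/30} - 1 \approx 13.3\% \text{ per year}
\]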

Figure 1; Source: Duncan Lamont

Why did the stock prices of Chinese companies behave the way they did? It turns out that the earnings per share of the MSCI China Index was basically flat from 1995 to 2021.

Figure 2; Source: Eugene Ng

Economic trends and investing results can at times be worlds apart. The gap exists because there can be a huge difference between a company’s business performance and the broader economic trend – and what ultimately matters to a company’s stock price is its business performance.

3. Anchoring on past stock prices

A 2014 study by JP Morgan showed that 40% of all stocks in the Russell 3000 index in the US from 1980 to 2014 suffered a permanent decline of 70% or more from their peak values.

There are stocks that fall hard – and then stay there. Thinking that a stock will return to a particular price just because it was once there can be a terrible mistake.

4. Thinking a stock is cheap based on superficial valuation metrics

My friend Chin Hui Leong from The Smart Investor has suffered through this mistake before, and he has graciously shared his experience so that others can learn from it. In an April 2020 article, he wrote:

“The other company I bought in May 2009, American Oriental Bioengineering, has shrunk to such a tiny figure, making it a total loss…

…In contrast, American Oriental Bioengineering’s revenue fell from around $300 million in 2009 to about US$120 million by 2013. The company also recorded a huge loss of US$91 million in 2013…

…Case in point: when I bought American Oriental Bioengineering, the stock was only trading at seven times its earnings. And yet, the low valuation did not yield a good outcome in the end.”

Superficial valuation metrics can’t really tell us if a stock is a bargain. Ultimately, it’s the business that matters.

5. Not investing due to fears of a recession

Many investors I’ve spoken to prefer to hold off investing in stocks if they fear a recession is around the corner, and jump back in only when the coast is clear. This is a mistake.

According to data from Michael Batnick, the Director of Research at Ritholtz Wealth Management, a dollar invested in US stocks at the start of 1980 would have been worth north of $78 around the end of 2018 if you had simply held the stocks and done nothing. But if you had invested the same dollar in US stocks at the start of 1980 and side-stepped every ensuing recession perfectly, you would have had less than $32 at the same endpoint.
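To put those two endpoints in annualised terms, here’s a rough calculation assuming a 39-year holding period from the start of 1980 to the end of 2018:

\[
78^{1/39} - 1 \approx 11.8\% \text{ per year}, \qquad 32^{1/39} - 1 \approx 9.3\% \text{ per year}
\]

A gap of roughly 2.5 percentage points a year, compounded over nearly four decades, is what separates the two outcomes.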

Said another way, history’s verdict is that even flawlessly avoiding recessions would have seriously harmed your investment returns.

6. Following big investors blindly

Morgan Housel is currently a partner with the venture capital firm Collaborative Fund. Prior to this, he was a writer for The Motley Fool for many years. Here’s what Housel wrote in a 2014 article for the Fool (emphasis is mine):

“I made my worst investment seven years ago.

The housing market was crumbling, and a smart value investor I idolized began purchasing shares in a small, battered specialty lender. I didn’t know anything about the company, but I followed him anyway, buying shares myself. It became my largest holding — which was unfortunate when the company went bankrupt less than a year later.

Only later did I learn the full story. As part of his investment, the guru I followed also controlled a large portion of the company’s debt and preferred stock, purchased at special terms that effectively gave him control over its assets when it went out of business. The company’s stock also made up one-fifth the weighting in his portfolio as it did in mine. I lost everything. He made a decent investment.”

We may never be able to know what a famous investor’s true motives are for making any particular investment. And for that reason, it’s important to never follow anyone blindly into the stock market.

7. Not recognising how powerful simple, common-sense financial advice can be

Robert Weinberg is an expert on cancer research from the Massachusetts Institute of Technology. In the documentary The Emperor of All Maladies, Weinberg said (emphases are mine):

“If you don’t get cancer, you’re not going to die from it. That’s a simple truth that we [doctors and medical researchers] sometimes overlook because it’s intellectually not very stimulating and exciting.

Persuading somebody to quit smoking is a psychological exercise. It has nothing to do with molecules and genes and cells, and so people like me are essentially uninterested in it — in spite of the fact that stopping people from smoking will have vastly more effect on cancer mortality than anything I could hope to do in my own lifetime.”

I think Weinberg’s lesson can be analogised to investing. Ben Carlson is the Director of Institutional Asset Management at Ritholtz Wealth Management. In a 2017 blog post, Carlson compared the long-term returns of US college endowment funds against a simple portfolio he called the Bogle Model.

The Bogle Model was named after the late index fund legend John Bogle. It consisted of three simple, low-cost Vanguard funds that track US stocks, stocks outside of the US, and bonds. In the Bogle Model, the funds were held in these weightings: 40% for the US stocks fund, 20% for the international stocks fund, and 40% for the bonds fund. Meanwhile, the college endowment funds were dizzyingly complex, as Carlson describes:

“These funds are invested in venture capital, private equity, infrastructure, private real estate, timber, the best hedge funds money can buy; they have access to the best stock and bond fund managers; they use leverage; they invest in complicated derivatives; they use the biggest and most connected consultants…”

Over the 10 years ended 30 June 2016, the Bogle Model produced an annual return of 6.0%. But even the college endowment funds in the top decile by return produced an average annual gain of only 5.4%. The simple Bogle Model had bested nearly all the fancy-pants college endowment funds in the US.
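To see what that 0.6 percentage point gap means over the measurement window, here’s a simple compounding illustration:

\[
1.060^{10} \approx 1.79, \qquad 1.054^{10} \approx 1.69
\]

A dollar in the Bogle Model grew to roughly $1.79 over those 10 years, against about $1.69 for the average top-decile endowment fund.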

Simple advice can be very useful and powerful for many investors. But it is sometimes ignored because it seems too simple, despite how effective it can be. Don’t make this mistake.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

More Of The Latest Thoughts From American Technology Companies On AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

Nearly a month ago, I published The Latest Thoughts From American Technology Companies On AI. In it, I shared commentary from the leaders of technology companies that I follow or have a vested interest in – drawn from their earnings conference calls for the second quarter of 2023 – on the topic of AI and how the technology could impact their industry and the business world writ large.

A few more technology companies I’m watching hosted earnings conference calls for 2023’s second quarter after the article was published. The leaders of these companies also had insights on AI that I think would be useful to share. Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe is using its rich datasets to create foundation models in areas where the company has expertise; Firefly has generated >2 billion images in 6 months 

Our rich datasets enable us to create foundation models in categories where we have deep domain expertise. In the 6 months since launch, Firefly has captivated people around the world who have generated over 2 billion images.

Adobe will allow users to create custom AI models using their proprietary data as well as offer Firefly APIs so that users can embed Firefly into their workflows

Adobe will empower customers to create custom models using proprietary assets to generate branded content and offer access to Firefly APIs so customers can embed the power of Firefly into their own content creation and automation workflows.

Adobe is monetising its generative AI features through generative credits; the generative credits have limits to them, but the limits are set in a way where users can really try out Adobe’s generative AI functions and build the use of generative AI into a habit

We announced subscription offerings, including new generative AI credits with the goal of enabling broad access and user adoption. Generative credits are tokens that enable customers to turn text-based prompts into images, vectors and text effects, with other content types to follow. Free and trial plans include a small number of monthly fast generative credits that will expose a broad base of prospects to the power of Adobe’s generative AI, expanding our top of funnel. Paid Firefly, Express and Creative Cloud plans will include a further allocation of fast generative credits. After the plan-specific number of generative credits is reached, users will have an opportunity to buy additional fast generative credits subscription packs…

…First of all, it was a very thoughtful, deliberate decision to go with the generative credit model. And the limits, as you can imagine, were very, very considered in terms of how we set them. The limits are, of course, fairly low for free users. The goal there is to give them a flavor of it and then help them convert. And for paid users, especially for people in our Single Apps and All Apps plans, one of the things we really intended to do is try and drive real proliferation of the usage. We didn’t want there to be generation anxiety, put in that way. We wanted them to use the product. We wanted the Generative Fill and Generative Expand. We wanted the vector creation. We want to build the habits of using it. And then what will happen over time as we introduce 3D, as we introduce video and design and vectors, and as we introduce these Acrobat capabilities that Shantanu was talking about, the generative credits that are used in any given month continues to go up because they’re getting more value out of it. And so that’s the key thing. We want people to just start using it very actively right now and build those habits.

Brands around the world are using Adobe’s generative AI – through products such as Adobe GenStudio – to create personalised customer experiences at scale; management sees Adobe GenStudio as a huge new opportunity; Adobe itself is using GenStudio for marketing its own products successfully and it’s using its own success as a selling point

Brands around the globe are working with Adobe to accelerate personalization at scale through generative AI. With the announcement of Adobe GenStudio, we are revolutionizing the entire content supply chain by simplifying the creation-to-activation process with generative AI capabilities and intelligent automation. Marketers and creative teams will now be able to create and modify commercially safe content to increase the scale and speed at which experiences are delivered…

…Shantanu and David already talked about the Adobe GenStudio, and we’re really excited about that. This is a unique opportunity, as you said, for enterprises to really create personalized content and drive efficiencies as well through automation and efficiency. And when you look at the entire chain of what enterprises go through from content creation, production workflow and then activation through DX through all the apps we have on our platform, we have the unique opportunity to do that. We already have deployed it within Adobe for our own Photoshop campaign, and we’re working with a number of agencies and customers to do that. So this is a big net new opportunity for us with Adobe GenStudio…

…And if I could actually just add one quick thing at the GenStudio work that Anil team has been doing, we’ve actually been using that within the Digital Media business already to release some of the campaigns that we’ve released this quarter. So it’s one of these things that it’s great to see the impact it’s having on our business and that becomes a selling point for other businesses, too.

Inferencing costs for generative AI are expensive, but Adobe’s management is still confident of producing really strong margins for FY2023

[Question] We’ve been told generative AI is really expensive to run. The inference and training costs are really high. 

[Answer] Our customers have generated over 2 billion images. And I know it’s not lost on people, all this was done while we’re delivering strong margins. But when we take a step back and think about these technologies, we have investments from a COGS standpoint, inferencing, content; from an R&D standpoint, training, creating foundation models. And David alluded to it in his prepared comments, the image model for Firefly family of models is out, but we’re going to bring other media types to market as well so we’re making substantive investments. When I go back to the framing of my prepared comments, we really have a fundamental operating philosophy that’s been alive at the company for a long time: growth and profitability. We’re going to prioritize, we’re going to innovate and we’re going to execute with rigor…

…As we think about going — the profile going forward, what I’ll come back to is when we initially set fiscal 2023 targets, implicit in those targets was a 44.5% operating margin. If you think about how we just guided Q4… implicit in that guide is an operating margin of around 45.5%.

So as you think about us leading this industry, leading the inflection that’s unfolding in front of us, that mid-40s number, we think, is the right ballpark to think about the margin structure of the company as we continue to drive this technology and leadership. 

Adobe’s management thinks about generative AI’s impact on the company’s growth through two lenses: (1) acquiring new users, and (2) growing the spend of existing customers; for growing the spend of existing customers, Adobe has recently increased the pricing of its products

Yes, Shantanu said that we look at the business implications of this through those two lenses: new user adoption, first and foremost; and then sort of opportunity to continue to grow the existing book of business. On the new user side, we’ve said this for years: our focus continues to be on proliferation. We believe that there — we have a massive number of users in front of us. We continue to have our primary focus being net user adds and subscribers. And so the goal here in proliferation is to get the right value to the right audience at the right price…

…The second thing is going to be on the book of business. And here, we’re — basically, the pricing changes, just as a reminder, they have a rolling impact. 

Adobe’s management took a differentiated approach with Firefly when building the company’s generative AI capabilities, with a focus on using licensed content for training where Adobe has the rights to use the content 

So from the very beginning of Firefly, we took a very different approach to how we were doing generative. We started by looking at and working off the Adobe Stock base, which are contents that are licensed and very clearly we have the rights to use. And we looked at other repositories of content where they didn’t have any restrictions on usage, and we’ve pulled that in. So everything that we’ve trained on has gone through some form of moderation and has been cleared by our own legal teams for use in training. And what that means is that the content that we generate is, by definition, content that isn’t then stepping on anyone else’s brand and/or leveraging content that wasn’t intended to be used in this way. So that’s the foundation of what we’ve done.

Adobe is sharing the economic spoils with the creators of the content it has been training its generative AI models on

We’ve been working with our Stock contributors. We’ve announced, and in fact, yesterday, we had our first payout of contributions to contributors that have been participating and adding stock for the AI training. And we’re able to leverage that base very effectively so that if we see that we need additional training content, we can put a call to action, call for content, out to them, and they’re able to bring content to Adobe in a fully licensed way. So for example, earlier this quarter, we decided that we needed 1 million new images of crowd scenes. And so we put a call to action out. We were able to gather that content in. But it’s fully licensed and fully moderated in terms of what comes in. So as a result, all of the content we generate is safe for commercial use.

Adobe’s management is seeing that enterprise customers place a lot of importance on working with generated AI content that is commercially safe

The second thing is that because of that, we’re able to go to market and also indemnify customers in terms of how they’re actually leveraging that content and using it for content that’s being generated. And so enterprise customers find that to be very important as we bring that in not just in the context of Firefly stand-alone but we integrated into our Creative Cloud applications and Express applications as well. 

Adobe’s management has been very focused on generating fair (in population diversity, for example) and safe content in generative AI and they think this is a good business decision

We’ve been very focused on fair generation. So we look intentionally for diversity of people that are generated, and we’re looking to make sure that the content we generate doesn’t create or cause any harm. And all of those things are really good business decisions and differentiate us from others. 

One of the ways Adobe’s management thinks generative AI could be useful in PDFs is for companies to be able to have conversations with their own company-wide knowledge base that is stored in PDFs – Adobe is already enabling this through APIs

Some of the things that people really want to know is how can I have a conversational interface with the PDF that I have, not just the PDF that I have opened right now but the PDF that are all across my folder, then across my entire enterprise knowledge management system, and then across the entire universe. So much like we are doing in Creative, where you can start to upload your images to get — train your own models within an enterprise, well, it is often [ hard-pressed ]. The number of customers who want to talk to us now that we’ve sort of designed this to be commercially safe and say, “Hey, how do we create our own model,” whether you’re a Coke or whether you’re a Nike, think of them as having that. I think in the document space, the same interest will happen, which is we have all our knowledge within an enterprise associated with PDFs, “Adobe, help me understand how your AI can start to deliver services like that.” So I think that’s the way you should also look at the PDF opportunity that exists, just more people taking advantage of the trillions of PDFs that are out there in the world and being able to do things…

… So part of what we are also doing with PDFs is the fact that you can have all of this now accessible through APIs. It’s not just the context of the PDF, the semantic understanding of that to do specific workflows, we’re starting to enable all of that as well. 

When it comes to generative AI products, Adobe’s goal for enterprises and partners is to provide (1) API access, (2) ability to train their own models, and (3) core workflows that gel well with Adobe’s existing products; management is thinking about extending the same metering concepts as Adobe’s generative credits to API calls too

Our goal right now, for enterprises and third-parties that we work with, is to provide a few things. The first is this ability, obviously, to have API access to everything that we are building in, so that they can build it into their workflows and their automation stack. The second thing is to give them the ability to extend or train their own models as well. So if — as we mentioned earlier, our core model, foundation model is a very clean model. It generates great content and you can rely on it commercially. We want our customers and partners to be able to extend that model with content that is relevant to them so that Firefly is able to generate content in their brand or in their style. So we’ll give them the ability to train their own model as well. And then last, but certainly not least, we’ll give them some core workflows that will work with our existing products, whether it’s Express or whether it’s Creative Cloud or GenStudio as well, so that they can then integrate everything they’re doing onto our core platform.

And then from a monetization perspective, you can imagine the metering concepts that we have for generative credits extending to API calls as well. And of course, those will all be custom negotiated deals with partners and enterprises.

Adobe is its own biggest user of the AI products it has developed for customers – management thinks this is a big change for Adobe because the extent of usage internally of its AI products is huge, and it has helped improve the quality of the company’s AI products

So I think the pace of innovation internally of what we have done is actually truly amazing. I mean relative to a lot of the companies that are out there and the fact that we’ve gone from talking about this to very, very quickly, making it commercially available, I don’t want to take for granted the amount of work that went into that. I think internally, it is really galvanized because we are our own biggest user of these technologies. What we are doing associated with the campaigns and the GenStudio that we are using, as David alluded to it, our Photoshop Everyone Can Campaign or the Acrobat’s Got It campaign or how we will be further delivering campaigns for Express as well as for Firefly, all of this is built on this technology. And we use Express every day, much like we use Acrobat every day. So I think it’s really enabled us to say are we really embracing all of this technology within the company. And that’s been a big change because I think the Creative products, we’ve certainly had phenomenal usage within the company, but the extent to which the 30,000 employees can now use our combined offering, that is very, very different internally

DocuSign (NASDAQ: DOCU)

DocuSign has a new AI-powered feature named Liveness Detection for ID verification, which has reduced the time needed for document signings by 60%

Liveness Detection technology leverages AI-powered biometric checks to prevent identity spoofing, which results in more accurate verification without the signee being present. ID Verification is already helping our customers. Our data shows that it has reduced time to sign by about 60%.

DocuSign is already monetising AI features directly

Today, we’re already monetizing AI directly through our CLM+ product and indirectly through its use in our products such as search. 

DocuSign is partnering with AI Labs to build products in closer collaboration with customers

Our next step on that journey is with AI Labs. With AI Labs, we are co-innovating with our customers. We provide a sandbox where customers can share a select subset of agreements and try new features we’re testing. Our customers get early access to developing technology and we receive early feedback that we will incorporate into our products. By working with our customers in the development phase, we’re further reinforcing the trusted position we’ve earned over the last 20 years.

DocuSign’s management is excited about how AI – especially generative AI – can help the company across the entire agreement workflow

We think AI will impact practically all of our products at every step of the agreement workflow. So I don’t know that there’s just a one call out. But maybe to offer a couple that I’m most interested in, I certainly think that the broader, should we say, agreement analytics category is poised to be completely revamped with generative AI.

DocuSign has been an early investor in AI but had been held back by fundamental technology until the introduction of generative AI

We were an early investor in that category. We saw that coming together with CLM 4 or 5 years ago and made a couple of strategic investments and been a leader in that space, but have been held back by fundamental technology. And I think now with generative AI, we can do a substantially better job more seamlessly, lighter weight with less professional services. And so I’m very excited to think about how it transforms the CLM category and enables us to deliver more intelligent agreements. I think you mentioned IDV [ID Verification]. I agree 100%. Fundamentally, that entire category is AI-enabled. The upload and ingestion of your ID, recognition of it, and then that Liveness Detection where we’re detecting who you are and that you are present and matching that to the ID – that would simply not be possible without today’s AI technology, and really just dramatically reshapes the ability to trade off risk and convenience. So I think that’s a good one.

MongoDB (NASDAQ: MDB)

There are 3 important things to focus on when migrating off a relational database, and MongoDB’s management thinks that generative AI can help with one of them (the rewriting of the application code)

So with regards to Gen AI, I mean, we do see opportunities essentially, the reason when you migrate off using Relational Migrator, there’s really 3 things you have to focus on. One is mapping the schema from the old relational database to the MongoDB platform, moving the data appropriately and then also rewriting some, if not all, of the application code. Historically, that last component has been the most manually intensive part of the migration; obviously, with the advance of code generation tools, there are opportunities to automate the rewriting of the application code. I think we’re still in the very early days. You’ll see us continue to add new functionality to Relational Migrator to help again reduce the switching costs of doing so. And that’s obviously an area that we’re going to focus on.

MongoDB introduced Atlas Vector Search, its vector database which allows developers to build AI applications, and it is seeing significant interest; management hopes to bring Atlas Vector Search to general availability (GA) sometime next year, but some customers are already deploying it in production

We also announced Atlas Vector Search, which enables developers to store, index and query vector embeddings, instead of having to bolt on vector search functionality separately, adding yet another point solution and creating a more fragmented developer experience. Developers can aggregate and process the vectorized data they need to build AI applications while also using MongoDB to aggregate and process data and metadata. We are seeing significant interest in our vector search offering from large and sophisticated enterprise customers even though it’s still only in preview. As one example, a large global management consulting firm is using Atlas Vector Search for internal research applications that allow consultants to semantically search over 1.5 million expert interview transcripts…

…Obviously, Vector is still in public preview. So we hope to have a GA sometime next year, but we’re really excited about the early and high interest from enterprises. And obviously, some customers are already deploying it in production, even though it’s a public preview product.
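For readers who are curious what “vector search inside the same database” looks like in practice, here’s a minimal, hypothetical sketch of querying Atlas Vector Search through MongoDB’s standard aggregation pipeline in Python. The cluster URI, database, collection, index name and field names below are placeholders I’ve made up for illustration, and the query embedding would come from whatever embedding model you choose – treat this as a sketch of the idea rather than a reference implementation.

# A minimal, hypothetical sketch (not production code) of querying MongoDB Atlas
# Vector Search via the ordinary aggregation pipeline. The URI, database,
# collection, index name ("vector_index") and embedding field ("embedding")
# are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
collection = client["research"]["transcripts"]

# In a real application, this would be the embedding of the user's query,
# produced by whatever embedding model you have chosen.
query_vector = [0.01, -0.02, 0.03]

pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",   # an Atlas index configured for vector fields
            "path": "embedding",       # the field holding each document's embedding
            "queryVector": query_vector,
            "numCandidates": 100,      # breadth of the approximate nearest-neighbour search
            "limit": 5,                # number of closest documents to return
        }
    },
    # Because the vectors live alongside the documents and their metadata,
    # ordinary stages can follow in the same pipeline.
    {"$project": {"title": 1, "snippet": 1, "score": {"$meta": "vectorSearchScore"}}},
]

for doc in collection.aggregate(pipeline):
    print(doc)

The point of the sketch is the one management makes above: the vector query, the documents, and their metadata all live on one platform, so there is no separate vector point solution to provision and keep in sync.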

MongoDB’s management believes that AI will lead developers to write more software and these software will be exceptionally demanding and will thus require high-performance databases

Over time, AI functionality will make developers more productive through the use of code generation and code assist tools that enable them to build more applications faster. Developers will also be able to enrich applications with compelling AI experiences by enabling integration with either proprietary or open source large language models to deliver more impact. Now instead of data being used only by data scientists who drive insights, data can be used by developers to build smarter applications that truly transform a business. These AI applications will be exceptionally demanding, requiring a truly modern operational data platform like MongoDB.

MongoDB’s management believes MongoDB has a bright future in the world of AI because (1) the company’s document database is highly versatile, (2) AI applications need a high-performant, scalable database and (3) AI applications have the same requirements for transactional guarantees, security, privacy etc as other applications

In fact, we believe MongoDB has an even stronger competitive advantage in the world of AI. First, the document model’s inherent flexibility and versatility render it a natural fit for AI applications. Developers can easily manage and process various data types all in one place. Second, AI applications require high performance, parallel computations and the ability to scale data processing on an ever-growing base of data. MongoDB supports this with features like sharding and auto-scaling. Lastly, it is important to remember AI applications have the same demands as any other type of application: transactional guarantees, security and privacy requirements, text search, in-app analytics and more. Our developer data platform gives developers a unified solution to build smarter AI applications.

AI startups as well as industrial equipment suppliers are using MongoDB for their AI needs 

We are seeing these applications developed across a wide variety of customer types and use cases. For example, observe.ai is an AI start-up that leverages a 40 billion parameter LLM to provide customers with intelligence and coaching that maximize performance of their frontline support and sales teams. Observe.ai processes and runs models on millions of support touch points daily to generate insights for their customers. Most of this rich, unstructured data is stored in MongoDB. Observe.ai chose to build on MongoDB because we enable them to quickly innovate, scale to handle large and unpredictable workloads and meet the security requirements of their largest enterprise customers. On the other end of the spectrum is one of the leading industrial equipment suppliers in North America. This company relies on Atlas and Atlas Device Sync to deploy AI models at the edge on their field teams’ mobile devices to better manage and predict inventory in areas with poor physical network connectivity. They chose MongoDB because of our ability to efficiently handle large quantities of distributed data and to seamlessly integrate between the network edge and their back-end systems.

MongoDB’s management sees customers saying that they prefer being able to have one platform handle all their data use-cases (AI included) rather than stitching point solutions together

People want to use one compelling, unified developer experience to address a wide variety of use cases, of which AI is just one. And we’re definitely hearing from customers that being able to do that on one platform, versus bolting on a bunch of point solutions, is by far the preferable approach. And so we’re excited about the opportunity there.

MongoDB is working with Google on a number of AI projects

On the other thing on partners, I do want to say that we’re seeing a lot of work and activity with our partner channel on the AI front as well. We’re working with Google in the AI start-up program, and there’s a lot of excitement. Google had their Next conference this week. We’re also working with Google to help train Codey, their code generation tool, to help people accelerate the development of AI and other applications. And we’re seeing a lot of interest in our own AI innovators program. We’ve had lots of customers apply for that program. So we’re super excited about the interest that we’re generating.

MongoDB’s management thinks there’s a lot of hype around AI in the short term, but also thinks that AI is going to have a huge impact in the long-term, with nearly every application having some AI functionality embedded within over the next 3-5 years

I firmly believe that we, as an industry, tend to overestimate the impact of a new technology in the short term and underestimate the impact in the long term. So as you may know, there’s a lot of hype in the market right now, in the industry right around AI, and some of the early stage companies in the space have valuations through the roof. In some cases, almost — it’s hard to see how people can make money because the risk reward doesn’t seem to be sized appropriately. So there’s a lot of hype in the space. But I do think that AI will be a big impact for the industry and for us long term. I believe that almost every application, both new and existing, will have some AI functionality embedded into the application over the next 3 to 5 years.

MongoDB’s management thinks that vector search (the key distinguishing feature of vector databases) is just a feature and not a product, and it will eventually be built into every database as a feature

Vector Search is really a reverse index. So it’s like an index that’s built into all databases. I believe, over time, Vector Search functionality will be built into all databases or data platforms in the future. There are some point products that are just focused solely on Vector Search. But essentially, it’s a point product that still needs to be used with other technologies like MongoDB to store the metadata, the data to be able to process and analyze all that information. So developers have spoken loudly that having a unified and elegant developer experience is a key differentiator. It removes friction in how they work. It’s much easier to build and innovate on one platform versus learning and supporting multiple technologies. And so my strong belief is that, ultimately, Vector Search will be embedded in many platforms and our differentiation will be a — like it always has been a very compelling and elegant developer experience

MongoDB’s management thinks that having vector search as a feature in a database does not help companies to save costs, but instead, improves the overall developer experience

Question: I know that we’re talking about the developers and how they — they’re voting here because they want the data in a unified platform, a unified database that preserves all that metadata, right? But I would think there’s probably also a benefit to having it all in a single platform as well just because you’re lowering the TCO [total cost of ownership] for your customers as well, right? 

Answer: Vectors are really a mathematical representation of different types of data, so there is not a ton of data, unlike application search, where there are profound benefits to storing everything on one platform versus having an operational database and a search database and some glue to keep the data in sync. That’s not as much the case with Vector because you’re talking about storing essentially an elegant index. And so it’s more about the user experience and the development workflow that really matters. And what we believe is that offering the same taxonomy in the same way they know how to use MongoDB to also be able to enable Vector Search functionality is a much more compelling differentiation than a developer having to bolt on a separate vector solution and having to provision, configure and manage that solution along with all the other things they have to do.

MongoDB’s management believes developers will become more important in organisations than data scientists because generative AI will position AI in front of software

Some of the use cases are really interesting, but the fact is that we’re really well positioned because what generative AI does is really instantiate AI in front of — in software, which means developers play a bigger role rather than data scientists, and that’s where you’ll really see the business impact. And I think that impact will be large over the next 3 to 5 years.

Okta (NASDAQ: OKTA)

Okta has been using AI for years and management believes that AI will be transformative for the identity market

AI is a paradigm shift in technology that presents transformative opportunities for identity, from stronger security and faster application development to better user experiences and more productive employees. Okta has been utilizing AI for years with machine learning models for spotting attack patterns and defending customers against threats, and we’ll have more exciting AI news to share at Oktane.

Okta’s management believes that every company must have an AI strategy, which will lead to more identities to be protected; a great example is how OpenAI is using Okta; Okta’s relationship with OpenAI started a few years ago and OpenAI is now a big customer, accounting for a significant chunk of the US$100m in TCV (total contract value) Okta had with its top 25 transactions in the quarter

Just like how every company has to be a technology company, I believe every company must have an AI strategy. More companies will be founded on AI, more applications will be developed with AI and more identities will need to be protected with a modern identity solution like Okta. A great example of this is how Okta’s Customer Identity Cloud is being utilized for the massive number of daily log-ins and authentications by OpenAI, which expanded its partnership with Okta again in Q2…

…So OpenAI is super interesting. So they’re — OpenAI as a Customer Identity Cloud customer, which so when you log in, in ChatGPT, you log in through Okta. And it’s interesting because a developer inside of OpenAI 3 years ago picked our Customer Identity Cloud because it had a great developer experience and from the website and started using it. And this Chat — and at the time, it was the log-in for their APIs and then ChatGPT took off. And now, as you mentioned, we’ve had really pretty sizable transactions with them over the last couple of quarters. And so it’s a great testament to our strategy on Customer Identity, having something that appeals to developers.

And you saw they did something pretty interesting — and so this is really a B2C app, right, of ChatGPT but they — now they recently launched their enterprise offering, and they want to connect ChatGPT to enterprises. So this is — Okta is really good at this, too, because our Customer Identity Cloud connects our customers to consumers, but also connects our customers to workforces. So then you have to start supporting things like Single Sign-On and SAML and OpenID and authorization. And so OpenAI just continues to get the benefits of being able to focus on what they want to focus on, which is obviously their models and the LLMs and the capabilities, and we can focus on the identity plumbing that wires it together.

So the transaction was — it was one of the top — I mentioned the top 25 transactions. The total TCV of all these transactions this quarter was $100 million. It was one of those top 25 transactions, but I don’t — I haven’t done the math on the TCV for how much of the $100 million it was. But it was one of our — it was on the larger side this quarter.

Okta’s management thinks that identity is a key building block in a number of digital trends, including AI

It’s always a good reminder that identity is a key building block for Zero Trust security, digital transformation, cloud adoption projects and now AI. These trends will continue in any macroeconomic environment as organizations look for ways to become more efficient while strengthening their security posture.

Salesforce (NYSE: CRM)

Salesforce is driving an AI transformation to become the #1 AI CRM (customer relationship management)

And last quarter, we told you we’re now driving our AI transformation. We’re pioneering AI for both our customers and ourselves, leading the industry through this incredible new innovation cycle, and I couldn’t be happier with Srini and David and the entire product and technology team for the incredible velocity of AI products that were released to customers this quarter and the huge impact that they’re making in the market, showing how Salesforce is transforming from being not only the #1 CRM, but the #1 AI CRM, and I just express my sincere gratitude to our entire [ TNP ] team.

Salesforce’s management will continue to invest in AI

We’re in a new AI era, a new innovation cycle that we will continue to invest into as we have over the last decade. As a result, we expect nonlinear quarterly margins in the back half of this year, driven by investment timing, specifically in AI-focused R&D.

Salesforce’s management believes the world is at the dawn of an AI revolution that will spark a new tech buying cycle and investment cycle

AI, data, CRM, trust, let me tell you, we are at the dawn of an AI revolution. And as I’ve said, it’s a new innovation cycle which is sparking a new tech buying cycle over the coming years. It’s also a new tech investment cycle…

…And when we talk about growth, I think it’s going to start with AI. I think that AI is about to really ignite a buying revolution. I think we’ve already started to see that with our customers and even some of these new companies like OpenAI. And we certainly see that in our customers’ base as well. 

Salesforce has been investing in many AI startups through its $500 million generative AI fund

We’ve been involved in the earliest rounds of many of the top AI start-ups. Many of you have seen that we are in there very early…

… Now through our $500 million generative AI fund, we’re seeing the development of ethical AI with amazing companies like Anthropic, [ Cohere ], Hugging Face and some others,

Salesforce has been working on AI early on

But I’ll tell you, this company has pioneered AI, and not just in predictive, a lot of you have followed the development and growth of Einstein. But also, you’ve seen that we’ve published some of the first papers on prompt engineering in the beginnings of generative AI, and we took our deep learning roots, and we really demonstrated the potential for generative AI and now to see so many of these companies become so successful.

Every CEO Salesforce’s leadership has met thinks that AI is essential to improving their businesses

So every CEO I’ve met with this year across every industry believes that AI is essential to improving both their top and bottom line, but especially their productivity. AI is just augmenting what we can do every single day…

…I think many of our customers and ultimately, all of them believe they can grow their businesses by becoming more connected to their customers than ever before through AI and at the same time, reduce cost, increase productivity, drive efficiency and exceed customer expectations through AI. 

All management teams in Salesforce are using Einstein AI to improve their decision-making

Every single management team that we have here at Salesforce every week, we’re using our Einstein AI to do exactly the same thing. We go back, we’re trying to augment ourselves using Einstein. So what we’ll say is, and we’ve been doing this now and super impressive, we’ll say, okay, Brian, what do you think our number is and we’ll say, okay, that’s very nice, Brian. But Einstein, what do you really think the number is? And then Einstein will say, I think Brian is sandbagging and then the meeting continues. 

Salesforce’s management thinks that every company will undergo an AI transformation with the customer at the centre, and this is why Salesforce is well positioned for the future

The reality is every company will undergo an AI transformation with the customer at the center, because every AI transformation begins and ends with the customer, and that’s why Salesforce is really well positioned for the future.

Salesforce has been investing a lot in Einstein AI, and Einstein is democratising generative AI for users of Salesforce’s products; Salesforce’s management thinks that the real value Salesforce brings to the world is the ability to help users utilise AI in a low code or no code way 

And with this incredible technology, Einstein that we’ve invested so much and grown and integrated into our core technology base. We’re democratizing generative AI, making it very easy for our customers to implement every job, every business in every industry. And I will just say that in the last few months, we’ve injected a new layer of generative AI assistance across all of the Customer 360. And you can see it with our salespeople who are now using our Sales Cloud GPT, which has been incredible, what we’ve released this quarter to all of our customers and here inside Salesforce. And then when we see that, they all say to themselves, you know what, in this new world, everyone can now be an Einstein.

But democratizing generative AI at scale for the biggest brands in the world requires more than — that’s just these large language models and deep learning algorithms, and we all know that because a lot of our customers kind of think and they have tried and they go and they pull something off a Hugging Face, it is an amazing company. We just invested in their new round and grab a model and put some data in it and nothing happens. And then they don’t understand and they call us and say, “Hey, what’s happening here? I thought that this AI was so amazing and it’s like, well, it takes a lot to actually get this intelligence to occur. And that’s what I think that’s the value that Salesforce is bringing is that we’re really able to help our customers achieve this kind of technological superiority right out of the box just using our products in a low code, no code way. It’s really just democratization of generative AI at scale. And that is really what we’re trying to achieve that at the heart of every one of these AI transformations becomes our intelligent, integrated and incredible sales force platform, and we’re going to show all of that at Dreamforce

Salesforce is seeing strong customer momentum on Einstein generative AI (a customer – PenFed – used Einstein-powered chatbots to significantly improve their customer service)

We’re also seeing strong customer momentum on Einstein generative AI. PenFed is a great example of how AI plus data plus CRM plus Trust is driving growth for our customers. PenFed is one of the largest credit unions in the U.S., growing at a rate of the next 9 credit unions combined. They’re already using Financial Services Cloud, Experience Cloud and MuleSoft, and our Einstein-powered chatbots handling 40,000 customer service sessions per month. In fact, today, PenFed resolves 20% of their cases on first contact with Einstein-powered chatbots resulting in a 223% increase in chatbot activity in the past year with incredible ROI. In Q2, PenFed expanded with Data Cloud to unify all the customer data from its nearly 3 million members and increase their use of Einstein to roll out generative AI assistant for every single one of their service agents.

Salesforce’s management thinks that customers who want to achieve success with AI needs to have their data in order

But what you can see with Data Cloud is that customers must get their data together if they want to achieve success with AI. This is the critical first step for every single customer. And we’re going to see that this AI revolution is really a data revolution. 

Salesforce takes the issue of trust very seriously in its AI work; Salesforce has built a unique trust layer within Einstein that allows customers to maintain data privacy, security, and more

Everything Einstein does has also delivered with trust and especially ethics at the center, and I especially want to call out the incredible work of our office of ethical and humane use, pioneering the use of ethics and technology. If you didn’t read their incredible article in HBR this quarter. It was awesome. And they are doing incredible work really saying that it’s not just about AI, it’s not just about data, but it’s also about trust and ethics. And that’s why we developed this Einstein trust layer. This is completely unique in the industry. It enables our customers to maintain their data privacy, security, residency and compliance goals.

Salesforce has seen customers from diverse industries (such as Heathrow Airport and Schneider Electric) find success using Salesforce’s AI tools

Heathrow is a great example of the transformative power of AI, data, CRM and trust and the power of a single source of truth. They have 70 million passengers who pass through their terminal annually – I’m sure many of you have been one of those passengers, I have as well – so Heathrow is operating at a tremendous scale, managing the entire airport experience with the Service Cloud, Marketing Cloud, Commerce Cloud, but now Heathrow, they’ve added Data Cloud also giving them a single source of truth for every customer interaction and setting them up to pioneer the AI revolution. And with Einstein, Heathrow’s service agents now have AI-assisted generated replies to service inquiries, case deflection, writing case summaries, all the relevant data and business context coming from Data Cloud…

…Schneider Electric has been using Customer 360 for over a decade, enhancing customer engagement, service and efficiency. With Einstein, Schneider has refined demand generation, reduced close times by 30%. And through Salesforce Flow, they’ve automated order fulfillment. And with Service Cloud, they’re handling over 8 million support interactions annually, much of it done on our self-service offering. In Q2, Schneider selected Marketing Cloud to further personalize the customer experience.

Salesforce’s management thinks the company is only near the beginning of the AI evolution and there are four major steps on how the evolution will happen

And let me just say, we’re at the beginning of quite a ballgame here and we’re really looking at the evolution of artificial intelligence in a broad way, and you’re really going to see it take place over 4 major zones.

And the first major zone is what’s played out in the last decade, which has been predictive. That’s been amazing. That’s why Salesforce will deliver about [ 1 trillion ] transactions on Einstein this week. It’s incredible. 

These are mostly predictive transactions, but we’re moving rapidly into the second zone that we all know is generative AI and these GPT products, which we’ve now released to our customers. We’re very excited about the speed of our engineering organization and technology organization, our product organization and their ability to deliver customer value with generative AI. We have tremendous AI expertise led by an incredible AI research team. And this idea that we’re kind of now in a generative zone means that’s zone #2.

But as you’re going to see at Dreamforce, zone #3 is opening up with autonomous and with agent-based systems as well. This will be another level of growth and another level of innovation that we haven’t really seen unfold yet from a lot of companies, and that’s an area that we are excited to do a lot of innovation and growth and to help our customers in all those areas.

And then we’re eventually going to move into [ AGI ] and that will be the fourth area. And I think as we move through these 4 zones, CRM will become more important to our customers than ever before. Because you’re going to be able to get more automation, more intelligence, more productivity, more capabilities, more augmentation of your employees, as I mentioned.

Salesforce can use AI to help its customers in areas such as call summaries, account overviews, responding to its customers’ customers, and more

And you’re right, we’re going to see a wide variety of capability is exactly like you said, whether it’s the call summaries and account overviews and deal insights and inside summaries and in-product assistance or mobile work briefings. I mean, when I look at things like service, when we see the amount of case deflection we can do and productivity enhancements with our service teams not just in replies and answers, but also in summaries and summarization. We’ve seen how that works with generative and how important that is in knowledge generation and auto-responding conversations and then we’re going to have the ability for our customers to — with our product.

Salesforce has its own AI models, but Salesforce has an open system – it’s allowing customers to choose any models they wish

We have an open system. We’re not dictating that they have to use any one of these AI systems. We have an ecosystem. Of course, we have our own models and our own technology that we have given to our customers, but we’re also investing in all of these companies, and we plan to be able to offer them as opportunities for those customers as well, and they’ll be able to deliver all kinds of things. And you’ll see that whether it’s going to end up being contract digitization and cost generation or survey generators or all kinds of campaign assistance.

Slack is going to be an important component of Salesforce’s AI-related work; management sees Slack as an easy-to-use interface for Salesforce’s AI systems

Slack has become incredible for these AI companies, every AI company that we’ve met with is a Slack company. All of them make their agents available for Slack first. We saw that, for example, with Anthropic, where Claude really appeared first, and Claude 2, first in Slack.

And Anthropic, as a company uses Slack internally and they have a — they take their technology and develop news digest every day and newsletters and they do incredible things with Slack — Slack is just a treasure trove of information for artificial intelligence, and you’ll see us deliver all kinds of new capabilities in Slack along these lines.

And we’re working, as I’ve mentioned, to get Slack to wake up and become more aware and also for Slack to be able to do all of the things that I just mentioned. One of the most exciting things I think you’re going to see at Dreamforce is Slack very much as a vision for the front end of all of our core products. We’re going to show you an incredible new capability that we call Slack Sales Elevate, which is promoting our core Sales Cloud system running right inside Slack.

That’s going to be amazing, and we’re going to also see how we’re going to release and deliver all of our core services in Salesforce through Slack. This is very important for our company to deliver Slack very much as a tremendous easy-to-use interface on the core Salesforce, but also on all these AI systems. So all of that is that next generation of artificial intelligence capability, and I’m really excited to show all of that to you at Dreamforce, as well as Data Cloud.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, DocuSign, MongoDB, Okta, and Salesforce. Holdings are subject to change at any time.

When Genius Failed (temporarily)*

Not even a business and investing genius can save us from short-term pain.

The late Henry Singleton was a bona fide polymathic genius. He had a PhD in electrical engineering and could play chess just below the grandmaster level. In the realm of business, Warren Buffett once said that Singleton “has the best operating and capital deployment record in American business… if one took the 100 top business school graduates and made a composite of their triumphs, their record would not be as good.”

Singleton co-founded Teledyne in 1960 and stepped down as chairman in 1990. Teledyne started life as an electronics company and through numerous acquisitions engineered by Singleton, morphed into an industrials and insurance conglomerate. According to The Outsiders, a book on eight idiosyncratic CEOs who generated tremendous long-term returns for their shareholders, Teledyne produced a 20.4% annual return from 1963 to 1990, far ahead of the S&P 500’s 8.0% return. Distant Force, a hard-to-obtain memoir on Singleton, mentioned that a Teledyne shareholder who invested in 1966 “was rewarded with an annual return of 17.9 percent over 25 years, or a return of 53 times his invested capital.” In contrast, the S&P 500’s return was just 6.7 times in the same time frame. 

Beyond the excellent long-term results, I also found another noteworthy aspect about Singleton’s record: It is likely that shareholders who invested in Teledyne in 1963 or 1966 would subsequently have thought, for many years, that Singleton’s genius had failed them. I’m unable to find precise historical stock price data for Teledyne during Singleton’s tenure. But based on what I could gather from Distant Force, Teledyne’s stock price sank by more than 80% from 1967 to 1974. That’s a huge and demoralising decline for shareholders after holding on for seven years, and it was significantly worse than the 11% fall in the S&P 500 in that period. But even an investor who bought Teledyne shares in 1967 would still have earned an annualised return of 12% by 1990, outstripping the S&P 500’s comparable annualised gain of 10%. And of course, an investor who bought Teledyne in 1963 or 1966 would have earned an even better return, as mentioned earlier. 
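For readers who want to check this kind of arithmetic themselves, here is a minimal sketch in Python. The figures come from the passages above; the function simply converts between a total return multiple and an annualised rate.

```python
def annualised_return(multiple: float, years: float) -> float:
    """Convert a total return multiple over a holding period into a compound annual growth rate."""
    return multiple ** (1 / years) - 1

# A 12% annual return from 1967 to 1990 (23 years) compounds to roughly a 13.5x gain:
print((1 + 0.12) ** 23)            # ≈ 13.5

# Distant Force's 53x over 25 years works out to roughly 17% a year,
# in the same ballpark as the 17.9% annual return the memoir cites:
print(annualised_return(53, 25))   # ≈ 0.172
```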

Just like how Buffett’s Berkshire Hathaway had seen a stomach-churning short-term decline in its stock price en route to superb long-term gains driven by outstanding business growth, shareholders of Teledyne also had to contend with the same. I don’t have historical financial data on Teledyne from primary sources. But for the 1963-1989 time frame, based on data from Distant Force, it appears that the compound annual growth rates (CAGRs) for the conglomerate’s revenue, net income, and earnings per share were 19.8%, 25.3%, and 20.5%, respectively; the self-same CAGRs for the 1966-1989 time frame were 12.1%, 14.3%, and 16.0%. These numbers roughly match Teledyne’s returns cited by The Outsiders and Distant Force, once again demonstrating a crucial trait about the stock market I’ve mentioned in many earlier articles in this blog (see here and here for example): What ultimately drives a stock’s price over the long run is its business performance.

Not every long-term winner in the stock market will bring its shareholders through an agonising fall mid-way. A notable example is the Canada-based Constellation Software, which is well-known in the investment community for being a serial acquirer of vertical market software businesses. The company’s stock price has risen by nearly 15,000% from its May 2006 IPO to the end of June 2023, but it has never seen a peak-to-trough decline of more than 30%. This said, it’s common to see companies suffer significant drawdowns in their stock prices while on their way to producing superb long-term returns. An unfortunate reality confronting investors who are focused on the long-term business destinations of the companies they’re invested in is that while the end point has the potential to be incredibly well-rewarding, the journey can also be blisteringly painful.

*The title of this section is a pun on one of my favourite books on finance, titled When Genius Failed. In the book, author Roger Lowenstein detailed how the hedge fund, Long-Term Capital Management (LTCM), produced breath-taking returns in a few short years only to then give it all back in the blink of an eye. $1 invested in LTCM at its inception in February 1994 would have turned into $4 by April 1998, before collapsing to just $0.30 by September in the same year; the fund had to be rescued via a bail-out orchestrated by the Federal Reserve Bank of New York. Within LTCM’s ranks were some of the sharpest minds in finance, including Nobel laureate economists Robert Merton and Myron Scholes. Warren Buffett once said that LTCM “probably have as high an average IQ as any 16 people working together in one business in the country…[there was] an incredible amount of intellect in that room.” LTCM’s main trading strategy was arbitrage – taking advantage of price differentials between similar financial securities that are trading at different prices. The LTCM team believed that the price differentials between similar instruments would eventually converge and they set up complex trades involving derivatives to take advantage of that convergence. Because of the minute nature of the price differentials, LTCM had to take on enormous leverage in order to make substantial profits from its arbitrage trading activities. According to Roger Lowenstein’s account, leverage ratios of 20-to-1 to 30-to-1 were common. At its peak, LTCM was levered 100-to-1 – in other words, the hedge fund held roughly $100 in assets for every dollar of its own capital. Compounding the problem, LTCM’s partners, after enjoying startling success in the fund’s early days, started making directional bets in the financial markets, a different – and arguably riskier – activity from their initial focus on arbitrage. The story of LTCM’s downfall is a reminder of how hubris and leverage can combine into a toxic cocktail of financial destruction.
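To see why leverage of that magnitude is so dangerous, here is a small worked sketch with made-up numbers. This is not LTCM’s actual balance sheet, just an illustration of the mechanics:

```python
# Hypothetical fund, purely for illustration
equity = 1_000_000_000     # the fund's own capital
leverage = 25              # assets are 25x equity, within the 20-to-30x range cited above
assets = equity * leverage # $25 billion of positions

loss_on_assets = 0.02      # a mere 2% decline in the value of the positions...
equity_after = equity - assets * loss_on_assets
print(f"{equity_after / equity:.0%} of capital remaining")  # ...leaves only 50% of the capital
```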


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

Lessons From Two Polar Opposite Companies

The ultimate goal of management should be to maximise shareholder value. This means returning as much cash (discounted to the present) as possible to shareholders over time. 

Finding the right management team that can do this is key to good long-term returns.

Constellation Software

One of the best examples of a management team that is great at maximising shareholder value is that of Constellation Software. 

Headed by Mark Leonard, the team behind Constellation Software has been consistently finding ways to grow free cash flow per share for shareholders by using the cash it generates to acquire companies on the cheap. Constellation’s secret is that it buys companies with low organic growth but at really cheap valuations. Although growth is low, the investments pay off very quickly due to the low valuations at which they were acquired. 
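To see why a low purchase price can matter more than organic growth, here is a hypothetical sketch. The numbers are invented purely for illustration and are not Constellation’s actual deal terms:

```python
# A business bought cheaply pays for itself quickly even with zero growth
purchase_price = 20_000_000   # hypothetical acquisition price
free_cash_flow = 5_000_000    # hypothetical annual free cash flow, assumed flat

cash_yield = free_cash_flow / purchase_price
payback_years = purchase_price / free_cash_flow
print(f"Cash-on-cash yield: {cash_yield:.0%}")       # 25% per year
print(f"Payback period: {payback_years:.0f} years")  # 4 years, with no growth at all
```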

The consistent use of available cash for new investments means that Constellation’s dividend payouts have been lumpy and relatively small. But this strategy should pay off over time and enable Constellation’s shareholders to receive a much bigger dividend stream in the future. 

Not only are Leonard and his team good allocators of capital and excellent operators, they are also careful with spending shareholders’ money. In his 2007 shareholders’ letter, Leonard wrote:

“I recently flew to the UK for business using an economy ticket. For those of you who have seen me (I’m 6’5”, and tip the non-metric scale at 280 lbs.) you know that this is a bit of a hardship. I can personally afford to fly business class, and I could probably justify having Constellation buy me a business class ticket, but I nearly always fly economy. I do this because there are several hundred Constellation employees flying every week, and we expect them to fly economy when they are spending Constellation’s money. The implication that I hope you are drawing, is that the standard we use when we spend our shareholders’ money is even more stringent than that which we use when we are spending our own.”

This attitude towards safeguarding shareholders’ money is exactly what Constellation’s shareholders love. This reliability is also part of the reason why Constellation has been such a big success in the stock market. The company’s stock price is up by more than 14,000% since its May 2006 IPO.

Singapore Press Holdings

On the flip side, there are companies that have management teams that do not strive to maximise shareholder value. Some hoard cash, or use the cash a company generates for pet projects that end up wasting shareholders’ money. And then, there are some management teams that have other priorities that are more important than maximising shareholder value.

Singapore Press Holdings (SPH), for example, was a company that I think did not do enough to maximise shareholder value. SPH, which is based in Singapore but delisted from the country’s stock market in May 2022, was a company that published Singapore’s most widely-read newspapers, including The Straits Times. The company also owned the online news portal, straitstimes.com, as well as other local media assets such as radio channels and magazines. In addition, SPH owned real estate such as its print and news centre that were used for its media business. SPH also had investments in SPH REIT and other real estate.

In 2021, SPH spun off its entire media arm, including its print and news centre, to a new non-profit entity. Unlike normal spin-offs or sales, SPH shareholders did not receive any shares in the new entity, nor did SPH receive any cash. Instead, SPH donated its whole media segment to the new entity for just S$1. To rub salt into shareholders’ wounds, SPH donated S$80 million in cash, S$20 million in SPH REIT units, and another S$10 million in SPH shares, to the new entity. 

After the spin-off, SPH’s net asset value dropped by a whopping S$238 million. The restructuring clearly was not designed to maximise shareholder value.

Management said that SPH had to give away its media segment as selling it off or winding up the media business was not a feasible option given the “critical function the media plays in providing quality news and information to the public.”

In other words, management was torn between the interests of the country the company operates in and the interests of its shareholders. Ultimately, shareholders’ hard-earned money was squandered in the process. This was possibly one of the more brazen mishandlings of shareholder money I’ve witnessed in the last decade.

Bottom line

As minority shareholders in public companies, we often have little to no say over how things are run within a company. Our votes during shareholder meetings are overshadowed by other major shareholders who may also have conflicting interests. As such, we rely on the honesty and integrity of management to put minority shareholders’ interests first. 

Unfortunately, conflicts of interest do occasionally occur. As an investor, you may want to consider investing only in companies whose management teams fervently protect shareholders’ interests, as in the example set by Mark Leonard.

On the other hand, we should avoid situations where conflicts of interest may encourage the misuse of funds or even promote dishonest behaviour.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

A Reason For Optimism For Global Economic Growth

There are necessary factors that have to be in place in order for economies to advance over time. The good news is these factors are present today.

There are a myriad of important political, social, economic, and healthcare issues that are plaguing our globe today. But I am still long-term optimistic on the stock market.

This is because I still see so much potential in humanity. There are more than 8.0 billion individuals in the world right now, and the vast majority of people will wake up every morning wanting to improve the world and their own lot in life. This – the desire for progress – is ultimately what fuels the global economy and financial markets. Miscreants and Mother Nature will occasionally wreak havoc but I have faith that humanity can clean it up. To me, investing in stocks is ultimately the same as having faith in the long-term positivity of humanity. I will remain long-term optimistic on stocks so long as I continue to have this faith. 

What helps me keep the faith is also the existence of other factors that provide fertile soil for mankind’s desire for progress to flourish. In his excellent book, The Birth of Plenty, the polymathic William Bernstein (he’s a neurologist as well as finance researcher) explained why the pace of global economic growth picked up noticeably starting in the early 1800s; Figure 1 below shows the unmistakable growth spurt in global GDP per capita that started, and continued on, from that period.

 Figure 1; Source: The Birth of Plenty

Bernstein wrote in his book that there are four necessary factors for economies to advance over time: 

  • Respect for property rights: Entrepreneurs and business owners must have confidence that the rewards from their efforts will not be unduly confiscated
  • Broad acceptance of the scientific method for investigating how the world works: The foundation for innovative ideas is a useful intellectual framework  
  • Easy access to capital: Without funding, even the best business ideas will be starved of fuel to take off
  • Methods for rapid and efficient transport of ideas and widgets: Great ideas and products will be unable to find their appropriate audience in time without reliable and fast transportation  

Without any of these factors, economic growth can’t proceed. From my vantage point, all four factors are firmly in place in large swathes of the world, especially in the USA, the world’s largest economy. This is a strong reason for optimism for global economic growth to continue powering on in the years ahead. So, the only time I will turn pessimistic on the long-term returns of stocks is when they become wildly overpriced – and I don’t think this is the case today.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

How To Lose Money Investing With Warren Buffett

Not even Warren Buffett can prevent market volatility from wreaking havoc.

Warren Buffett is one of my investing heroes. He assumed control of Berkshire Hathaway in 1965 and still remains at the helm. Through astute acquisitions and stock-picking, he has grown Berkshire into one of the most valuable companies in the world today. US$1,000 invested in Berkshire at the time Buffett came into the picture would have grown to US$37.9 million by the end of 2022.

Despite this tremendous record, it would have still been easy for an investor to lose money while investing with Buffett. It all has to do with our very human emotions.

Table 1 shows the five highest annualised growth rates in book value per share Berkshire has produced over rolling 10-year calendar-year periods from 1965 to 2022. 

Table 1; Source: Berkshire annual shareholder letters

In the 1974-1983 period, Berkshire produced one of its highest annualised book value per share growth rates at 29.4%. The destination was brilliant, but the journey was anything but smooth. US$1,000 invested in Berkshire shares at the end of 1973 would be worth just US$526 (a decline of 47.4%) by the end of 1975. Over the same years, the S&P 500 was up by 1.0% including dividends. And it wasn’t the case that Berkshire’s book value per share experienced a traumatic decline – in fact, the company’s book value per share increased by a total of 28.6% in that period. Moreover, prior to the decline in Berkshire’s stock price, its book value per share was up by a healthy 16.0% per year from 1965 to 1973.
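Put differently, the entire decline came from a compression in the multiple the market was willing to pay for Berkshire’s book value, not from the business itself. A quick back-of-the-envelope calculation, using only the figures above, makes the point:

```python
# End-1973 to end-1975, using the figures cited above
price_change = 526 / 1000    # the stock fell to $526 per $1,000 invested, i.e. -47.4%
book_value_change = 1.286    # book value per share rose 28.6% over the same period

# The implied change in the price-to-book multiple:
print(f"{price_change / book_value_change - 1:.0%}")  # ≈ -59%
```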

So in the first two years of one of the best decades of value-building that Buffett led at Berkshire, and after a long period of excellent business growth, the company’s stock price fell by nearly half and dramatically underperformed the US stock market. It is at this juncture – the end of 1975 – that it would have been easy for an investor who bought Berkshire shares before or at the end of 1973 to throw in the towel. Seeing your investment cut in half while the market barely budged is painful, even if you know that the underlying business was growing in value. It’s only human to wave the white flag.

But as an apt reflection of Ben Graham’s timeless analogy of the stock market being a voting machine in the short run but a weighing machine in the long run, Berkshire’s book value per share and stock price compounded at highly similar annual rates of 29.4% and 32.6% over the 1974-1983 timeframe (the S&P 500’s annualised return was just 10.5%). This is the unfortunate reality confronting investors who are focused on the long-term business destinations of the companies they’re invested in: The end point has the potential to be incredibly well-rewarding, but the journey can also be blisteringly painful. Bear this in mind when you invest in stocks, for you can easily lose money – even if you’re investing with Buffett – if you’re not focused on the right numbers (the business’s value) and if you do not have the right temperament.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. We currently do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

When Should You Use EBITDA?

It is becoming increasingly common for companies to report adjusted earnings but when should you really make adjustments to earnings?

In the lexicon of finance, EBITDA stands for earnings before interest, tax, depreciation and amortisation. It is a commonly reported metric among companies and is sometimes used by management teams to make companies appear more profitable than they actually are.

But making certain adjustments to a company’s earnings can still be useful in some scenarios.

In this article, I explore when investors should, and when they should not, make adjustments to a company’s earnings.
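Before going through the individual line items, here is a minimal sketch, with made-up figures, of how EBITDA sits alongside the other profit measures discussed below:

```python
# Hypothetical income-statement figures, purely for illustration
revenue = 1_000
operating_expenses = 700   # excludes depreciation and amortisation
depreciation = 80
amortisation = 20
interest_expense = 30
tax_expense = 40

ebit = revenue - operating_expenses - depreciation - amortisation  # operating profit: 200
ebitda = ebit + depreciation + amortisation                        # 300
pre_tax_profit = ebit - interest_expense                           # 170
net_income = pre_tax_profit - tax_expense                          # 130
print(ebitda, ebit, net_income)                                    # 300 200 130
```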

Interest expense

One scenario in which it may be useful to measure earnings before interest is when you are a bondholder. Bondholders need to see if a company has the capacity to pay its interest, and earnings before interest is a good tool for measuring profitability in this case. 

Another situation to remove interest is when you are an equity investor (invested in the stock of the company) and want to make year-on-year comparisons. Interest expenses can fluctuate wildly based on interest rates set by central banks. Removing interest expense gives you a better gauge of the company’s profitability without the distorting effects of interest rates.

On the other hand, if you are measuring a company’s valuation, then including interest expense is important. This gives you a closer estimate of the company’s cash flow and the amount of cash that can be returned to shareholders through dividends.

Tax expense

Tax expense is very similar to interest expense. If you are a bondholder, you should look at earnings before tax as this gives you a gauge of whether the company can pay you your bond coupon.

Like interest rates, tax rates can also vary based on laws and tax credits. This can result in tax rates changing from year to year. If you are an equity investor and want to assess how a company has done compared to prior years, it may be best to remove taxes to see the actual growth of the company. 

In contrast, if you are valuing a company, I prefer to include taxes as they are an actual cash outflow. The company’s value should be based on actual cash flows to an investor, and tax has a real impact on valuation.

Depreciation expense

Depreciation is a little trickier. Both bond and equity investors need to be wary of removing depreciation from earnings. 

In many cases, while depreciation may not be a cash expense, it corresponds to a real cash outflow because the company needs to replace its assets over time in the form of capital expenditures.

Capital expenditures are a cash outflow that impacts the company’s annual cash flow. This, in turn, impacts the company’s ability to pay both its interest expense to bondholders and dividends to shareholders.

In some cases, depreciated assets do not need to be replaced, or they can be replaced at a lower rate compared to the depreciation expense recorded. This can be due to aggressive accounting methods or the assets having a longer shelf-life than what is accounted for in the income statement. In this scenario, it may be useful to use earnings before depreciation.

In any case, I find it helpful to compare depreciation expenses with capital expenditures to get a better feel for a company’s cash flow situation.
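One simple way to run this check, using invented figures, is to track the ratio of capital expenditure to depreciation over a few years:

```python
# Hypothetical figures, purely for illustration
years = [2020, 2021, 2022]
depreciation = [100, 110, 120]
capex = [90, 150, 160]

for year, dep, cap in zip(years, depreciation, capex):
    # A ratio persistently above 1 suggests depreciation understates the cash needed
    # to maintain the asset base; a ratio below 1 suggests the opposite.
    print(year, f"capex / depreciation = {cap / dep:.2f}")
```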

Amortisation expenses

Companies may amortise their goodwill or other intangible assets over time. In many cases, the amortisation of goodwill is a one-off expense and should be removed when making year-on-year comparisons. 

I think that both bond and equity investors should remove amortisation expense, if it is a one-off, when assessing a company.

In many cases, intangible assets and goodwill are actually long-lasting assets that still remain valuable to a company over time. However, due to accounting standards, a company may be obliged to amortise these assets and reduce their value on its balance sheet. In these cases, I prefer to remove amortisation from earnings.

On the other hand, on the cash flow statement, you may come across a line that says “purchase of intangibles”. If this is a recurring annual cash outflow, you may want to include amortisation expenses.

Other adjustments

Companies may make other adjustments and report “adjusted” EBITDA. These adjustments may include things such as stock-based compensation (SBC), foreign currency translation gains or losses, and gains or losses from the sale of assets.

These adjustments may be necessary to make more accurate year-on-year comparisons of a company’s core business. However, one exception may be SBC. This is a real expense for shareholders as it dilutes their ownership stake in a company.

While standard accounting is not a good proxy for the monetary impact of SBC, removing it altogether is also incorrect. It may be better to account for SBC by looking at earnings or cash flow on a per-share basis to account for the dilution.
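One way to express that idea in numbers, with figures invented purely for illustration, is to divide free cash flow by a share count that includes the shares expected to be issued for SBC, rather than simply adding SBC back:

```python
# Hypothetical figures, purely for illustration
free_cash_flow = 500_000_000     # reported free cash flow, which adds back SBC
shares_outstanding = 100_000_000
new_shares_from_sbc = 3_000_000  # shares expected to be issued to employees

print(f"{free_cash_flow / shares_outstanding:.2f}")                          # 5.00 per share, ignoring dilution
print(f"{free_cash_flow / (shares_outstanding + new_shares_from_sbc):.2f}")  # 4.85 per share, after dilution
```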

Final thoughts

EBITDA and other adjustments made to earnings can be useful on many occasions, especially when making year-on-year comparisons or if you are a bondholder. Removing non-recurring, non-cash expenses such as amortisation also makes sense when valuing a company.

However, there are also situations when it is better to use GAAP (Generally Accepted Accounting Principles) or IFRS (International Financial Reporting Standards) earnings.

Some loss-making companies may conveniently use adjusted earnings simply to mislead investors and push their share prices higher. This should be a red flag for investors.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

A Possible Scientific Explanation For Why Top-Down Control of Economies Is A Bad Idea

Economies are complex systems that exhibit unpredictable emergent behaviours.

Mitch Waldrop’s Complexity: The Emerging Science at the Edge of Order and Chaos, published in 1992, is one of the best books I’ve read in recent times. It describes the science behind complex adaptive systems and the work academics from numerous disciplines have done on the concept of emergence. I also think it contains a kernel of insight – and a possible scientific explanation – on why top-down control of economies is a bad idea.

Complexity and emergence

But first, what are complex adaptive systems? The following passages from Waldrop’s book are a neat summary of what they are:

“For example, every one of these questions refers to a system that is complex, in the sense that a great many independent agents are interacting with each other in a great many ways. Think of the quadrillions of chemically reacting proteins, lipids, and nucleic acids that make up a living cell, or the billions of interconnected neurons that make up the brain, or the millions of mutually interdependent individuals who make up a human society.

In every case, moreover, the very richness of these interactions allows the system as a whole to undergo spontaneous self-organization. Thus, people trying to satisfy their material needs unconsciously organize themselves into an economy through myriad individual acts of buying and selling; it happens without anyone being in charge or consciously planning it. The genes in a developing embryo organize themselves in one way to make a liver cell and in another way to make a muscle cell… In every case, groups of agents seeking mutual accommodation and self-consistency somehow manage to transcend themselves, acquiring collective properties such as life, thought, and purpose that they might never have possessed individually.

Furthermore, these complex, self-organizing systems are adaptive, in that they don’t just passively respond to events the way a rock might roll around in an earthquake. They actively try to turn whatever happens to their advantage. Thus, the human brain constantly organizes and reorganizes its billions of neural connections so as to learn from experience (sometimes, anyway)… the marketplace responds to changing tastes and lifestyles, immigration, technological developments, shifts in the price of raw materials, and a host of other factors. 

Finally, every one of these complex, self-organizing, adaptive systems possesses a kind of dynamism that makes them qualitatively different from static objects such as computer chips or snowflakes, which are merely complicated. Complex systems are more spontaneous, more disorderly, more alive than that. At the same time, however, their peculiar dynamism is also a far cry from the weirdly unpredictable gyrations known as chaos. In the past two decades, chaos theory has shaken science to its foundations with the realization that very simple dynamical rules can give rise to extraordinarily intricate behavior; witness the endlessly detailed beauty of fractals, or the foaming turbulence of a river. And yet chaos by itself doesn’t explain the structure, the coherence, the self-organizing cohesiveness of complex systems.

Instead, all these complex systems have somehow acquired the ability to bring order and chaos into a special kind of balance. This balance point – often called the edge of chaos – is where the components of a system never quite lock into place, and yet never quite dissolve into turbulence, either. The edge of chaos is where life has enough stability to sustain itself and enough creativity to deserve the name of life. The edge of chaos is where new ideas and innovative genotypes are forever nibbling away at the edges of the status quo, and where even the most entrenched old guard will eventually be overthrown.”

Put simply, a complex adaptive system comprises many agents, each of which may be following only simple rules. But through the interactions between the agents, sophisticated outcomes spontaneously “emerge”, even when the agents were not instructed to produce these outcomes. This phenomenon is known as emergence. Waldrop’s book has passages that help shed more light on emergence, and also has an illuminating example of how an emergent behaviour takes shape:

“These agents might be molecules or neurons or species or consumers or even corporations. But whatever their nature, the agents were constantly organizing and reorganizing themselves into larger structures through the clash of mutual accommodation and mutual rivalry. Thus, molecules would form cells, neurons would form brains, species would form ecosystems, consumers and corporations would form economies, and so on. At each level, new emergent structures would form and engage in new emergent behaviors. Complexity, in other words, was really a science of emergence… 

…Cells make tissues, tissues make organs, organs make organisms, organisms make ecosystems – on and on. Indeed, thought Holland, that’s what this business of “emergence” was all about: building blocks at one level combining into new building blocks at a higher level. It seemed to be one of the fundamental organizing principles of the world. It certainly seemed to appear in every complex, adaptive system that you looked at…

…Arthur was fascinated by the thing. Reynolds had billed the program as an attempt to capture the essence of flocking behavior in birds, or herding behavior in sheep, or schooling behavior in fish. And as far as Arthur could tell, he had succeeded beautifully. Reynolds’ basic idea was to place a large collection of autonomous, birdlike agents—“boids”—into an onscreen environment full of walls and obstacles. Each boid followed three simple rules of behavior: 

1. It tried to maintain a minimum distance from other objects in the environment, including other boids.

2. It tried to match velocities with boids in its neighborhood.

3. It tried to move toward the perceived center of mass of boids in its neighborhood.

What was striking about these rules was that none of them said, “Form a flock.” Quite the opposite: the rules were entirely local, referring only to what an individual boid could see and do in its own vicinity. If a flock was going to form at all, it would have to do so from the bottom up, as an emergent phenomenon. And yet flocks did form, every time. Reynolds could start his simulation with boids scattered around the computer screen completely at random, and they would spontaneously collect themselves into a flock that could fly around obstacles in a very fluid and natural manner. Sometimes the flock would even break into subflocks that flowed around both sides of an obstacle, rejoining on the other side as if the boids had planned it all along. In one of the runs, in fact, a boid accidentally hit a pole, fluttered around for a moment as though stunned and lost—then darted forward to rejoin the flock as it moved on.”
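For readers curious about how little code the three rules quoted above actually require, here is a stripped-down sketch of a boids-style simulation in Python. It is a rough approximation of Reynolds’ idea, without the walls and obstacles, and not his original program:

```python
import numpy as np

# A tiny simulation: 50 boids, 200 time steps
N, STEPS = 50, 200
rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, size=(N, 2))  # boids start scattered at random
vel = rng.uniform(-1, 1, size=(N, 2))

for _ in range(STEPS):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]                 # vectors from boid i to every other boid
        dist = np.linalg.norm(offsets, axis=1)
        neighbours = (dist > 0) & (dist < 15)  # the boids that boid i can "see"
        if not neighbours.any():
            continue
        # Rule 1: keep a minimum distance from nearby boids (separation)
        too_close = (dist > 0) & (dist < 5)
        separation = -offsets[too_close].sum(axis=0) if too_close.any() else np.zeros(2)
        # Rule 2: match velocities with boids in the neighbourhood (alignment)
        alignment = vel[neighbours].mean(axis=0) - vel[i]
        # Rule 3: move toward the perceived centre of mass of neighbours (cohesion)
        cohesion = pos[neighbours].mean(axis=0) - pos[i]
        new_vel[i] += 0.05 * separation + 0.05 * alignment + 0.01 * cohesion
    vel = new_vel
    pos += vel  # nothing here says "form a flock", yet clusters of aligned boids tend to emerge
```

Even with weights chosen by eye, boids that start out scattered at random should drift into coherent, co-moving groups after enough steps, which is the emergent behaviour Waldrop describes.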

Emergence in the economy

In the first series of excerpts I shared from Waldrop’s book, it was hinted that an economy is a complex adaptive system. But this is not always true. Emergence is unlikely to happen in an economy with a very simple make-up. On the other hand, emergence is likely to occur in an economy whose depth and variety of economic activity within has increased over time. Here’s a relevant passage from Waldrop’s book:

“In fact, he argued, once you get beyond a certain threshold of complexity you can expect a kind of phase transition analogous to the ones he had found in his autocatalytic sets. Below that level of complexity you would find countries dependent upon just a few major industries, and their economies would tend to be fragile and stagnant. In that case, it wouldn’t matter how much investment got poured into the country. “If all you do is produce bananas, nothing will happen except that you produce more bananas.” But if a country ever managed to diversify and increase its complexity above the critical point, then you would expect it to undergo an explosive increase in growth and innovation – what some economists have called an “economic takeoff.””

This brings me to the topic behind the title and introduction of this article: Why top-down control of economies is a bad idea. An important aspect of emergence is that specific emergent phenomena in any particular complex adaptive system are inherently unpredictable. This applies to economies too. Given everything above, I think it stands to reason that any government that aims to exert top-down control over an economy that has grown in complexity would likely do a poor job. How can you control something well if you’re unable to predict its behaviour? 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I do not have a vested interest in any companies mentioned. Holdings are subject to change at any time.

More Thoughts From American Technology Companies On AI

A vast collection of notable quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies.

Nearly a month ago, I published What American Technology Companies Are Thinking About AI. In it, I shared commentary in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls after the article was published. The leaders of these companies also had insights on AI that I think would be useful to share. Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management will be building foundational models as well as using generative AI as co-pilots for users

Our generative AI strategy focuses on data, models and interfaces. Our rich datasets across creativity, documents and customer experiences enable us to train models on the highest quality assets. We will build foundation models in the categories where we have deep domain expertise, including imaging, vector, video, documents and marketing. We are bringing generative AI to life as a co-pilot across our incredible array of interfaces to deliver magic and productivity gains for a broader set of customers. 

Adobe’s management thinks its generative AI feature, Firefly, has multiple monetisation opportunities, but will only introduce specific pricing later this year with a focus on monetisation right now

Our generative AI offerings represent additional customer value as well as multiple new monetization opportunities. First, Firefly will be available both as a stand-alone freemium offering for consumers as well as an enterprise offering announced last week. Second, copilot generative AI functionality within our flagship applications will drive higher ARPUs and retention. Third, subscription credit packs will be made available for customers who need to generate greater amounts of content. Fourth, we will offer developer communities access to Firefly APIs and allow enterprises the ability to create exclusive custom models with their proprietary content. And finally, the industry partnerships as well as Firefly represent exciting new top-of-funnel acquisition opportunities for Express, Creative Cloud and Document Cloud. Our priority for now is to get Firefly broadly adopted, and we will introduce specific pricing later this year.

Adobe is seeing outstanding customer demand for generative AI features

We’re really excited, if you can’t tell on the call, about Firefly and what this represents. The early customer and community response has been absolutely exhilarating for all of us. You heard us talk about over 0.5 billion assets that have already been generated. Generations from Photoshop were 80x higher than we had originally projected going into the beta and obviously, feel really good about both the quality of the content being created and also the ability to scale the product to support that

Adobe has built Firefly to be both commercially as well as socially safe for use

Third is that, and perhaps most importantly, we’ve also been able to — because of the way we share and are transparent about where we get our content, we can tell customers that their content generated with Firefly is commercially safe for use. Copyrights are not being violated. Diversity and inclusion is front and center. Harmful imagery is not being generated.

Adobe’s management believes that (1) marketing will become increasingly personalised, (2) the personalisation has to be done at scale, and (3) Adobe can help customers achieve the personalisation with the data that it has

I think if you look at Express and Firefly and also the Sensei GenAI services that we announced for Digital Experience, comes at a time when marketing is going through a big shift from sort of mass marketing to personalized marketing at scale. And for the personalization at scale, everything has to be personalized, whether it’s content or audiences, customer journeys. And that’s the unique advantage we have. We have the data within the audience — the Adobe Experience Platform with the real-time customer profiles. We then have the models that we’re working with like Firefly. And then we have the interfaces through the apps like Adobe Campaign, Adobe Experience Manager and so on. So we can put all of that together in a manner that’s really consistent with the data governance that people — that customers expect so that their data is used only in their context and use that to do personalized marketing at scale. So it really fits very well together.

Adobe’s management believes that content production will increase significantly in the next few years because of AI and this will lead to higher demand for more software-seats

And we’re sitting at a moment where companies are telling us that there’s a 5x increase in content production coming out in the next few — next couple of years. And you see a host of new media types coming out. And we see the opportunity here for both seat expansion as a result of this and also because of the value we’re adding into our products themselves, increase in ARPU as well.

DocuSign (NASDAQ: DOCU)

DocuSign’s management believes that generative AI can transform all aspects of the agreement workflow

In brief, we believe AI unlocks the true potential of the intelligent agreement category. We already have a strong track record, leveraging sophisticated AI models, having built and shipped solutions based on earlier generations of AI. Generative AI can transform all aspects of agreement workflow, and we are uniquely positioned to capitalize on this opportunity. As an early example, we recently introduced a new limited-availability feature, agreement summarization. This new feature, which is enabled by our integration with Microsoft’s Azure OpenAI service and tuned with our own proprietary agreement model, uses AI to summarize a document’s critical components, giving signers a clear grasp of the most relevant information within their agreement, while respecting data security and privacy. 

Some possible future launches of generative AI features by DocuSign include search capabilities across agreement libraries and edits of documents based on industry best practices

Future launches will include search across customer agreement libraries, extractions from agreements and proposed language and edits based on customer, industry and universal best practices.

DocuSign has been working with AI for several years, but management sees the introduction of generative AI as a great opportunity to drive significant improvements to the company’s software products

I’d add to that, that I think the biggest change in our road map beyond that clear focus and articulation on agreement workflow is really the advent of generative AI. We’ve been working on AI for several years. As you know, we have products like Insights that leverage earlier generations of AI models. But given the enormous change there, that’s a fantastic opportunity to really unlock the category. And so, we’re investing very heavily there. We released some new products, and we’ll release more next week at Momentum, but I’m sure we’ll talk more about AI during the call. 

DocuSign’s management sees AI technology as the biggest long-term driver of the company’s growth

So, I think we — overall, I would say, product innovation is going to be the biggest driver and unlocker of our medium- to long-term growth. We do believe that we have very credible low-hanging fruit from better execution on our self-serve and product-backed growth motion. And so, that’s a top priority to drive greater efficiency in the near to medium term. I think the AI impact is perhaps the biggest in the long term. And we are starting to ship products, as I alluded to, and we’ll announce more next week. But in terms of its overall impact on the business, I think it’s still behind the other two in the — in the near to medium term. But in terms of the long-term potential of our category of agreement workflow, I think it’s a massive unlock and a fantastic opportunity for DocuSign.

DocuSign’s management is currently monetising AI by bundling AI features with existing features in some cases, and charging for AI features as add-ons in others; management needs to learn more about how customers are using AI features when it comes to monetization

In terms of monetization, I expect AI features to be both bundled as part of our baseline products, strengthening their functionality and value, as I suggested earlier. And in some cases, packaged as a separately charged add-on. We do both today. So, if you take our Insights product, which is really our AI-driven analytics product for CLM, we both have a stand-alone SKU. It’s sold separately as well as a premium bundle. I think, we’re going to need to learn a little bit more about how customers want to use this and what the key value drivers are before we finalize how we price the different features, but certainly mindful of wanting to capture the — deliver the most value and capture the most value for DocuSign, as we price it.

MongoDB (NASDAQ: MDB)

MongoDB’s management believes that AI will increase software development velocity and will enable more companies to launch more apps, leading to the speed of software development being even more important for companies

We believe AI will be the next frontier of development productivity — developer productivity and will likely lead to a step function increase in software development velocity. We know that most organizations have a huge backlog of projects they would like to take on but they just don’t have the development capacity to pursue. As developer productivity meaningfully improves, companies can dramatically increase their software ambitions and rapidly launch many more applications to transform their business. Consequently, the importance of development velocity to remain competitive will be even more pronounced. Said another way, if you are slow, then you are obsolete.

Companies are increasingly choosing MongoDB’s Atlas database service as the platform to build and run new AI apps

We are observing an emerging trend where customers are increasingly choosing Atlas as the platform to build and run new AI applications. For example, in Q1, more than 200 of the new Atlas customers were AI or ML companies. Well-financed start-ups like Hugging Face, [ Tekion ], One AI and [ Neura ] are examples of companies using MongoDB to help deliver the next wave of AI-powered applications to their customers.

MongoDB’s management believes that apps on legacy platforms will be replatformed to be AI-enabled, and those apps will need to migrate to MongoDB

We also believe that many existing applications will be replatformed to be AI enabled. This will be a compelling reason for customers to migrate from legacy technologies to MongoDB.

MongoDB’s management believes that in an increasingly AI-driven world, (1) AI will lead to more apps and more data storage demand for MongoDB; (2) developers will want to use modern databases like MongoDB to build; and (3) MongoDB can support wide use-cases, so it’s attractive to use MongoDB

First, we expect MongoDB to be a net beneficiary of AI, the reason being is that, as developer productivity increases, the volume of new applications will increase, which by definition will create new apps, which means more data stores, so driving more demand for MongoDB. Second, developers will be attracted to modern platforms like MongoDB because that’s the place where they can build these modern next-generation applications. And third, because of the breadth of our platform and the wide variety of use cases we support, that becomes even more of an impetus to use MongoDB. 

MongoDB’s management knows that AI requires vector databases, but thinks that AI still needs an operational datastore, which is where MongoDB excels in

The results that come from training an LLM against content are known as vector embeddings. And so content is assigned vectors and the vectors are stored in a database. These databases then facilitate searches when users query a large language model with the appropriate vector embeddings, and it’s essentially how a user search is matched to content from an LLM. The key point, though, is that you still need an operational data store to store the actual data. And there are some adjunct solutions out there that have come out that are bespoke solutions but are not tied to actually where the data resides, so it’s not the best developer experience. And I believe that, over time, people will gravitate to a more seamless and integrated platform that offers a compelling user experience…

…Again, for generating content that’s accurate in a performant way, you do need to use vector embeddings which are stored in a database. And you — but you also need to store the data and you want to be able to offer a very compelling and seamless developer experience and be able to offer that as part of a broader platform. I think what you’ve seen, Brent, is that there’s been other trends, things like graph and time series, where a lot of people are very excited about these kinds of bespoke single-function technologies, but over time, they got subsumed into a broader platform because it didn’t make sense for customers to have all these bespoke solutions which added so much complexity to their data architecture. 
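To make the mechanics a little more concrete, here is a toy sketch of how a query embedding gets matched against stored embeddings by cosine similarity. This is a generic illustration, not MongoDB’s Atlas Vector Search API, and the hard-coded vectors stand in for the output of a real embedding model:

```python
import numpy as np

# Toy "embeddings": in practice these would come from an embedding model,
# and both the vectors and the source documents would live in the database.
documents = {
    "doc_a": np.array([0.9, 0.1, 0.0]),
    "doc_b": np.array([0.1, 0.8, 0.1]),
    "doc_c": np.array([0.2, 0.2, 0.9]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query_vector: np.ndarray, store: dict, k: int = 1) -> list:
    """Return the k stored documents whose embeddings are closest to the query."""
    ranked = sorted(store, key=lambda d: cosine_similarity(query_vector, store[d]), reverse=True)
    return ranked[:k]

query = np.array([0.85, 0.15, 0.05])  # embedding of the user's question
print(nearest(query, documents))      # ['doc_a']
```

In a production setup, the vectors, the underlying documents and the search index would all sit in one place, which is the point MongoDB’s management is making about keeping the operational data store and the vector store together.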

Okta (NASDAQ: OKTA)

Okta has been working with AI for a number of years and some of its products contain AI features

So when we look at our own business, one of our huge — we have AI in our products, and we have for a few years, whether it’s ThreatInsight on the workforce side or Security Center on the customer identity side, which look at our billions of authentications and use AI to make sure we defend other customers from like similar types of threats that have been prosecuted against various customers on the platform. 

Okta’s management thinks AI could be really useful for helping users to auto-configure the set of Okta

One of the ideas that we’re working on that might be a typical use case of how someone like us could use AI is configuring Okta, setting the policy up for Okta across hundreds of applications on the workforce side or 10 or 20 applications on the customer identity side with various access policies and rules about who can access them and how they access them. It’d be pretty complicated to set up, but we’ve actually been prototyping using AI to auto-generate that configuration.

Okta’s management believes that AI will lead to higher demand for identity-use cases for the company

And then the other one we’re excited about is if you zoom out and you think this is a huge platform shift, it’s the next generation of technology. So that means that there’s going to be tons of new applications built with AI. It means that there’s going to be tons of new industries created and industries changed. And there’s going to be a login for all these things. You’re going to need to log on to these experiences. Sometimes it’s going to be machines. Sometimes it’s going to be users. That’s an identity problem, and we can help with that. So in a sense, we’re really going to be selling picks and shovels to the gold miners. 

Salesforce (NYSE: CRM)

Salesforce recently launched EinsteinGPT, a form of generative AI for customer relationship management

Last quarter, I told you of how our AI team is getting ready to launch EinsteinGPT, the world’s first generative AI for CRM. At Trailhead DX in March in front of thousands of trailblazers here in San Francisco, that’s exactly what we did. 

Salesforce announced SlackGPT, an AI assistant for users of the communication software Slack; management also believes that unleashing large language models within Slack can make the software incredibly valuable for users

We saw more of the incredible work of our AI team at our New York City World Tour this month when we demonstrated SlackGPT. Slack is a secure treasure trove of company data that generative AI can use to give every company and every employee their own powerful AI assistant, helping every employee be more productive in transforming the future of work. SlackGPT can leverage the power of generative AI to deliver instant conversation summaries, research tools and writing assistance directly in Slack. And you may never need to leave Slack to get a question answered. Slack is the perfect conversational interface for working with LLMs, which is why so many AI companies are Slack first and why OpenAI’s ChatGPT and Anthropic’s Claude can now use Slack as a native interface…

…I think folks know, I have — my neighbor Sam Altman is the CEO of OpenAI, and I went over to his house for dinner, and it was a great conversation as it always is with him. And he had — he said, “Oh, just hold on one second, Marc, I want to get my laptop.” And he brought his laptop out and gave me some demonstrations of advanced technologies that are not appropriate for the call. But I did notice that there was only one application that he was using on his laptop and that was Slack. And the powerful part about that was I realized that everything from day 1 at OpenAI have been in Slack. And as we kind of brainstorm and talked about — of course, he was paying a Slack user fee and on and on, and he’s a great Slack customer. We’ve done a video about them, it’s on YouTube. But I realize that taking an LLM and embedding it inside Slack, well, maybe Slack will wake up. I mean there is so much data in Slack, I wonder if it could tell him what are the opportunities in OpenAI. What are the conflicts, what are the conversations, what should be his prioritization. What is the big product that got repressed that he never knew about.

And I realized in my own version of Slack at Salesforce, I have over 95 million Slack messages, and these are all open messages. I’m not talking about closed messaging or direct messaging or secure messaging between employees. I’m talking about the open framework that’s going on inside Salesforce and with so many of our customers. And then I realized, wow, I think Slack could wake up, and it could become a tremendous asset with an LLM consuming all that data and driving it. And then, of course, the idea is that is a new version of Slack. Not only do you have the free version of Slack, not only do you have the per user version of Slack, but then you have the additional LLM version of Slack. 

Salesforce is working with luxury brand Gucci to augment its client advisers by building AI chat technology

A great example already deploying this technology is Gucci. We’re working with them to augment their client advisers by building AI chat technology that creates a Gucci-fied tone of service with an incredible new voice, amplifying brand storytelling and incremental sales as well. It’s an incredibly exciting vision for generative AI to transform what was customer service into customer service, marketing and sales, all through augmenting Gucci employee capabilities using this amazing generative AI.

Salesforce’s management believes that Salesforce’s AI features can (1) help financial services companies improve the capabilities of their employees and (2) provide data-security for highly regulated companies when their data is used in AI models

But yesterday, there were many questions from my friend who I’m not going to give you his name because he’s one of the – the CEO of one of the largest and most important banks in the world. And I’ll just say that, of course, his primary focus is on productivity. He knows that he wants to make his bankers a lot more successful. He wants every banker to be able to rewrite a mortgage, but not every banker can, because writing the mortgage takes a lot of technical expertise. But as we showed him in the meeting through a combination of Tableau, which we demonstrated and Slack, which we demonstrated, and Salesforce’s Financial Services Cloud, which he has tens of thousands of users on, that banker understood that this would be incredible. But I also emphasize to him that LLMs, or large language models, they have a voracious appetite for data. They want every piece of data that they can consume. But through his regulatory standards, he cannot deliver all that data into the LLM because it becomes amalgamated. Today, he runs on Salesforce, and his data is secured down to the row and cell level.

Salesforce’s management believes that the technology sector experienced a “COVID super cycle” in 2020/2021 that made 2022 difficult for companies in the sector, but that the sector could see an acceleration in growth in the future from an “AI super cycle”

I just really think you have to look at 2020, 2021 was just this massive super cycle called the pandemic. I don’t know if you remember, but we had a pandemic a couple of years ago. And during that, we saw tech buying like we never saw. It was incredible and everybody surged on tech buying. So you’re really looking at comparisons against that huge mega cycle… 

…That’s also what gives me tremendous confidence going forward and that what we’re really seeing is that customers are absorbing the huge amounts of technology that they bought. And that is about to come, I believe, to a close. I can’t give you the exact date, and it’s going to be accelerated by this AI super cycle.

Salesforce is doing a lot of work on data security when it comes to developing its AI features

For example, we are doing a lot of things at the basic security level, like tenant-level isolation coupled with a zero-retention architecture at the LLM level, so the LLM doesn’t remember any of the data. Along with that, for them to use these use cases, they have a lot of these compliances like GDPR, ISO, SOC, Quadrant, and they want to ensure that those compliances are still valid, and we’re going to solve for that. In addition, the big worry everybody has is that people have heard about hallucinations, toxicity, bias; this is what we call model trust. We have a lot of innovation around how to ground the data on 360 data, which is a huge advantage we have, and we are able to do a lot of things at that level. And then there’s the thing, which I think Marc hinted at, which is that LLMs are not like a database. There is intra-enterprise trust: even once you have an LLM, you can’t open the data to everybody in the company. So you need the ability to control who can access this data, both before the query and after the query; we have to build that. 

Salesforce is importing 7 trillion reports into its Data Cloud to build AI features, and management believes this is a valuable trove of data

And by the way, with the Data Cloud, just in a month, we are importing more than 7 trillion reports into the data layer, which is a very powerful asset we have. Coupled with all of this, our customers are looking for guidance on how we think we can deliver significant value to them.

Salesforce’s management sees generative AI as a useful tool to help non-technical users write software

But you can also imagine, for example, even with Salesforce, the ability, as we’re going to see in June, that many of our trailblazers are amazing low-code, no-code trailblazers, but soon they’ll have the ability to tap into our LLMs like ProGen and Cogen that have the ability to code for them automatically. They aren’t coders. They didn’t graduate with computer science degrees.

The arc of progress that Salesforce’s management sees with AI: Predictive, then generative, then autonomous

So I think the way I see it is this: AI technologies are a continuum that is predictive, then generative, and the real long-term goal is autonomous. The initial version of the generative AI will be more in terms of assistance…

… And then I think for the fully autonomous cases, for example, in our own internal use cases with our models, we are able to detect 60% of incidents and auto-remediate. That requires a little bit more fine-tuning and we’ll have to work with specific customers to get to that level of model performance. So I see this as just the start. The assistant model is the initial thing, to build trust with a human in the loop and validate it. And then as the models get better and better, we’ll keep taking use cases where we can fully automate them.

AI has already improved the productivity of Salesforce’s software developers by at least 20% and management thinks the same productivity boost can happen for Salesforce’s customers

But the other use cases, which we are going to see, and in fact, I have rolled out our own code elements in our engineering org and we are already seeing minimum 20% productivity…

…In some cases, up to 30%. Now a lot of our customers are asking for the same. We are going to roll out Einstein GPT for our developers in the ecosystem, which will help not only the local developers bridge the gap where there’s a talent gap, but also reduce the cost of implementations for a lot of people. So there’s a lot of value.

Veeva Systems (NYSE: VEEV)

Veeva Systems recently announced an AI chatbot for field sales reps and management is not thinking about the chatbot’s monetisation at the moment

CRM Bot is an AI application for Vault CRM. You can think of it as ChatGPT for field teams…

…Yes, it’s really early right now. We’re focused on ensuring that we have the right product working with our customers. So that’s our focus right now. Let’s get the product right, and then we’ll get into more of the details on kind of the sizing and the opportunity there. But we’re excited overall about the opportunity we have in front of us …CRM bot will — that’s not an included product so that will have a license that will most likely be licensed by the user. So that will be net new. But as Brent mentioned, we’re focused on getting the product right and we don’t have pricing for that or sizing for that yet.

Veeva Systems’ management thinks that AI will not disrupt the company and will instead be a positive

Given the breadth and the nature of our industry cloud software, data, and services, AI will not be a major disruptor for our markets, rather it will be complementary and somewhat positive for Veeva in a few ways. We will develop focused AI applications where the technology is a good fit, such as CRM Bot for Vault CRM. The broadening use of AI will make our proprietary data assets, such as Link and Compass, more valuable over time because the data can be used in new ways. We will also make it easy for customers to connect their own AI applications with their Veeva applications, creating even more value from the Veeva ecosystem…

Veeva Systems’ management thinks that core systems of records will be needed even in the world of AI

I like our position as it relates to AI because we’re a core system of record. So that’s something you’re always going to need. I think that’s one thing that people should always understand: core systems of record will be needed even in the world of AI. If I ask Brent, hey, Brent, do you think 10 years from now, you’ll need a financial system to manage your financials? He’s going to tell me, yes, I really need one, you can’t take it away. ChatGPT won’t do it for me, right? I’m making a joke there, but our customers have the same critical operational systems around drug safety, around clinical trials, around regulatory, around their quality processes. So those are always going to be needed.

Veeva Systems is focused on leveraging its proprietary data assets with AI to make them more valuable 

Now we are also building our data assets, and these are proprietary data assets: Link, Compass, and we’re building more data assets. Those will also not be affected by AI, but AI will be able to leverage those assets and make them more valuable. So we’ll do basically three things. We’ll develop more applications over time; CRM Bot is the first, and we’ve got to get that right. Second, our proprietary data will get more valuable.

Veeva Systems’ management wants to make it easy for customers to connect their own AI applications with the company’s software products

And the third thing we’ll do is make our applications fit very well when customers have their own proprietary AI applications. So especially with the Vault platform, we’ll do a lot of work there to make it fit really well with the other AI applications they have from other vendors or that they develop themselves, because it’s an open ecosystem, and that’s part of being Veeva. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, DocuSign, MongoDB, Okta, Salesforce, and Veeva Systems. Holdings are subject to change at any time.

How Bad is Zoom’s Stock-Based Compensation?

On the surface, the rising stock-based compensation for Zoom looks bad. But looking under the hood, the situation is not as bad as it seems.

There seems to be a lot of concern surrounding Zoom’s rising stock-based compensation (SBC).

In its financial years 2021, 2022 and 2023, Zoom recorded SBC of US$275 million, US$477 million and US$1,285 million, respectively. FY2023 was perhaps the most worrying year for investors as Zoom’s revenue essentially flatlined while its SBC more than doubled.

But as mentioned in an earlier article, GAAP accounting is not very informative when it comes to SBC. When companies report SBC using GAAP accounting, they record the amount on the financial statements based on the share price at the time of the grant. A more informative way to look at SBC would be from the perspective of the actual number of shares given out during the year.

In FY2021, 2022 and 2023, Zoom issued 0.6 million, 1.8 million and 4 million restricted stock units (RSUs), respectively. From that point of view, it seems the dilution is not too bad. Zoom had 293 million shares outstanding as of 31 January 2023, so the 4 million RSUs issued resulted in only 1.4% more shares.
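To make the arithmetic concrete, here is a minimal sketch in Python, using only the figures quoted above (it is an illustration of the share-count view of SBC, not anything taken from Zoom’s filings):

```python
# Back-of-envelope dilution from the RSUs Zoom actually issued in FY2023,
# using only the figures quoted in this article.
rsus_issued_fy2023 = 4_000_000     # RSUs issued in FY2023
shares_outstanding = 293_000_000   # shares outstanding as of 31 January 2023

dilution = rsus_issued_fy2023 / shares_outstanding
print(f"FY2023 dilution from RSUs issued: {dilution:.1%}")  # roughly 1.4%
```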

What about down the road?

The number of RSUs granted in FY2023 was 22.1 million, up from just 3.1 million a year before. The big jump in FY2023 was because the company decided to give a one-time boost to existing employees. 

However, this does not mean that Zoom’s dilution is going to be 22 million shares every year from now on. The outsized FY2023 grant was probably a one-off that will not recur, and these RSUs will vest over a period of three to four years.

If we spread the RSUs granted in FY2023 across their four-year vesting schedule, we can assume that around 8 million RSUs will vest each year. This works out to an annual dilution rate of around 2.7% based on Zoom’s 293 million shares outstanding as of 31 January 2023.
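For completeness, here is the same back-of-envelope calculation for the forward-looking estimate; it is a rough sketch that simply takes the assumption above of roughly 8 million RSUs vesting a year:

```python
# Rough forward-looking dilution estimate, assuming (as this article does)
# that roughly 8 million RSUs vest each year once the FY2023 grant is
# spread over its vesting schedule.
rsus_vesting_per_year = 8_000_000   # assumed annual vesting
shares_outstanding = 293_000_000    # shares outstanding as of 31 January 2023

annual_dilution = rsus_vesting_per_year / shares_outstanding
print(f"Estimated annual dilution: {annual_dilution:.1%}")  # roughly 2.7%
```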

Bear in mind that Zoom guided for a weighted diluted share count of 308 million for FY2024. This diluted number includes 4.8 million unexercised options that were granted a number of years ago. Excluding these options, the implied number of RSUs vesting in FY2024 is around 10 million, which I believe is due to an accelerated vesting schedule this year.

Cashflow impact

Although SBC does not result in a cash outflow for companies, it does result in a larger outstanding share base and consequently, lower free cash flow per share.

But Zoom can offset that by buying back its shares. At its current share price of US$69, Zoom can buy back 8 million of its shares for around US$550 million. In FY2023, Zoom generated US$1.5 billion in free cash flow if you exclude working capital changes. If it can sustain cash generation at this level, it can buy back all the stock it issues each year and still have around US$1 billion in annual free cash flow left over for shareholders.
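As a quick sanity check on the buyback arithmetic above, here is a minimal sketch using only the figures quoted in this article (a US$69 share price, roughly 8 million shares to offset each year, and US$1.5 billion of free cash flow excluding working capital changes); it is an illustration, not a forecast:

```python
# Can Zoom's free cash flow cover buybacks that offset the estimated dilution?
share_price = 69                    # US$, share price quoted in this article
shares_to_offset = 8_000_000        # estimated RSUs vesting per year
free_cash_flow = 1_500_000_000      # US$, FY2023 FCF excluding working capital changes

buyback_cost = share_price * shares_to_offset
fcf_left_over = free_cash_flow - buyback_cost

print(f"Cost to offset dilution: ~US${buyback_cost / 1e6:.0f} million")    # ~US$552 million
print(f"Free cash flow left over: ~US${fcf_left_over / 1e9:.2f} billion")  # ~US$0.95 billion
```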

We should also factor in that, due to employee turnover, the RSU forfeiture rate at most companies is around 20% or more, which means my estimate of 8 million RSUs vesting per year for Zoom could be an overestimate. In addition, Zoom reduced its headcount by 15% in February this year, which should lead to more RSU forfeitures and, hopefully, fewer grants in the future.

Not as bad as it looks

GAAP accounting does not always give a complete picture of the financial health of a business. In my view, SBC is one of the most significant flaws of GAAP accounting and investors need to look into the financial notes to better grasp the true impact of SBC.

Zoom’s SBC numbers seem high. But when zooming in (pun intended), the SBC is not as bad as it looks. In addition, with its share price so low, it is easy for management to offset dilution with repurchases at very good prices. However, investors should continue to monitor share dilution over time to ensure that management is treating shareholders fairly.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Zoom. Holdings are subject to change at any time.