What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q1 2024

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the first quarter of 2024.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings conference call – for the first quarter of 2024 – was held two weeks ago and contained useful insights on the state of American consumers and businesses. The bottom line is this: Economic indicators in the US continue to be favourable and American consumers are in good shape, but there are a number of risks on the horizon, so JPMorgan’s management is preparing for a wide range of outcomes.

What follows are quotes from JPMorgan’s management team that I picked up from the call.


1. Management sees economic indicators in the USA as favourable, but there are many risks on the horizon (including geopolitical conflicts, inflationary pressures, and the Fed’s quantitative tightening) so they want to be prepared for a range of outcomes; the economy always looks healthy at the inflection point; management thinks the US economy will be affected even if problems happen elsewhere

Many economic indicators continue to be favorable. However, looking ahead, we remain alert to a number of significant uncertain forces. First, the global landscape is unsettling – terrible wars and violence continue to cause suffering, and geopolitical tensions are growing. Second, there seems to be a large number of persistent inflationary pressures, which may likely continue. And finally, we have never truly experienced the full effect of quantitative tightening on this scale. We do not know how these factors will play out, but we must prepare the Firm for a wide range of potential environments to ensure that we can consistently be there for clients…

…But what I caution people, these are all the same results [referring to the comments in Point 2 and Point 6 below about consumers and businesses being in good shape] of a lot of fiscal spending, a lot of QE, et cetera. And so we don’t really know what’s going to happen. And I also want to look at the year, look at 2 years or 3 years, all the geopolitical effects and oil and gas and how much fiscal spending will actually take place, our elections, et cetera. So we’re in good — we’re okay right now. It does not mean we’re okay down the road. And if you look at any inflection point, being okay in the current time is always true. That was true in ’72, it was true in any time you’ve had it. So I’m just on the more cautious side that how people feel, the confidence levels and all that, that doesn’t necessarily stop you from having an inflection point. And so everything is okay today, but you’ve got to be prepared for a range of outcomes, which we are…

…I think that when we talk about the impact of the geopolitical uncertainty on the outlook, part of the point there is to note that the U.S. is not isolated from that, right? If we have global macroeconomic problems as a result of geopolitical situations, that’s not only a problem outside the U.S. That affects the global economy and therefore the U.S. and therefore our corporate customers, et cetera, et cetera.

2. Management is seeing consumers remain healthy with overall spend in line with a year ago, and although their cash buffers have normalised, they are still higher than pre-COVID levels; management thinks consumers will be in pretty good shape even if there’s a recession; the labour market remains healthy with wages keeping pace with inflation

Consumers remain financially healthy, supported by a resilient labor market. While cash buffers have largely normalized, balances were still above pre-pandemic levels, and wages are keeping pace with inflation. When looking at a stable cohort of customers, overall spend is in line with the prior year…

… I would say consumer customers are fine. The unemployment is very low. Home prices are up, stock prices are up. The amount of income they need to service their debt is still kind of low. But the extra money of the lower-income folks is running out — not running out, but normalizing. And you see credit normalizing a little bit. And of course, higher-income folks still have more money. They’re still spending it. So whatever happens, the customer’s in pretty good shape. And they’re — if you go into a recession, they’d be in pretty good shape.

3. Auto originations are down

And in auto, originations were $8.9 billion, down 3%, while we maintained healthy margins and market share.

4. Net charge-offs (effectively bad loans that JPMorgan can’t recover) rose to around US$2 billion, from US$1.1 billion a year ago, mostly because of card-related credit losses that are normalising to historical norms; management expects card loans to grow by 12% in 2024

In terms of credit performance this quarter, credit costs were $1.9 billion, driven by net charge-offs, which were up $825 million year-on-year predominantly due to continued normalization in Card…

…And in Card, of course, while charge-offs are now close to normalized, essentially, we did go through an extended period of charge-offs being very low by historical standards, although that was coupled with NII also being low by historical standards…

…Yes, we still expect 12% card loan growth for the full year.

5. Management is uncertain about the level of appetite that companies have for capital markets activity

While we are encouraged by the level of capital markets activity we saw this quarter, we need to be mindful that some meaningful portion of that is likely pulling forward from later in the year. Similarly, while it was encouraging to see some positive momentum in announced M&A in the quarter, it remains to be seen whether that will continue, and the Advisory business still faces structural headwinds from the regulatory environment…

…Let me take the IPO first. So we had been a little bit cautious there. Some cohorts and vintages of IPOs had performed somewhat disappointingly. And I think that narrative has changed to a meaningful degree this quarter. So I think we’re seeing better IPO performance. Obviously, equity markets have been under a little bit of pressure the last few days. But in general, we have a lot of support there, and that always helps. Dialogue is quite good. A lot of interesting different types of conversations happening with global firms, multinationals, carve-out type things. So dialogue is good. Valuation environment is better, like sort of decent reasons for optimism there. But of course, with ECM [Equity Capital Markets], there’s always a pipeline dynamic, and conditions were particularly good this quarter. And so we caution a little bit there about pull-forward, which is even more acute, I think, on the DCM [Debt Capital Markets] side, given that quite a high percentage of the total amount of debt that needed to be refinanced this year has gotten done in the first quarter. So that’s a factor.

And then the question of M&A, I think, is probably the single most important question, not only because of its impact on M&A but also because of its knock-on impact on DCM through acquisition financing and so on. And there’s the well-known kind of regulatory headwinds there, and that’s definitely having a bit of a chilling effect. I don’t know. I’ve heard some narratives that maybe there’s like some pent-up deal demand. Who knows how important politics are in all this. So I don’t know.  

6. Management is seeing that businesses are in good shape

Businesses are in good shape. If you look at it today, their confidence is up, their order books are up, their profits are up.

7. Management thinks that the generally accepted economic scenario is nearly always wrong and that no one can accurately predict an inflection point in the economy

And the other thing I want to point out because all of these questions about interest rates and yield curves and NII and credit losses, one thing you projected today based on what — not what we think in economic scenarios, but the generally accepted economic scenario, which is the generally accepted rate cuts of the Fed. But these numbers have always been wrong. You have to ask the question, what if other things happen? Like higher rates with this modest recession, et cetera, then all these numbers change. I just don’t think any of us should be surprised if and when that happens. And I just think the chance of that happening is higher than other people think. I don’t know the outcome. We don’t want to guess the outcome. I’ve never seen anyone actually positively predict a big inflection point in the economy literally in my life or in history.

8. Management thinks the US commercial real estate market is fine, at least when it comes to JPMorgan’s portfolio; management thinks that if interest rates rise, it could be roughly neutral or really bad for the real estate market, depending on the reason for the increase in interest rates

First of all, we’re fine. We’ve got good reserves against office. We think the multifamily is fine. Jeremy can give you more detail on that if you want.

But if you think of real estate, there’s 2 pieces. If rates go up, think of the yield curve, the whole yield curve, not Fed funds, but the 10-year bond rate, it goes up 2%. All assets, all assets, every asset on the planet, including real estate, is worth 20% less. Well obviously, that creates a little bit of stress and strain, and people have to roll those over and finance it more. But it’s not just true for real estate, it’s true for everybody. And that happens, leveraged loans, real estate will have some effect.

The second thing is the why does that happen? If that happens because we have a strong economy, well, that’s not so bad for real estate because people will be hiring and filling things out. And other financial assets. If that happens because we have stagflation, well, that’s the worst case. All of a sudden, you are going to have more vacancies. You are going to have more companies cutting back. You are going to have less leases. It will affect — including multifamily, that will filter through the whole economy in a way that people haven’t really experienced since 2010. So I’d just put in the back of your mind, the why is important, the interest rates are important, the recession is important. If things stay where they are today, we have kind of the soft landing that seems to be embedded in the marketplace, everyone — the real estate will muddle through.
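(As an aside from me: Dimon’s “2% up, 20% less” rule of thumb is easiest to see with a simple present-value calculation. The sketch below assumes, purely for illustration, that an asset is priced as a perpetuity with a starting discount rate of 8%; real assets will be more or less rate-sensitive depending on their duration.)

```python
# Back-of-envelope illustration of "rates up 2% -> assets worth ~20% less".
# Assumes an asset valued as a simple perpetuity: value = cash_flow / rate.
# The 8% starting rate is an illustrative assumption, not a figure from the call.

def perpetuity_value(cash_flow: float, rate: float) -> float:
    """Present value of a level perpetual cash flow discounted at `rate`."""
    return cash_flow / rate

cash_flow = 100.0                                # annual cash flow, arbitrary units
old_value = perpetuity_value(cash_flow, 0.08)    # 10-year-yield proxy at 8%
new_value = perpetuity_value(cash_flow, 0.10)    # same asset after a 2%-point rise

drop = 1 - new_value / old_value
print(f"Old value: {old_value:.0f}, new value: {new_value:.0f}, drop: {drop:.0%}")
# -> Old value: 1250, new value: 1000, drop: 20%
```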

9. Consumers whose real incomes are down are slowing their spending, but they account for only a small proportion of the overall population, and they are not levering up irresponsibly

And there are some such people whose real incomes are not up, they’re down, and who are therefore struggling a little bit, unfortunately. And what you observe in the spending patterns of those people is some meaningful slowing rather than what you might have feared, which is sort of aggressive levering up. So I think that’s maybe an economic indicator of sorts, although this portion of the population is small enough that I’m not sure the read-across is that big. But it is encouraging from a credit perspective because it just means that people are behaving kind of rationally and in a sort of normal post-pandemic type of way as they manage their own balance sheets. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.

Seatrium and Global Invacom’s Continued Annual Losses, Cordlife’s Woes, Inflation and Interest Rate Expectations, Trump Media & Technology Group’s Valuation, & More

Last week, on 9 April 2024, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station, by Chua Tian Tian, the co-host of the station’s The Evening Runway show. We discussed a number of topics, including:

  • Seatrium and Global Invacom’s announcements of three consecutive years of annual losses and what the implications are for investors in the companies (Hint: Seatrium has been making annual losses since 2018 but management has a plan to turn things around and reduce the company’s reliance on the oil & gas industry, although it remains to be seen if management can execute on their plan; meanwhile Global Invacom has been making losses periodically and even when it was profitable, its margins have been slim)
  • Cordlife’s announcement that about 5,300 cord-blood units under its care are at high risk of being exposed to high temperatures, and its promise to offer refunds and waivers to affected customers (Hint: The refunds and waivers will have a significant impact on Cordlife’s financials for 2024, but the even more important impact to the company’s business is a potential loss of reputation among customers)
  • What do companies look at when considering where to IPO, and whether Singapore is near the top of their potential listing locations (Hint: Singapore is unlikely to be in the list of considerations as a potential IPO location for many companies)
  • Near-term inflation and interest rate expectations (Hint: Inflation and interest rates are not that important for long-term investors in the stock market)
  • How far can Donald Trump cash out with Trump Media & Technology Group? (Hint: Between the two scenarios of Trump Media & Technology Group being severely overvalued or severely undervalued, it’s way more likely that the company is currently being severely overvalued)




More Of The Latest Thoughts From American Technology Companies On AI (2023 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q4 earnings season.

Nearly a month ago, I published The Latest Thoughts From American Technology Companies On AI (2023 Q4). In it, I shared commentary in earnings conference calls for the fourth quarter of 2023, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. 

A few more technology companies I’m watching hosted earnings conference calls for 2023’s fourth quarter after I prepared the article. The leaders of these companies also had insights on AI that I think would be useful to share. This is an ongoing series, so do check out the earlier articles for the older commentary.

Here they are, in no particular order:

Adobe (NASDAQ: ADBE)

Adobe’s management thinks the company is a leader in delivering generative AI and has a highly differentiated approach through the use of proprietary data and an intellectual property-friendly approach

We’re a leader in delivering generative AI across all our clouds. We’re taking a highly differentiated approach across data, models, and interfaces. Our proprietary data is built on decades of deep domain expertise across creative, documents and customer experience management. We leverage large language models as well as have invested in building and delivering our proprietary models in the creative, document, and marketing domains. Our IP-friendly approach is a differentiator for creators and enterprises.

Adobe’s management sees an immense market opportunity for the company in AI and thinks that the company is uniquely positioned to capture the opportunity; Adobe’s end-to-end generative AI solution, GenStudio, is already seeing success with enterprises; GenStudio is a generative AI application that helps marketers plan, create, store, and deliver marketing content; GenStudio is integrated across Creative Cloud and Experience Cloud

Every student, communicator, creative professional, and marketer is now focused on leveraging generative AI to imagine, ideate, create and deliver content and applications across a plethora of channels. Adobe is uniquely positioned through the combination of Express, Firefly, Creative Cloud, Acrobat, and Experience Cloud to deliver on this immense market opportunity. The success we are already seeing with our GenStudio offering in the enterprise is validation of our leadership, and we expect that success to translate into other segments as we roll out these solutions throughout the year…

…Adobe GenStudio is a generative AI-first application that allows marketers to quickly plan, create, store, deliver, and measure marketing content in a single intuitive offering. With state-of-the-art generative AI powered by Firefly services, marketers can create on-brand content with unprecedented scale and agility to deliver personalized experiences. Adobe GenStudio natively integrates with multiple Adobe applications across Creative Cloud and Experience Cloud, including Express, Firefly, Workfront, Experience Manager, Customer Journey Analytics and Journey Optimizer. It can be used by brands and their agency partners to unlock new levels of creativity and efficiency in marketing campaigns.

Adobe’s management is seeing strong usage, value and demand for its AI solutions across all customer segments

We’re driving strong usage, value and demand for our AI solutions across all customer segments.

Acrobat AI Assistant uses generative AI to summarise long PDFs, answer questions through a chat interface, help with generating emails, reports, and presentations; AI Assistant has strong data security; Adobe’s management is pleased with the English language beta of AI Assistant and Adobe will be releasing other languages later in the year; management will monetise AI Assistant through a monthly add-on for Reader and Acrobat users; management thinks there’s a lot of monetisation opportunity with AI Assistant, including consumption-based monetisation

The world’s information, whether it’s an enterprise legal contract, a small business invoice, or a personal school form, lives in trillions of PDFs. We were thrilled to announce Acrobat AI Assistant, a massive leap forward on our journey to bring intelligence to PDFs. With AI Assistant, we’re combining the power of generative AI with our unique understanding of the PDF file format to transform the way people interact with and instantly extract additional value from their most important documents. Enabled by a proprietary attribution engine, AI Assistant is deeply integrated into Reader and Acrobat workflows. It instantly generates summaries and insights from long documents, answers questions through a conversational interface, and provides an on-ramp for generating e-mails, reports and presentations. AI Assistant is governed by secure data protocols so that customers can use the capabilities with confidence. We’re pleased with the initial response to the English language beta and look forward to usage ramping across our customer base as we release other languages later in the year. We will monetize this functionality through a monthly add-on offering to the hundreds of millions of Reader users as well as the Acrobat installed base across individuals, teams, and enterprises…

…Everyone is looking at AI Assistant in Acrobat. I certainly hope all of you are using it. It should make your lives more effective. Not just for insight, we think that there’s a lot of opportunity for monetization of insight for AI Assistant on our core base of Acrobat users but also, for the first time, doing consumption-based value. So the hundreds of millions of monthly active users of Reader will also be able to get access to AI Assistant and purchase an add-on pack there, too. So it’s a really broad base to look at how we monetize that.
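To make the mechanics concrete, here’s a minimal sketch of the general retrieval pattern behind document chatbots like AI Assistant. This is an illustrative pattern of my own, not Adobe’s actual implementation; the naive keyword scoring stands in for the embeddings a production system would use, and `ask_llm` is a placeholder.

```python
# Illustrative retrieval-over-PDF pattern (NOT Adobe's implementation).
# Requires: pip install pypdf
from pypdf import PdfReader

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real LLM API call of your choice.
    return f"[LLM would answer here, given {len(prompt)} chars of context]"

def pdf_chunks(path: str, chunk_size: int = 1000) -> list[str]:
    """Extract text from every page and split it into fixed-size chunks."""
    text = " ".join(page.extract_text() or "" for page in PdfReader(path).pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def top_chunks(chunks: list[str], question: str, k: int = 3) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use embeddings instead."""
    q_words = set(question.lower().split())
    return sorted(chunks, key=lambda c: -len(q_words & set(c.lower().split())))[:k]

def answer(path: str, question: str) -> str:
    context = "\n---\n".join(top_chunks(pdf_chunks(path), question))
    prompt = f"Answer using only this document excerpt:\n{context}\n\nQ: {question}"
    return ask_llm(prompt)

# Example: answer("contract.pdf", "What is the termination notice period?")
```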

Adobe’s generative AI model for creative work, Adobe Firefly, has been released for around a year and has been integrated into Creative Cloud and within Adobe Express; users of Creative Cloud and Adobe Express have generated >6.5 billion creative assets to date (was 4.5 billion in 2023 Q3) across images, vectors, designs, and text effects; Firefly has a web-based interface which has seen tremendous adoption; enterprises can now embed Firefly into their own workflows through Firefly Services, which is commercially safe for enterprises to use

Adobe Express is inspiring millions of users of all skill levels to design more quickly and easily than ever before. In the year since we announced and released Adobe Firefly, our creative generative AI model, we have aggressively integrated this functionality into both our Creative Cloud flagship applications and more recently, Adobe Express, delighting millions of users who have generated over 6.5 billion assets to date.

In addition to creating proprietary foundation models, Firefly includes a web-based interface for ideation and rapid prototyping, which has seen tremendous adoption. We also recently introduced Firefly Services, an AI platform which enables every enterprise to embed and extend our technology into their creative production and marketing workflows. Firefly Services is currently powered by our commercially safe models and includes the ability for enterprises to create their own custom models by providing their proprietary data sets as well as to embed this functionality through APIs into their e-mail, media placement, social, and web creation process…

…The 6.5 billion assets generated to date include images, vectors, designs, and text effects.
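For a sense of what “embedding this functionality through APIs” might look like in an enterprise workflow, here’s a hypothetical sketch. The endpoint URL, payload fields, and auth scheme are illustrative assumptions, not Adobe’s documented Firefly Services API.

```python
# Hypothetical sketch of embedding an image-generation API into a marketing
# workflow. The endpoint, payload fields, and auth header below are
# illustrative assumptions, not Adobe's documented Firefly Services API.
import requests

def generate_asset(prompt: str, api_key: str) -> bytes:
    """Request one on-brand image variation from a generative-image endpoint."""
    resp = requests.post(
        "https://api.example.com/v1/images/generate",  # placeholder URL
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "size": {"width": 1024, "height": 1024}},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content

# A campaign workflow could loop over copy variations to produce many assets:
# for variant in ["summer sale, beach theme", "summer sale, city theme"]:
#     image = generate_asset(variant, api_key="...")
```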

IBM is an early adopter of Firefly and has used it to generate marketing assets much faster than before and that have produced much higher engagement

Early adopters like IBM are putting Firefly at the center of their content creation processes. IBM used Adobe Firefly to generate 200 campaign assets and over 1,000 marketing variations in moments rather than months. The campaign drove 26x higher engagement than its benchmark and reached more key audiences.

Firefly is now available in mobile workflows through the Adobe Express mobile app beta and has a first-of-its-kind integration with TikTok’s creative assistant; the introduction of Firefly for enterprises has helped Adobe win enterprise clients in the quarter

The launch of the new Adobe Express mobile app beta brings the magic of Adobe Firefly AI models directly into mobile workflows. The first-of-its-kind integration with TikTok’s creative assistant makes the creation and optimization of social media content quicker, easier and more effective than ever before…

…  The introduction of Firefly services for enterprises drove notable wins in the quarter, including Accenture, IPG, and Starbucks. Other key enterprise wins include AECOM, Capital Group, Dentsu, IBM, Nintendo, and RR Donnelley.

During 2023 Q4 (FY2024 Q1), Adobe’s management saw the highest adoption of Firefly-powered tools in Photoshop since the release of Generative Fill in May 2023

Generative Fill in Photoshop continues to empower creators to create in new ways and accelerate image editing workflows. Q1 saw the highest adoption of Firefly-powered tools in Photoshop since the release of Generative Fill in May 2023, with customers adopting these features across desktop, web and most recently, iPad, which added Generative Fill and Generative Expand in December.

Adobe’s management expects Adobe’s AI-powered product features to drive an acceleration in the company’s annual recurring revenue (ARR) in the second half of the year; management thinks the growth drivers are very clear

You can expect to see the product advances in Express with Firefly on mobile, Firefly services and AI Assistant in Acrobat drive ARR acceleration in the second half of the year…

…As we look specifically at Creative Cloud, I just want to sort of make sure everyone takes a step back and looks at our strategy to accelerate the business because I think the growth drivers here are very clear. We are focused on expanding access to users with things like Express on mobile. We want to introduce new offers across the business with things like AI Assistant and also existing capabilities for Firefly coming into our core Firefly, our core Photoshop and Illustrator and flagship applications. We want to access new budget pools with the introduction of Firefly services and GenStudio as we talked about…

…And as we enter the back half of the year, we have capabilities for Creative Cloud pricing with Firefly that have already started rolling out late last year as we talked about, and we’ll be incrementally rolling out throughout the year. We’re ramping Firefly services and Express in enterprise. As we talked about, we saw a very good beginning of that rollout at the — toward the end of Q1. We also expect to see the second half ramping with Express Mobile and AI Assistant coming through. So we have a lot of the back-end capabilities set up so that we can start monetizing these new features, which are still largely in beta starting in Q3 and beyond…

…We are very excited about all the innovation that’s coming out, that’s just starting to ramp in terms of monetization and/or still in beta on the Creative Cloud side. We expect that to come out in Q3 and we’ll start our monetization there. So we continue to feel very confident about the second half acceleration of Creative Cloud…

…Usage of Firefly capabilities in Photoshop was at an all-time high in Q1, Express exports more than doubling with the introduction of Express mobile in beta now, going to GA in the coming months, AI Assistant, Acrobat, same pack pattern. You can see that momentum as we look into the back half of the year. And from an enterprise standpoint, the performance in the business was really, really superb in Q1, strongest Q1 ever in the enterprise. So there’s a lot of fundamental components that we’re seeing around performance of the business that give us confidence as we look into the back half of the year.

Adobe’s management believes that the rollout of personalisation at scale has been limited by the ability of companies to create content variations, and this is where generative AI can help

Today, rollout of personalization at scale has been limited by the number of content variations you can create and the number of journeys you can deploy. We believe harnessing generative AI will be the next accelerant with Creative Cloud, Firefly services and GenStudio providing a comprehensive solution for the content supply chain and generative experience model automating the creation of personalized journeys.

Adobe’s management believes that AI augments human ingenuity and expands the company’s market opportunity

We believe that AI augments human ingenuity and expands our addressable market opportunity.

Adobe’s management is monetising Adobe’s AI product features in two main ways – via generative packs and via whole products – and they are progressing in line with expectations; management thinks that it’s still early days for Adobe in terms of monetising its AI product features

I think where there’s tremendous interest and where, if you look at it from an AI monetization, the 2 places that we’re monetizing extremely in line with our expectations, the first is as it relates to the Creative Cloud pricing that we’ve rolled out. And as you know, the generative packs are included for the most part in how people now buy Creative Cloud, that’s rolling out as expected. And the second place where we are monetizing it is in the entire enterprise as it relates to Content and GenStudio. And I’m really happy about how that’s monetizing it. And that’s a combination, Brent, of when we go into an enterprise for creation, whether we provide Creative Cloud or a combination of Express, what we are doing with asset management and AEM, Workflow as well as Firefly services to enable people to do custom models as well as APIs. We’re seeing way more monetization earlier, but again, very much in line with expected…

…As it relates to the monetization of AI, I think we’re in early stages as it relates to experimentation. So we’re looking at both what the value utilization is as well as experimentation. The value utilization is actually really positive for us. I think as it relates to the monetization and the experimentation, we have the generative packs, as you know, in Creative Cloud. I think you will see us more and more have that as part of the normal pricing and look at pricing, because that’s the way in which we want to continue to see people use it. I think in Acrobat, as you’ve seen, we are not actually using the generative packs. We’re going to be using more of an AI Assistant model, which is a monthly model. As it relates to the enterprise, we have both the ability to do custom models, which depends on how much content that they are creating as well as an API and metering. We’ve rolled that out and we started to sell that as part of our GenStudio solution.

Adobe’s management pushed out the enforcement of generative AI credit limits beyond April 2024 because Adobe is still in user-acquisition mode for its AI product features

[Question] You pushed out the enforcement of generative credit limits for some products beyond April that were originally expected sooner. What’s the thinking behind this decision? And what are you seeing thus far in terms of credit consumption and purchasing patterns of those credit packs? 

[Answer] In terms of the timing of the — when we start enforcing credits, don’t read anything into that other than right now we are still very much in acquisition mode. We want to bring a lot of users in. We want to get them using the products as much as possible. We want them coming back and using it…

…So right now, look, the primary point is about proliferation and usage. 

Adobe recently released generative AI capabilities in music composition, voice-dubbing, and lip-syncing; these capabilities will require a lot of generative AI credits from users

 In the last few weeks, we’ve done a couple of sneaks that could also be instructive. Last month, we snuck music composition where you can take any music track, you can give it a music type like hip-hop or orchestra or whatever, and it will transform that initial track into this new type of music. Just this morning, we snuck our ability to do auto-dubbing and lip-syncing where you give it a video of someone talking in, say, English and then you can translate it automatically to French or Spanish or Portuguese or whatever. As you can imagine, those actions will not take 1 credit. Those actions will be much more significant in terms of what they cost.
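Since Adobe hasn’t disclosed how many credits such actions will cost, here’s a purely hypothetical sketch of what consumption-based credit metering could look like; every action name and cost below is made up to illustrate the idea that heavier generations debit more credits.

```python
# Hypothetical generative-credit metering; action names and credit costs are
# made up to illustrate heavier generations debiting more credits.
CREDIT_COST = {
    "text_to_image": 1,          # a single image generation
    "generative_fill": 1,
    "music_transform": 10,       # heavier, multi-step generation
    "video_dub_and_lipsync": 25,
}

def debit(balance: int, action: str) -> int:
    """Deduct the credit cost of one action from a user's balance."""
    cost = CREDIT_COST[action]
    if balance < cost:
        raise ValueError(f"Insufficient credits for {action!r}: need {cost}")
    return balance - cost

balance = debit(100, "video_dub_and_lipsync")  # 100 -> 75 credits remaining
```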

Adobe’s management thinks that developments in generative AI models for video, such as the recent release of Sora by OpenAI, are a net positive for Adobe; Adobe is also developing its own generative AI video models and will be releasing them later this year

[Question] Clearly, a lot of news around video creation using generative AI during the quarter, of course, with the announcement of Sora. Maybe the question for you folks is can we just talk a little bit about how you think about the market impact that generative AI can have in the video editing market and how maybe Firefly can participate in that trend?

[Answer] So really great advances, but net-net, video is going to create even more of a need for editing applications in order to truly take advantage of generative AI…

…We see the proliferation of video models to be a very good thing for Adobe. We’re going to work with OpenAI around Sora. You’re going to see us obviously developing our own model. You’re going to see others develop their model. All of that creates a tailwind because the more people generate video clips, the more they need to edit that content, right? So whether it’s Premiere or After Effects or Express, they have to assemble those clips. They have to color correct those clips. They have to tone-match. They have to enable transitions. So we’re excited about what we’re building, but we’re just as excited about the partnerships that we see with OpenAI and others coming down this path. And if you take a step back, you should expect to see more from us in the weeks ahead with imaging and vector, design, text effects, in the months ahead with audio and video and 3D. We’re very excited about what all of this means, not just for the models, but for our APIs and our tools.

Adobe’s management thinks that Adobe is in a great position to attract talent for technical AI work because they believe that the company has one of the best AI research labs and can provide access to AI computing hardware; management also thinks that Adobe is in a great position to attract talent for AI sales

And so that’s not just an Adobe perspective, but it’s playing out, obviously, in the enterprises as they look at what are the models that they can consider using for production workflows. We’re the only one with the full suite of capabilities that they can do. It’s a really unique position to be in. But it’s also being noticed by the research community, right? And as the community starts looking at places, if I’m a PhD that wants to go work in a particular environment, I start to ask myself the question of which environment do I want to pick. And a lot of people want to do AI in a responsible way. And that has been a very, very good opportunity for us to bring in amazing talent. So we are investing. We do believe that we have the best — one of the best, if not the best, research labs around imaging, around video, around audio, around 3D, and we’re going to continue to attract that talent very quickly. We’ve already talked about we have the broadest set of creative models for imaging, for vector, for design, for audio, for 3D, for video, for fonts and text effects. And so this gives us a broad surface area to bring people in. And that momentum that starts with people coming in has been great.

The second part of this, too, is managing access to GPUs while maintaining our margins. We’ve been able to sort of manage our cost structure in a way that brings in the talent and gives them the necessary GPUs to do their best work…

…Regarding the sales positions in enterprises. In enterprise, we’re in a strong position because what we — this area of customer experience management, it remains a clear imperative for enterprise customers. Everybody is investing in personalization at scale and the content supply chain. These help drive both growth and profitability. So when you look at these areas, these, from an enterprise perspective, these are a must-have. This is not a nice-to-have. And that’s helping us really attract the right kind of talent. We just onboarded, this week, a VP of Sales who had prior experience, a lot of experience in Cisco and Salesforce, et cetera.

Adobe’s management believes that Adobe’s tools will be in demand even in an AI dominated world and will not be automated away

[Question] I just wanted to give you an opportunity to debunk this hypothesis that is going around that AI, it is generating videos and pictures, but the next step is, it’s going to do the actual editing and put Premiere Pro out of use or whatnot. So that is probably the existential threat that people are debating.

[Answer] So as it relates to generative content, I’m going to sort of break it up into 2 parts. One is around the tooling and how you create the content and the second is around automation associated with the content…

…So I think the core part of this is that as more of this content gets created, you need more toolability. The best models are going to be the models that are safe to use and have control built in from the ground up. And I think we have the best controls of anyone in the industry. And they need to be able to be done in an automated fashion that can embed into your workflow.

Adobe’s management believes that as generative AI models proliferate in society, the demand for Adobe’s products will increase partly because there will be a rise in the number of interfaces that people use for creative content

I think the first question that I hear across many folks is, hey, with the advent of AI and the increase in the number of models that people are seeing, whether they be image models or video models, does that mean that the number of seats, both for Adobe and in the world, do they increase? Or do they decrease? To me, there’s no question in my mind that when you talk about the models and interfaces that people will use to do creative content, that the number of interfaces will increase. So Adobe has to go leverage that massive opportunity. But big picture, models will only cause more opportunity for interfaces. And I think we’re uniquely qualified to engage in that, so that’s the first one.

Adobe’s management wants Adobe to work with many different AI models, even those from third-parties

Do we only leverage the Adobe model? Or is there a way in which we can leverage every other model that exists out there? Much like we did with plug-ins, with all of our Creative applications, any other model that’s out there, we will certainly provide ways to integrate that into our applications, so anybody who’s using our application benefits not just from our model creation but from any other model creation that’s out there…

…But long term certainly, as I’ve said with our partnerships, we will have the ability for Adobe, in our interfaces, to leverage any other model that’s out there, which again further expands our opportunity.

MongoDB (NASDAQ: MDB)

MongoDB’s management thinks that AI will be a long-term growth driver for the company, but it’s still early days; management sees three layers to the AI stack – the first being compute and LLMs (large language models), the second being the fine-tuning of models and building of AI applications, and the third being the deployment and running of those applications – and most of the AI spend today is at the first layer where MongoDB does not compete; MongoDB’s customers are still at the experimentation and prototyping stages of building their initial AI applications and management expects the customers to take time to move up to the second and third layers; management believes that MongoDB will benefit when customers start building AI applications

While I strongly believe that AI will be a significant driver of long-term growth for MongoDB, we are in the early days of AI, akin to the dial-up phase of the Internet era. To put things in context, it’s important to understand that there are 3 layers to the AI stack. The first layer is the underlying compute and LLMs, the second layer is the fine-tuning of models and building of AI applications, and the third layer is deploying and running applications that end users interact with. MongoDB’s strategy is to operate at the second and third layers to enable customers to build AI applications by using their own proprietary data together with any LLM, closed or open source, on any computing infrastructure.

Today, the vast majority of AI spend is happening in the first layer, that is, investments in compute to train and run LLMs, neither of which are areas in which we compete. Our enterprise customers today are still largely in the experimentation and prototyping stages of building their initial AI applications, first focused on driving efficiencies by automating existing workflows. We expect that it will take time for enterprises to deploy production workloads at scale. However, as organizations look to realize the full benefit of these AI investments, they will turn to companies like MongoDB, offering differentiated capabilities in the upper layers of the stack. Similar to what happened in the Internet era, when value accrued over time to companies offering services and applications leveraging the built-out Internet infrastructure, platforms like MongoDB will benefit as customers build AI applications to drive meaningful operating efficiencies, create compelling customer experiences and pursue new growth opportunities…

…While it’s early days, we expect that AI will not only support the overall growth of the market, but also compel customers to revisit both their legacy workloads and build more ambitious applications. This will allow us to win more new and existing workloads and to ultimately establish MongoDB as a standard in enterprise accounts.
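For a sense of what building at the second and third layers on MongoDB can look like, here’s a minimal retrieval sketch using Atlas Vector Search. It assumes an Atlas collection with a vector index named vector_index on an embedding field; the connection string and the embed() helper are placeholders for whatever cluster and embedding model you use.

```python
# Minimal sketch: retrieve proprietary documents to ground an LLM prompt
# using MongoDB Atlas Vector Search. Assumes a vector index named
# "vector_index" on field "embedding"; embed() is a placeholder.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<cluster-uri>")  # placeholder URI
docs = client["support"]["articles"]

def embed(text: str) -> list[float]:
    # Placeholder: call your embedding model of choice here. The vector
    # dimension must match the index definition.
    return [0.0] * 1536

def retrieve(question: str, k: int = 5) -> list[dict]:
    """Return the k articles whose embeddings are closest to the question."""
    return list(docs.aggregate([
        {"$vectorSearch": {
            "index": "vector_index",
            "path": "embedding",
            "queryVector": embed(question),
            "numCandidates": 100,
            "limit": k,
        }},
        {"$project": {"title": 1, "body": 1, "_id": 0}},
    ]))

# The retrieved text can then be passed to any LLM, closed or open source,
# to ground its answer in the company's own data.
```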

MongoDB’s management is already seeing the company’s platform resonate with AI startups that are building applications across wide use cases, and this gives management confidence that MongoDB is a good fit for sophisticated AI workloads

We already see our platform resonating with innovative AI startups building exciting applications for use cases such as real-time patient diagnostics for personalized medicine, cyber threat data analysis for risk mitigation, predictive maintenance for maritime fleets and auto-generated animations for personalized marketing campaigns…

…we do see some really interesting start-ups who are building on top of MongoDB. So it gives us confidence about our platform fit for these sophisticated workloads. 

There are three elements that are important when migrating from a relational database to a non-relational database and MongoDB’s current relational migrator offering helps automate the first two elements; the third element – rewriting the application code – is manually intensive and management believes that generative AI can help to tremendously improve the experience there

There are 3 elements to migrating an application: transforming the schema, moving the data and rewriting the application code. Our current relational migrator offering is designed to automate large parts of the first 2 elements, but rewriting application code is the most manually intensive element. Gen AI holds tremendous promise to meaningfully reduce the cost and time of rewriting application code. We will continue building AI capabilities into relational migrator, but our view is that the solution will be a mix of products and services.
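To illustrate the first element, transforming the schema, here’s a simple sketch of how rows from two hypothetical relational tables could be reshaped into one MongoDB document; the table and field names are made up for the example.

```python
# Illustrative schema transformation: rows from a relational "orders" table
# and its child "order_items" table become one embedded MongoDB document.
# Table and field names are hypothetical.
order_row = {"order_id": 7, "customer": "ACME", "placed": "2024-03-01"}
item_rows = [
    {"order_id": 7, "sku": "A-100", "qty": 2},
    {"order_id": 7, "sku": "B-250", "qty": 1},
]

order_doc = {
    "_id": order_row["order_id"],
    "customer": order_row["customer"],
    "placed": order_row["placed"],
    # The one-to-many join disappears: child rows embed as an array.
    "items": [{"sku": r["sku"], "qty": r["qty"]} for r in item_rows],
}
# collection.insert_one(order_doc) would store it. Application code that
# previously issued a JOIN is what then needs rewriting, the step where
# MongoDB sees promise for generative AI.
```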

Samsung Electronics’ Digital Appliances division migrated from its previous MySQL database to MongoDB Atlas; the Samsung Smart Home Service can leverage MongoDB’s document database model to collect real-time data for training AI services; the migration improved response times by >50% and the latency was reduced from 3 seconds to 18 milliseconds

Samsung Electronics’ Digital Appliances division transitioned from their previous MySQL database to MongoDB Atlas to manage clients’ data more effectively. By leveraging MongoDB’s document model, Samsung’s Smart Home Service can collect real-time data from the team’s AI-powered home appliances and use it for a variety of data-driven initiatives such as training AI services. Their migration to MongoDB Atlas improved response times by more than 50%, and latency was reduced from 3 seconds to 18 milliseconds, significantly improving availability and developer productivity.

MongoDB’s management believes that the performance and cost of AI applications are still not up to mark, using ChatGPT as an example

And also these technologies are maturing, but from both a performance and from a cost point of view, if you have played with ChatGPT or any of the other chatbots out there or large language models, you’ll know that the performance of these applications is to get a response time in the 1 to 2 to 3 seconds, depending on the type of question you’re asking. And so naturally, a chatbot is a very simple but easy to understand use case, but to embed that technology into a sophisticated application, making real-time decisions based on real-time data, the performance and to some degree, the cost of these architectures are still not there…

…The performance of some of these systems is — I would classify as okay, not great. The cost of inference is quite expensive. So people have to be quite careful about the types of applications they deploy.

MongoDB’s management thinks that this year is when MongoDB’s customers will start rolling out a few AI applications and learn; it will be at least another year before the positive impacts of AI on MongoDB’s business really start showing up

Customers are still in the learning phase. They’re — they’re experimenting, they’re prototyping. But I would say you’re not seeing a lot of customers really deploy AI applications at scale. So I think it’s going to take them — I would say, this year is a year where they’re going to probably roll out a few applications, learn…

… I think it’s going to show up in a business when people are deploying AI apps at scale, right? So I think that’s going to be at least another year.

MongoDB’s management believes that the company is very well-positioned to capture AI application workloads because of the technologies underneath its platform and because it is capable of working with a wide range of AI models

We feel very good about our positioning because from an architecture point of view, the document model, the flexible schema, the ability to handle real-time data, performance at scale, the unified platform, the ability to handle data, metadata and vector data with the same query language, same semantics, et cetera, is something that makes us very, very attractive…

… We feel like we’re well positioned. We feel that people really resonate with the unified platform, one way to handle data, metadata and vector data; that we are open and composable; that we integrate with not only all the different LLMs, we are integrated with different embedding models; and we also essentially integrate with some of the emerging application frameworks that developers want to use. So we think we’re well positioned

MongoDB’s management is seeing that AI-related decisions are being made at the senior levels of a company, and so MongoDB is engaging with customers at that senior level

The other thing that we’re finding is unlike a typical sale where someone is deciding to either build a new workload or modernize a workload. The AI decision is more of a central decision — centralized decision more than ever. So it allows us to actually go higher in the organization. So we’re actually engaging with customers at much more senior levels because, obviously, this is coming down as a top-down initiative.

MongoDB’s management is seeing the first wave of AI use cases among companies being focused on reducing costs, code generation, and increasing developer productivity

In regards to use cases, we’re seeing most customers focus on driving efficiencies in their business because their existing baseline of costs is well known. So it’s much easier for them to determine how much value they can derive by using some of these new AI technologies. So I see the first wave of applications being around reducing costs. You’ve seen some announcements by some customers focusing on things like customer support and customer service, where they really have been — they have found ways to dramatically reduce their cost. That’s not surprising to me. I think code generation and this increasing developer productivity is another area. I think those are going to be kind of 2 areas where there’s low-hanging fruit.

MongoDB’s management is seeing high interest in AI across almost every industry

In terms of across industries, I think it’s — obviously, there’s some constraints on some customers based on the regulated nature of their industry. But in general, we see basically high interest across almost every industry that we operate in.

Customers migrate off relational databases to MongoDB for three key reasons – (1) their data model has become brittle with relational architecture, (2) their legacy systems are not scaling properly, and (3) the costs have become high – and they are accompanied by a compelling event; customers also conduct the migration to make their data more AI-enabled

Even in Q4, we had a meaningful number of customers migrating off relational to MongoDB. They come in 3 categories of reasons why. First is that the data model has become so brittle with relational architecture that it is very hard to build new features and be responsive to their customers. And so they just feel like their ability to innovate has slowed down. The second reason is that the system is just not scaling or performing given the increased number of users or the large amount of data that they have to process, and they realize that they have to get off a legacy platform. And the third reason is just the cost of the underlying platform relative to the ROI of that application. So it typically falls in one of those three buckets. Sometimes customers may have all 3 or maybe 2 of the 3 that are driving that demand. And then there’s typically some compelling event. Maybe there’s some milestones they want to hit. Maybe there’s a renewal coming up with the incumbent vendor that’s driving them to potentially move off that vendor as quickly as possible…

… On top of the 3 reasons I gave you in terms of why people move, there’s now a fourth reason, which is enabling their data and their applications to be more AI-enabled. And so it’s not just moving to a more modern platform, but making them more AI-enabled. And so that’s also something that’s getting customers’ interest.

Okta (NASDAQ: OKTA)

Okta’s management has built a strong pipeline of products that are powered by Okta AI

We’re expanding on the world’s most robust and modern identity platform, and we have a strong pipeline of products and functionality powered by Okta AI.

Okta’s management believes that Spera (the company’s new acquisition) is going to help with threat protection; threat protection with Okta AI and the Spera product will be packaged and monetised as add-ons

And so you’re seeing customers really starting, as they lean in and do more with modern identity, they’re also at the same time saying, what is this class of tools and technologies and capabilities that are going to protect that? And that’s where offerings like Identity Threat Protection with Okta AI or the Spera product are really going to help. And so I think in terms of how we’re going to price and package and monetize these things, think of — they’re both additional capabilities with an additional licensing fee.

Salesforce (NYSE: CRM)

Salesforce’s management believes that the company is the world’s No.1 AI CRM

Salesforce is the world’s #1 AI CRM, #1 in sales, #1 in service, #1 in marketing, #1 data cloud, incredible.

In Salesforce’s management’s conversations with CEOs, they are hearing three things that the CEOs want – productivity, higher value customer relationships, and higher margins – and these things can happen through AI; Salesforce’s management thinks that company-leaders know they need to make major investments in AI right now

As I talk to CEOs around the world, they tell me, they want 3 things. You may have heard me say this already, but I’ll say it again. One, they want more productivity, and they’re going to get that productivity through the fundamental augmentation of their employees through artificial intelligence. It’s happening. It’s empirical. Number two is they want higher value customer relationships, which is also going to happen through this AI. And they want higher margins, which we are seeing empirically as well through the — when they use this artificial intelligence in these next-generation products. As we look at productivity, as we look at higher value customer relationships, as we look at higher margins, how do our customers get these things? How are they achieving these goals? It is AI. It is why every CEO and company knows they need to make major investments in AI right now.

Salesforce’s management thinks that the current AI moment will give companies an unprecedented level of intelligence and Salesforce’s Einstein 1 platform can help companies achieve this

And I believe this is the single most important moment in the history of the technology industry. It’s giving companies an unprecedented level of intelligence that will allow them to connect with their customers in a whole new way.  And with our Einstein 1 Platform, we’re helping out our customers transform for the AI future.

Salesforce’s management thinks that popular AI models are not trusted solutions for enterprises because they are trained on amalgamated public data and could hallucinate, providing customers with services that do not exist; this was the exact problem faced by an airline recently, and the airline was a Salesforce customer who did not want to work with Salesforce’s AI technologies

The truth is that these AI models are all trained on amalgamated public data. You all understand that. You’ve all seen the New York Times lawsuit of OpenAI or others who are really going to take, hey, this is all — this amalgamated stolen public data, much of it used without permission, unlicensed, but amalgamated into these single consolidated data stores…

These AI models, well, they could be considered very confident liars, producing misinformation, hallucinations. Hallucinations are not a feature, okay?…

…And there’s a danger though for companies, for enterprises, for our customers that these are not trusted solutions. And let me point out why that is, especially for companies who are in regulated markets. Why this is a big, big deal. These models don’t know anything about the company’s customer relationships and, in some cases, are just making it up. Enterprises need to have the same capabilities that are captivating consumers, those amazing things, but they need to have it with trust and they need to have it with security, and it’s not easy. Look, we all read the story. Now it just happened last week. An airline chatbot was prompted by a passenger to book a flight with a 90-day refund window. It turns out the chatbot, running on one of these big models (we won’t have to use any brand names here, we all know who it was), hallucinated the option. It did not exist…

…The airline said, “Oh, listen, that was just the chatbot. It gets that way some time. We’re so sorry — you know what, that’s just a separate technical entity, a separate legal entity and the airline, “We can’t — we’re not going to hold liability for that.” Well, guess what? That defense did not work in a court of law. The court of law said that, that AI chatbot that made up that incredible new policy for that company, well, that company was going to be held responsible, liable for that policy, that they were going to be held liable for the work of that chatbot. Just as they would for a human employee, they were being held liable for a digital employee…

…And that story I told you on the script. When I saw that last week, I’m like, I’m putting this in the script, that this company, which is a great company and a customer of ours, but did not use our technology, went out there and used some kind of rogue AI that they picked off the Internet. Some engineer just cobbled it together, hooked it up, and then it started just spewing these hallucinations and falsehoods around their loyalty program, and the courts are holding them liable. Good. Let every CEO wake up and realize, we are on the verge of one of the greatest transformations in the history of technology, but trust must be our highest value.

Salesforce’s management believes that there are three essential components for enterprises to deliver trusted AI experiences, (1) a compelling user interface, (2) a high-quality AI model, and (3) data and metadata; management thinks that Salesforce excels in all three components; management has found that Salesforce customers who are experts on AI have realised that it is the integration of AI models with data and metadata that is the important thing in powering AI experiences, and this is why customers are turning to Salesforce

The reality for every enterprise is that to deliver trusted AI experiences, you need these 3 essential components now.

You need that compelling user interface. There’s no question, a natural and effortless experience. And at Salesforce, we have some of the most intuitive user interfaces that deliver insights and intelligence across sales and service and marketing and commerce and industries. Many of you are on Slack right now. Many of you are on Tableau. Many of you are on MuleSoft, or one of our other products.

Okay. Now what else do you need? Number two, you need a world-class AI model. And now we know there are many, many models available. Just go to Hugging Face, which is a company that we’re an investor in, or look at all the other models. And by the way, not only are there thousands of models right now, but there are tens of thousands, hundreds of thousands of models coming. And all the models that are available today will be obsolete 12 months from now. So we have to have an open, extensible and trusted framework inside Salesforce to be receptacles for these models. That’s why Einstein 1 is so important. Then you have to be able to use these AI models: the ones that Salesforce is developing, or these public models on Hugging Face or other things, or even bring your own model. Customers are even making their own models, fantastic. Of course, we have great partnerships with OpenAI, with Anthropic, with Cohere, and with many other AI models. This is the second key component. One is the UI, the second is the model, all right?…

…Now we’re in the enterprise. In the enterprise, you need deep integration of data and metadata for the AI to understand and deliver the critical insights and intelligence that customers need across their business, across sales, service, marketing, commerce, whatever it is. That deep integration of the data and metadata, that’s not so easy. That’s not just some amalgamated, stolen public data set. In the enterprise, you need that deep integration of data and metadata. Oh, that’s what Salesforce does. We are a deep integration of data and metadata. That is why it’s very, very exciting…

…And they try to stitch together a variety of AI tools and copilots and this and that and whatever. I’ve had so many funny conversations with so many customers that come to me as experts in AI. And then I just say to them, but how are you going to deliver this experience? And then finally, they realize, “Oh, I need the deep integration with the data and the metadata.” The reason why the metadata is so important is because it describes the data. That’s why so many companies are turning to Salesforce for their AI transformation. Only Salesforce offers these critical layers of AI for our customers: the UI, the model and the deep integration of the data and the metadata that make the AI smart and intelligent and insightful, and without the hallucinations and without all the other problems. For more than 2 decades, we’ve been trusted with our customers’ data and metadata. And we have a lot of it.

Salesforce’s management believes that most AI models available today will be obsolete in 12 months’ time, and that a flood of new AI models is coming soon – because of this, it’s important for Salesforce to have an open, extensible framework to work with all kinds of models, and this is where Einstein 1 has an important role to play

And by the way, not only are there thousands of models right now, but there are tens of thousands, hundreds of thousands of models coming. And all the models that are available today will be obsolete 12 months from now. So we have to have an open, extensible and trusted framework inside Salesforce to be receptacles for these models. That’s why Einstein 1 is so important.
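
To make the “open, extensible framework” idea concrete, here is a minimal sketch of the pattern in Python: the application codes against one small interface, and models (vendor-hosted or a customer’s own) plug in behind it. The class names, registry and endpoint below are inventions for illustration, not Salesforce’s actual Einstein 1 APIs.

```python
from typing import Protocol

class TextModel(Protocol):
    """The one interface the application layer depends on."""
    def generate(self, prompt: str) -> str: ...

class VendorAModel:
    def generate(self, prompt: str) -> str:
        # A real adapter would call the vendor's hosted API here.
        return "(vendor A draft reply)"

class CustomerOwnModel:
    """The 'bring your own model' case: a customer-hosted endpoint."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint  # hypothetical inference endpoint
    def generate(self, prompt: str) -> str:
        # A real adapter would POST the prompt to self.endpoint.
        return "(customer model draft reply)"

MODELS: dict[str, TextModel] = {
    "vendor-a": VendorAModel(),
    "byom": CustomerOwnModel("https://models.example.com/v1"),
}

def summarize_case(model_name: str, case_text: str) -> str:
    # The application layer stays unchanged when models are swapped out,
    # which matters if today's models are obsolete 12 months from now.
    return MODELS[model_name].generate(f"Summarize this case:\n{case_text}")

print(summarize_case("byom", "Customer reports login failures since Tuesday."))
```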

Salesforce’s management believes that data is even more important for AI than chips, and this is why management is so excited about Salesforce: Because the company has one of the largest data repositories in the world for its customers

I love NVIDIA, by the way, and what Jensen has done is amazing, and they are delivering, very much in the era of the gold rush, the Levi’s jeans to the gold miners. But we all know where the gold is: the data. The gold is the data. And that’s why we’re so excited about Salesforce, because we are one of the very largest repositories of enterprise data and metadata in the world for our customers. And customers are going to start to realize this right now…

…For more than 2 decades, we’ve been trusted with our customers’ data and metadata. And we have a lot of it.

There is a lot of trapped data within Salesforce’s customers, which is hindering their AI work; Salesforce’s Data Cloud helps to integrate all the disparate data sources, and it is why the service is Salesforce’s fastest-growing product ever; Data Cloud is now integrated across the entire Salesforce platform, and management is totally focused on Data Cloud in FY2025; using Data Cloud adds huge value for customers who are using other Salesforce services; Data Cloud and Einstein 1 are built on the same metadata framework – which allows customer apps to securely access and understand the data that is on Salesforce’s platform – and this prevents hallucinations and is something only Salesforce can do

Many of our customers also have islands of trapped data across thousands of systems…

… Trapped data is all over the enterprise. Now where trapped data could be: you might be using a great company like Snowflake, and I love Snowflake, or Databricks or Microsoft, or you might be using an Amazon system or even something like Google, what do you say, BigQuery, all these various databases…

…if you’re using Salesforce, Sales Cloud, Service Cloud, Tableau, Slack, we need to be able to, through our zero copy, automatically integrate all of those systems into our Data Cloud and then seamlessly provide that data back into these amazing tools. And that is what we are doing, because so many of our customers have islands of trapped data in all of these systems, but the AI is not going to work unless it has the seamless, amalgamated experience of data and metadata, and that’s why our Data Cloud is like a rocket ship.

The entire AI revolution is built on this foundation of data, and it’s why we’re so excited about this incredible Data Cloud. It’s now deeply integrated into all of our apps, into our entire platform. It’s self-service for all of our customers to turn on. It is our fastest-growing product ever. It’s our total focus for fiscal year ’25.

With Salesforce Data Cloud, we can unlock this trapped data and bring together all of a customer’s business and customer data into one place for AI, all while keeping their data safe and secure, and it’s all running inside our Einstein Trust Layer. We’ve deployed it to all of our customers, and we have now unleashed the copilot as well to all of our customers, deeply built on our data and metadata. And while other copilots just sit and spin because they can’t figure out what the data means (if you haven’t seen the demonstrations, you can see these copilots spin), when they use Salesforce, it all of a sudden becomes intelligent, and that is the core of the Einstein 1 platform: all of our apps, all of our AI capabilities, all of the customer data in one deeply integrated, trusted metadata platform. And that’s why we’re seeing incredible demand for Data Cloud. Data Cloud brings it all together…

…We’ve never seen traction like this of a new product because you can just easily turn on the Data Cloud and it adds huge value to Sales Cloud. It adds huge value to Service Cloud, the Marketing Cloud and the CDP…

… Because Data Cloud and all of Einstein 1 is built on our metadata framework, as I just described, every customer app can securely access and understand the data and use any model, any UI, any workflow, and integrate with the platform. That means less complexity, more flexibility, faster innovation, but also we want to say goodbye to these hallucinations. We want to say goodbye to all of these crazy experiences we’re having with these bots that don’t know what they’re doing because they have no data or metadata, okay? Or the data and metadata that they have is productivity data, the highest-level data, not deeply integrated data. So only Salesforce can do this.
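
A small sketch of why metadata matters for grounding: if the prompt carries a description of what each field means, plus the actual field values, the model can be instructed to answer only from that schema instead of inventing policies. The field names and prompt format below are illustrative, not Salesforce’s actual implementation.

```python
# Metadata: descriptions of what each field means (the "describes the data" point).
fields = {
    "Opportunity.StageName": "Current stage of the sales deal, e.g. 'Negotiation'",
    "Case.Escalated": "True if the support case has been escalated",
    "Contact.Email": "Primary email address for the customer contact",
}

# Data: the actual field values for one customer record.
record = {"Opportunity.StageName": "Negotiation", "Case.Escalated": True}

def grounded_prompt(question: str) -> str:
    schema = "\n".join(f"- {name}: {desc}" for name, desc in fields.items())
    data = "\n".join(f"- {k} = {v}" for k, v in record.items())
    # Instructing the model to answer only from the supplied fields is a
    # common mitigation for hallucinated policies and features.
    return (f"Answer using ONLY the fields below; say 'unknown' otherwise.\n"
            f"Schema:\n{schema}\nData:\n{data}\nQuestion: {question}")

print(grounded_prompt("Has this customer's case been escalated?"))
```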

Payroll company ADP has been a long-time customer of Salesforce but wanted to evaluate other AI solutions; ADP realised that the data and metadata component was lacking in other AI solutions and it is something only Salesforce can provide

We all know the HR and payroll leader, ADP, and their incredible new CEO, [indiscernible], amazing. ADP has been a great Sales Cloud customer for 2 decades. They’ve used Einstein for years. They are one of the first customers we ever had…

…And the company wanted to transform customer service with AI to give their agents real-time insights, next best actions, auto-generated case summaries. But what I have to say to you, it was a little bit embarrassing: Salesforce was not #1 on their list. And I said to them, “How can that be? We’re the #1 Service Cloud. We’re #1 in this, #1 in that.” “No, we’re going to go evaluate this. We’re going to look at all the different solutions. We’re going to look at all the new AI models. We think we’re just going to hook this model up to this, and we’re going to do that.” And what was going to happen there sounded like a big Rube Goldberg invention. And so we had to go in, and we just wanted to partner with them and say, “All right, show us what you want to do. We’re going to work with you, we’re going to be trusted partners. Let’s go.”

But like a lot of our customers moving into AI, ADP realized it didn’t have a comprehensive, deeply integrated platform of data and metadata that could bring all of this together into a single source of truth. And then you get the incredible customer service. Then you get the results that you’re looking for. And it’s deeply integrated with their sales systems, with marketing and custom applications. And ADP discovered that only Salesforce can do this. We were able to show ADP how we could unlock trapped data with Data Cloud zero copy, and drive intelligence, productivity and efficiency for their sales team with Einstein to levels unimagined just a year ago.

Salesforce has a new copilot, Einstein Copilot, which management believes is the first conversational AI assistant that is truly trusted; Einstein Copilot can read across all the data and metadata in Salesforce’s platform to surface valuable sales actions to take, which is something human users cannot do; management believes that other copilots cannot do what Einstein Copilot currently can without deep data integration; management thinks that Einstein Copilot is a cut above other copilots

We’re now incredibly excited to work with all of our customers to take their AI to the next level with Einstein Copilot, which is going live tomorrow. Einstein Copilot, if you haven’t seen it, please come to TrailblazerDX next week. This is the first conversational AI assistant for the enterprise that’s truly trusted. It’s amazing. It can answer questions. It can summarize. It can create new content, dynamically automate tasks on behalf of the user, from a single consistent user experience embedded directly within our platform.

But let me tell you the one thing it can do that’s more important than all of that: it is able to read across all the data and metadata in our platform to get that insight instantly. And you’re going to see that. So the sales rep might ask Einstein Copilot which lead they should focus on, or what is the most important thing to do with this opportunity. And it may say, you need to resolve this customer’s case because this escalation has been around for a week, or you’d better go and answer that lead that came in on the Marketing Cloud if you want to move this opportunity forward, because it’s reading across the entire data set. That is something that individual users cannot do but the copilot can do. With access to the customer data and the metadata in Salesforce, including all this real-time data and website engagement, and the ability to read through the data set, Einstein Copilot has all the context to understand the question and surface the lead that has the highest value and likelihood to convert. And it can also instantly generate the action plan with the best steps to close the deal, such as suggesting optimal meeting times based on the lead’s and contacts’ known preferences, even drafting emails. If you haven’t seen the video that I put on my Twitter feed last night, there’s a 5-minute video that goes through all of these incredible things that it’s able to do. There’s never been an enterprise AI capability quite like it. It’s amazing…

… I assure you, without the deep integration of the data and the metadata across the entire platform within the copilot, they cannot do it. I assure you they cannot, because they don’t have the data and the metadata, which is so critical to making an AI assistant so successful…

And I encourage you to try the demos yourself, to put our copilot up against any other copilot. Because I’ll tell you that I’ve seen enterprise copilots from these other companies in action, and they just spin and spin and spin…

…I’ve used those copilots from the competitors and have not seen them work yet…

…Einstein is the only copilot with the ability to truly understand what’s going on with your customer relationships. It’s one conversational AI assistant, deeply connected to trusted customer data and metadata.
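
A toy illustration of the behaviour described above: an assistant that reads signals across several clouds can rank next-best actions that no single-app view would surface. The signal sources, actions and scores below are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # which cloud the signal came from
    action: str      # suggested next step
    score: float     # estimated value of taking the action

signals = [
    Signal("Service Cloud", "Resolve escalated case #4821 (open 7 days)", 0.92),
    Signal("Marketing Cloud", "Follow up on inbound lead from campaign", 0.81),
    Signal("Sales Cloud", "Schedule pricing call for open opportunity", 0.66),
]

# A human working in one app sees only that app's signals; an assistant
# reading across the unified data set can rank actions from all of them.
best = max(signals, key=lambda s: s.score)
print(f"Next best action ({best.source}): {best.action}")
```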

Einstein 1 is driving average sales price uplift among existing Salesforce customers, while also attracting new customers to Salesforce; Salesforce closed 1,300 Einstein deals in FY2024; Einstein 1 is showing strong early signs after being in the market for just 4-plus months

In fact, we continue to see significant average sales price uplift from existing customers who upgrade to Einstein 1 Edition. It’s also attracting new customers to Salesforce: 15% of the companies that purchased our Einstein 1 Edition in FY ’24 were net new logos…

… In FY ’24, we closed 1,300 Einstein deals, as more customers are leveraging our generative and predictive AI capabilities…

… I think the way to think about the price uplift moving to Einstein 1 Edition, which used to be Unlimited Edition+, is really about the value that we’re providing to our customers, because at the end of the day, our ability to get increased price is about the value that we’re going to provide. And so as customers start to ramp up their abilities on AI, ramp up their learnings and understand what it means for them economically, our ability to get price will be dictated by that. Early signs of that are pretty strong. We feel good about the progress we’ve seen. It’s only been in market for 4-plus months now in FY ’24, but we’re encouraged by what we’re seeing.

Slack now comes with AI search features; Salesforce’s management thinks Slack can become a conversational interface for any application

We just launched Slack AI with features like AI search, channel recaps and thread summaries to meet the enormous demand for embedded AI in the flow of work from customers like Australia Post and OpenAI. It’s amazing to see what Slack has accomplished in a decade. And frankly, it’s just the beginning; we have a great vision for the future of Slack as a conversational interface for any application.

Bajaj Finance in India is using Einstein for AI experiences, and in 2023 Q4, Bajaj became Salesforce’s second-largest Data Cloud customer globally

India continues to be a bright spot for us, growing new business at 35% year-over-year, and we continue to invest in the region to meet the needs of customers, including Bajaj Finance. I had the great opportunity to meet with their CEO, Rajeev Jain, in January, and a top priority for him was using Einstein to deliver predictive and generative AI across their entire lending business, which they run on Salesforce. In Q4, Bajaj became the second-largest Data Cloud customer globally, building their AI foundation on the Einstein 1 platform.

Salesforce’s management would be very surprised if other companies can match Salesforce’s level when it comes to AI

Because if you see anyone else being able to deliver on the promise of enterprise AI at the level of quality and scale and capability of Salesforce, I’ll be very surprised. 

Salesforce is deploying its own AI technologies internally and management is seeing the benefits

We are a big believer in Salesforce on Salesforce. We are deploying our own AI technology internally. Our sales teams are using it. Absolutely, we are seeing benefits right now. But the biggest benefit we’ve seen actually has been in our support operation, with case summaries and the ability to tap into a knowledge base faster, to get knowledge surfaced within the flow of work. And so it absolutely is part of our margin expansion strategy going forward, which is how do we leverage our own AI to drive more efficiencies in our business, to augment the work that’s being done in sales and in service and in marketing and even into our commerce efforts as well…

…We have to be customer #1 and use it, and I’m excited that we are.

Tencent (NASDAQ: TCEHY)

Tencent’s management thinks that its foundational AI model, Tencent Hunyuan, is now among the best large language models in China and worldwide; Hunyuan excels in multiturn conversations, logical inference and numerical reasoning; Hunyuan has 1 trillion parameters; Hunyuan is increasingly used by Tencent for copilot services in the company’s SaaS products; management’s focus for Hunyuan is on its fundamental text capabilities, with text-to-video as the next important area of development

Our Tencent Hunyuan foundation model is now among the top tier of large language models in China, with a notable strength in advanced logical reasoning…

… After deploying leading-edge technologies such as the mixture-of-experts (MoE) architecture, our foundation model, Tencent Hunyuan, is now achieving top-tier Chinese-language performance among large language models in China and worldwide. The enhanced Hunyuan excels particularly in multiturn conversations, logical inference and numerical reasoning, areas which have been challenging for large language models. We have scaled the model up to the 1 trillion parameter mark, leveraging the MoE architecture to enhance performance and reduce inference costs, and we are rapidly improving the model’s text-to-picture and text-to-video capabilities. We’re increasingly integrating Hunyuan to provide copilot services for enterprise SaaS products, including Tencent Meeting and Tencent Docs…

…Among our enterprise Software-as-a-Service products, we deployed AI for real-time content comprehension in Tencent Meeting, deployed AI for prompt-based document generation in Tencent Docs, and rolled out a paid customer acquisition tool for eCom…

… At this point in time, we are actually very focused on the text technology because this is actually the fundamentals of the model. And from text, we have built out text-to-picture; from text, we have built out text-to-video capabilities. And the next important evolution is actually what we have seen with [indiscernible], right? [indiscernible] has done an incredible job with text to a [ long ] video, and this is something which we would be developing in [ the next turn ]. While we continue to improve the fundamental text capability of Hunyuan, at the same time, we’ll be developing the text-to-video capability, because we actually think that this is very relevant to our core business, which is a content-driven business in the areas of short video, long video and games. And that’s the area in which we’ll be developing and moving our Hunyuan into.
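
Tencent doesn’t detail Hunyuan’s internals on the call, but the mixture-of-experts idea management cites can be sketched in a few lines. Everything below (sizes, routing, expert count) is a toy illustration, not Hunyuan’s actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

D_MODEL = 8        # hidden size of the token representation
N_EXPERTS = 4      # number of expert feed-forward networks
TOP_K = 2          # experts activated per token

# Each expert is a tiny feed-forward network (one weight matrix here).
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.1 for _ in range(N_EXPERTS)]
# The router scores how well each expert suits a given token.
router = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.1

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x through its top-k experts."""
    logits = x @ router                    # one score per expert
    top_k = np.argsort(logits)[-TOP_K:]    # pick the best-scoring experts
    weights = np.exp(logits[top_k])
    weights /= weights.sum()               # softmax over the chosen experts
    # Only TOP_K of N_EXPERTS experts run per token, which is why MoE models
    # can grow total parameters (e.g. toward a trillion) while keeping
    # inference cost closer to that of a much smaller dense model.
    return sum(w * np.tanh(x @ experts[i]) for w, i in zip(weights, top_k))

token = rng.standard_normal(D_MODEL)
print(moe_layer(token))
```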

Tencent’s management is developing new generative AI tools for internal content production; management thinks that the main benefit of using AI for internal content production is not cost reduction, but more rapid monetisation and thus, higher revenue generation

 And we are also developing new gen AI tools for effective content production internally…

…We are increasingly going to be deploying AI, including generative AI, in areas such as accelerating the creation of animated content, which is a big business for Tencent Video and a profitable business for Tencent Video, in terms of game content, as we discussed earlier, and potentially in terms of creating [ code ] in general. But the benefit will show up not in substantial cost reductions; it will show up in more rapid content creation, and therefore, more rapid monetization and revenue generation.

Tencent’s management is starting to see significant benefits to Tencent’s business results from deploying AI technology in the company’s businesses; the benefits are particularly clear in Tencent’s advertising business, especially in the short-term; Tencent has seen a 100% increase in click-through rates in the past 18 months in its advertising business through the use of AI

More generally, deploying AI technology in our existing businesses have begun to deliver significant revenue benefits. This is most obvious in our advertising business, where our AI-powered ad tech platform is contributing to more accurate ad targeting, higher ad click-through rates and thus, faster advertising revenue growth rates. We’re also seeing early stage business opportunities in providing AI services to Tencent Cloud customers…

…In terms of the AI short-term benefits, I think the financial benefits should be much more indexed towards the advertising side, because if you think about the size of our advertising business as, call it, RMB 100 billion [ a year ], then if you can just have a 10% increase, right, that’s RMB 10 billion, and mostly on profit, right? So that’s the scale of the benefits on the advertising side, especially as we see continued growth of our advertising business, and when we add in the Video Accounts e-commerce ecosystem, that just has a very long track of growth potential, and also the low ad load right now within Video Accounts.

But on the other hand, if you look at the cloud and business services customers, then you are really facing a relatively nascent market. You still have to sell to these customers. And we spend a lot of time working with all the customers in different industries and trying to figure out what’s the best way of leveraging AI for their business. And then you have to go through a long sales cycle. And at the same time, it’s competitive, because your competitors will actually come in and say, “Oh, they can also provide a similar service.” And despite the fact that we believe we have superior technology and product, it’s actually [ very careful ], and your competitors may actually come in and say they’re going to cut prices, even though they have an inferior product.

So all these things, the low margins, the intense competition and the long sales cycles of the 2B business, would actually come into play on that side of the business. So when you compare the two sides of the equation, you can actually clearly see that ramping up advertising is going to be much more profitable in the short term. Of course, we’ll continue to do both, right?…

… Martin gave the example of if we can improve click-through rates by 10%, then that’s CNY 10 billion in incremental revenue, probably CNY 8 billion in incremental gross operating profit. In reality, you should view 10% as being in the nature of a floor, not a ceiling. Facebook has seen substantially bigger improvements in click-through rates. For some of our most important inventories, we’ve actually seen our click-through rates increase by 100% in the past 18 months. So when we’re thinking about where the financial benefits of AI are, then it’s advertising click-through rates, and therefore advertising revenue, first and foremost, and that’s a very high flow-through business for us.
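
As a quick back-of-envelope check of the arithmetic management walks through, the sketch below assumes ad revenue scales roughly one-for-one with click-through rate and uses the figures cited on the call; both assumptions are simplifications.

```python
# Flow-through of an AI-driven click-through-rate (CTR) lift on Tencent's
# ad business, using management's own figures from the call.
ad_revenue_rmb_bn = 100   # ~RMB 100bn of annual ad revenue (management's figure)
ctr_lift = 0.10           # the 10% "floor" scenario
flow_through = 0.80       # ~RMB 8bn of RMB 10bn reaching gross profit, per the call

incremental_revenue = ad_revenue_rmb_bn * ctr_lift
incremental_profit = incremental_revenue * flow_through
print(f"Incremental revenue: RMB {incremental_revenue:.0f}bn")      # RMB 10bn
print(f"Incremental gross profit: RMB {incremental_profit:.0f}bn")  # RMB 8bn
```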

Tencent’s management believes that AI technology can be applied in its games business in terms of creating innovative gameplay as well as generating content in existing games, but these will take some time to manifest

In terms of the application of AI to games, then like many things, the boundary between [indiscernible] and reality is a function of how far forward [indiscernible] willing to look, and [ we’re willing to look very far ] forward. And all of the areas you mentioned, such as AI-powered [ NPCs ] and AI-accelerated graphical content generation and graphical asset generation, are areas that, over the years to come, not over the months to come, will benefit meaningfully from the deployment of AI. And I think it’s also fair to say that the game industry has always been a mixture of, on the one hand, innovation around gameplay techniques, and on the other hand, deployment of enhanced content, renewed content, into existing gameplay. And it’s reasonable to believe that AI will be most beneficial for the second of those activities. But one will continue to require very talented individuals and teams to focus on the first of those opportunities, which is the creation of innovative gameplay.

Veeva Systems (NYSE: VEEV)

Veeva’s management has seen very specialised AI models being used for some time – prior to the introduction of large language models to the consumer public – to help with drug discovery, especially in areas such as understanding protein folding

[Question] what are you seeing out of life sciences companies in terms of how AI is changing things. Whether that’s accelerating drug development, whether that’s more targeted marketing, maybe if you could walk us through kind of what those conversations would look like? And what sort of role you think you can play in those changes?

[Answer] I would say the most direct impact, and it’s been happening for a while, before large language models as well, is with AI in drug discovery. Very, very targeted AI models that can do things like protein folding and analyzing retina images, things like that. So this is very powerful, but very therapeutic-area specific, very close to the science in R&D, and there’s not just one AI model; there are multiple specialized AI models.

Veeva’s management has seen some experimentation going on with the use of large language models in improving general productivity in the life sciences industry

Then in terms of other areas, really, there’s a lot of experimentation with large language models. And what people look at it for are: a, can I just have general productivity for my people? Can they write an e-mail faster? Can they check their e-mail faster? Can they research some information faster? So that’s one thing that’s going on. Also, specific use cases like authoring: can I author a protocol faster? Can I author a regulatory document faster? Now faster is one thing, but it also has to be very accurate. So I would say there’s experimentation on that. There’s not yet broad production use on that. And certainly, for some of these critical things, there has to be a lot of quality control on it. So those are probably the two biggest use cases — really three: research, general productivity and authoring.

Veeva’s management has developed a product to make Veeva’s data platform extract data in a much faster way so that it works well with AI applications, but otherwise, the company has not invested in LLMs (large language models) because they are not as relevant in the company’s field

And then as far as our role, we’ve been doing some really heavy work over the last 2 years on something in our Vault platform that’s called the Direct Data API. And that’s a pretty revolutionary way of making the data come out of Vault in a consistent — transactionally consistent — manner much, much faster, roughly 100x faster than it happens now. That’s going to be critical for all kinds of AI applications on top, which we may develop, which our customers may develop, and we’re also utilizing that for some really fast system-to-system transfer between our different Vault families. So that’s been the biggest thing that we’ve done. We haven’t really invested heavily in large language models. So far, we just don’t see quite the application in our application areas; not to say that that wouldn’t change in the future.
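
Veeva doesn’t describe the Direct Data API’s interface on the call, so the sketch below only illustrates the general design trade-off management is pointing at: paging records through a query API requires many round trips, while a bulk, transactionally consistent file export shifts the work to a single large transfer. All endpoints and parameters here are hypothetical, not Veeva’s actual API.

```python
import requests

BASE = "https://vault.example.com/api"  # hypothetical host, for illustration only

def extract_via_paging(object_name: str, page_size: int = 1000) -> list:
    """Classic approach: O(records / page_size) HTTP round trips."""
    records, offset = [], 0
    while True:
        page = requests.get(
            f"{BASE}/query",
            params={"q": f"SELECT id FROM {object_name}",
                    "limit": page_size, "offset": offset},
        ).json()
        records += page["data"]
        if len(page["data"]) < page_size:
            return records
        offset += page_size

def extract_via_direct_files(snapshot_id: str) -> bytes:
    """Bulk approach: download one transactionally consistent snapshot file,
    turning thousands of small API calls into a single large transfer."""
    return requests.get(f"{BASE}/directdata/files/{snapshot_id}").content
```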

Veeva’s management thinks that the important thing for AI is data – AI models will be a commodity – and Veeva has the advantage in this

I would say we’re in a pretty good position, because the durable thing about AI is the data sources. The AI models will come on top, and that will be largely a tech commodity, but the control of and the access to the data sources, that’s pretty important, and that’s kind of where Veeva plays.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Meta Platforms, MongoDB, Okta, Salesforce, Starbucks, Tencent, and Veeva Systems. Holdings are subject to change at any time.

Stock Buybacks and Privatisations in Singapore’s Stock Market, China’s Property Market, What’s Next for Mario, & More

Earlier this week, on 11 March 2024, I was invited for a short interview on Money FM 89.3, Singapore’s first business and personal finance radio station, by Chua Tian Tian, the co-host of the station’s The Evening Runway show. We discussed a number of topics, including:

  • City Developments’ S$5.5 million share buyback on 8 March 2024 and the implications behind the company’s move (Hint: City Developments rarely conducts share buybacks, and this recent buyback happened at a time when the company’s price-to-book ratio is at 0.6, which is near a 10-year low)
  • Rumours on a privatisation deal for Japfa from its controlling shareholders (Hint: Japfa’s business has historically been cyclical and it appears that its business results are picking up after a rough few years; at the same time, the company’s valuation looks really low on the surface)
  • The improvement in Singapore’s business sentiment and what it means for Singapore-listed counters from the sectors with the most positive outlooks (Hint: A rising tide may not lift all boats)
  • What would it take for the Chinese property market to rebound (Hint: Demand for Chinese properties is collapsing while Chinese property developers are facing severe financial strain, leading to even less demand for Chinese properties)
  • What would a new Mario movie in 2026 mean for Nintendo (Hint: It’s likely to be a boon for Nintendo in the short run, but the long run impacts are less clear)

You can check out the recording of our conversation below!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Meta Platforms. Holdings are subject to change at any time.

The Latest Thoughts From American Technology Companies On AI (2023 Q4)

A collection of quotes on artificial intelligence, or AI, from the management teams of US-listed technology companies in the 2023 Q4 earnings season.

The way I see it, artificial intelligence (or AI) really leapt into the zeitgeist in late-2022 or early-2023 with the public introduction of DALL-E2 and ChatGPT. Both are provided by OpenAI and are software products that use AI to generate art and writing, respectively (and often at astounding quality). Since then, developments in AI have progressed at a breathtaking pace.

Meanwhile, the latest earnings season for the US stock market – for the fourth quarter of 2023 – is coming to its tail-end. I thought it would be useful to collate some of the interesting commentary I’ve come across in earnings conference calls, from the leaders of technology companies that I follow or have a vested interest in, on the topic of AI and how the technology could impact their industry and the business world writ large. This is an ongoing series; check out the earlier posts for the older commentary.

With that, here is the latest commentary, in no particular order:

Airbnb (NASDAQ: ABNB)

Airbnb’s management believes that AI will allow the company to develop the most innovative and personalised AI interfaces in the world, and the company recently acquired GamePlanner AI to do so; Airbnb’s management thinks that popular AI services today, such as ChatGPT, are underutilising the foundational models that power the services; GamePlanner AI is led by the co-founder and original developer of Apple’s Siri smart assistant

There is a new platform shift with AI, and it will allow us to do things we never could have imagined. While we’ve been using AI across our service for years, we believe we can become a leader in developing some of the most innovative and personalized AI interfaces in the world. In November, we accelerated our efforts with the acquisition of GamePlanner AI, a stealth AI company led by the co-founder and original developer of Siri. With these critical pieces in place, we’re now ready to expand beyond our core business. Now this will be a multiyear journey, and we will share more with you towards the end of this year…

…If you were to open, say, ChatGPT or Google, though the models are very powerful, the interface is really not an AI interface. It’s the same interface as the 2000s, in a sense, the 2010s. It’s a typical classical web interface. So we feel like the models, in a sense, are probably underutilized…

Airbnb’s management does not want to build foundational large language models – instead, they want to focus on the application layer

One way to think about AI is, let’s use a real-world metaphor. I mentioned we’re building a city. And in that city, we have infrastructure, like roads and bridges. And then on top of those roads and bridges, we have applications like cars. So Airbnb is not an infrastructure company. Infrastructure would be a large language model or, obviously, GPUs. So we’re not going to be investing in infrastructure. So we’re not going to be building a large language model. We’ll be relying on, obviously, OpenAI; Google creates models; Meta creates models. So those are really infrastructure companies. They’re really developing infrastructure. But where we can excel is on the application layer. And I believe that we can build one of the leading and most innovative AI interfaces ever created.

Airbnb’s management believes that the advent of generative AI represents a platform shift and it opens the probability of Airbnb becoming a cross-vertical company

Here’s another way of saying it. Take your phone and look at all the icons on your phone. Most of those apps have not fundamentally changed since the advent of generative AI. So what I think AI represents is the ultimate platform shift. We had the internet. We had mobile. Airbnb really rose during the rise of mobile. And the thing about a platform shift, as you know, is there is also a shift in power. There’s a shift of behavior. And so I think this is a zero-zero ball game, where Airbnb has a platform that was built for one vertical, the short-term space. And I think with AI, with generative AI and developing a leading AI interface, we can provide an experience that’s so much more personalized than anything you’ve ever seen before.

Imagine an app that you feel like knows you, like the ultimate concierge, an interface that is adaptive and evolving and changing in real-time, like no interface you’ve ever seen before. That would allow us to go from a single-vertical company to a cross-vertical company. Because one of the things that we’ve noticed is the largest tech companies aren’t in a single vertical. And we studied Amazon in the late ’90s, early 2000s, when they went from books to everything, or Apple when they launched the App Store. And these really large technology companies are horizontal platforms. And I think with AI and the work we’re doing around AI interfaces, I think that’s what you should expect of us.

Alphabet (NASDAQ: GOOG)

Alphabet’s Google Cloud segment saw accelerated growth in 2023 Q4 from generative AI

Cloud, which crossed $9 billion in revenues this quarter and saw accelerated growth driven by our GenAI and product leadership.

Alphabet closed 2023 by launching Gemini, a foundational AI model, which has state-of-the-art capabilities; Gemini Ultra is coming soon

We closed the year by launching the Gemini era, a new industry-leading series of models that will fuel the next generation of advances. Gemini is the first realization of the vision we had when we formed Google DeepMind, bringing together our 2 world-class research teams. It’s engineered to understand and combine text, images, audio, video and code in a natively multimodal way, and it can run on everything from mobile devices to data centers. Gemini gives us a great foundation. It’s already demonstrating state-of-the-art capabilities, and it’s only going to get better. Gemini Ultra is coming soon. The team is already working on the next versions and bringing it to our products.

Alphabet is already experimenting Gemini with Google Search; Search Generative Experience (SGE) saw its latency drop by 40% with Gemini

We are already experimenting with Gemini in Search, where it’s making our Search Generative Experience, or SGE, faster for users. We have seen a 40% reduction in latency in English in the U.S. 

Alphabet’s management thinks that SGE helps Google Search (1) answer new types of questions, (2) answer complex questions, and (3) surface more links; management believes that digital advertising will continue to play an important role in SGE; management has found that users find the ads placed above or below an AI overview of searches to be helpful; management knows what needs to be done to incorporate AI into the future experience of Google Search and they see AI assistants or agents as being an important component of Search in the future

By applying generative AI to Search, we are able to serve a wider range of information needs and answer new types of questions, including those that benefit from multiple perspectives. People are finding it particularly useful for more complex questions like comparisons or longer queries. It’s also helpful in areas where people are looking for deeper understanding, such as education or even gift ideas. We are improving satisfaction, including answers for more conversational and intricate queries. As I mentioned earlier, we are surfacing more links with SGE and linking to a wider range of sources on the results page, and we’ll continue to prioritize approaches that add value for our users and send valuable traffic to publishers…

…As we shared last quarter, Ads will continue to play an important role in the new search experience, and we’ll continue to experiment with new formats native to SGE. SGE is creating new opportunities for us to improve commercial journeys for people by showing relevant ads alongside search results. We’ve also found that people are finding ads either above or below the AI-powered overview helpful as they provide useful options for people to take action and connect with businesses…

…Overall, one of the things I think people underestimate about Search is the breadth of Search, the amount of queries we see constantly on a new day which we haven’t seen before. And so the trick here is to deliver that high-quality experience across the breadth of what we see in Search. And over time, we think Assistant will be very complementary. And we will again use generative AI there, particularly with our most advanced models in Bard, which allows us to act more like an agent over time, if I were to think about the future, and maybe go beyond answers and follow through for users even more. So that is, directionally, what the opportunity set is. Obviously, a lot of execution ahead. But it’s an area where I think we have a deep sense of what to do.

Alphabet’s latest Pixel 8 phones have an AI-powered feature that lets users search what they see on their phones without switching apps; the Pixel 8s uses Gemini Nano for AI features

Circle to Search lets you search what you see on Android phones with a simple gesture without switching apps. It’s available starting this week on Pixel 8 and Pixel 8 Pro and the new Samsung Galaxy S24 Series…

…Pixel 8, our AI-first phone, was awarded Phone of the Year by numerous outlets. It now uses Gemini Nano with features like Magic Compose for Google Messages and more to come.

Alphabet’s management is seeing that advertisers have a lot of interest in Alphabet’s AI advertising solutions; the solutions include (1) the Automatically Created Assets (ACA) feature for businesses to build better ads and (2) conversational experiences – currently in beta testing – that have helped SMBs be 42% more likely to publish ads with good ad strength

We are also seeing a lot of interest in our AI-powered solutions for advertisers. That includes our new conversational experience that uses Gemini to accelerate the creation of Search campaigns…

…As we look ahead, we’re also starting to put generative AI in the hands of more and more businesses to help them build better campaigns and even better performing ads. Automatically created assets help advertisers show more relevant search ads by creating tailored headlines and descriptions based on each ad’s context. Adoption was up with strong feedback in Q4. In addition to now being available in 8 languages, more advanced GenAI-powered capabilities are coming to ACA…

…And then last week’s big news was that Gemini will power a new conversational experience in Google Ads. This is open in beta to U.S. and U.K. advertisers. Early tests show advertisers are building higher-quality Search campaigns with less effort, especially SMBs, who are 42% more likely to publish a campaign with good or excellent ad strength.

Alphabet’s Google Cloud offers AI Hypercomputer (a supercomputing architecture for AI), which is used by high-profile AI startups such as Anthropic and Mistral AI

Google Cloud offers our AI Hypercomputer, a groundbreaking supercomputing architecture that combines our powerful TPUs and GPUs, AI software and multi-slice and multi-host technology to provide performance and cost advantages for training and serving models. Customers like Anthropic, Character.AI, Essential AI and Mistral AI are building and serving models on it.
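
Alphabet doesn’t expose AI Hypercomputer’s internals here, but the multi-slice, multi-host training it describes rests on the same data-parallel pattern shown in this generic JAX sketch: each device computes gradients on its own shard of the batch, and the gradients are averaged across devices. The model and sizes are toy placeholders, not Google’s actual stack.

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    return jnp.mean((x @ w - y) ** 2)

def train_step(w, x, y):
    grads = jax.grad(loss)(w, x, y)
    # Average gradients across all devices: the all-reduce operation that
    # fast interconnects in large training clusters are built to accelerate.
    grads = jax.lax.pmean(grads, axis_name="devices")
    return w - 0.01 * grads

p_train_step = jax.pmap(train_step, axis_name="devices")

n = jax.local_device_count()       # 1 on a laptop; many on a TPU/GPU pod
key = jax.random.PRNGKey(0)
w_repl = jnp.zeros((n, 4))                 # weights replicated per device
x = jax.random.normal(key, (n, 8, 4))      # batch sharded across devices
y = jax.random.normal(key, (n, 8))
w_repl = p_train_step(w_repl, x, y)        # one synchronized training step
```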

Vertex AI, which is within Google Cloud, enables users to customise and deploy more than 130 generative AI models; Vertex AI’s API (application programming interface) requests jumped nearly six times from the first half of 2023 to the second half; Samsung is using Vertex AI to provide GenAI models in its Galaxy S24 smartphones, while companies such as Shutterstock and Victoria’s Secret are also using Vertex AI

For developers building GenAI applications, we offer Vertex AI, a comprehensive enterprise AI platform. It helps customers like Deutsche Telekom and Moody’s discover, customize, augment and deploy over 130 GenAI models, including PaLM, MedPaLM, Sec-PaLM and Gemini as well as popular open source and partner models. Vertex AI has seen strong adoption with the API request increasing nearly 6x from H1 to H2 last year. Using Vertex AI, Samsung recently announced its Galaxy S24 Series smartphone with Gemini and Imagen 2, our advanced text-to-image model. Shutterstock has added Imagen 2 to their AI image generator, enabling users to turn simple text prompts into unique visuals. And Victoria’s Secret & Co. will look to personalize and improve the customer experience with Gemini, Vertex AI, Search and Conversations.
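
For a sense of what a Vertex AI “API request” looks like from a developer’s side, here is a minimal sketch using the Vertex AI Python SDK. The project ID and prompt are placeholders, and the exact module path and model IDs have changed across SDK versions, so treat this as illustrative rather than authoritative.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Hypothetical project and region; substitute your own GCP settings.
vertexai.init(project="my-gcp-project", location="us-central1")

model = GenerativeModel("gemini-pro")  # model ID at the time; check current docs
response = model.generate_content("Summarize our Q4 support tickets in three bullets.")
print(response.text)
```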

Duet AI, Alphabet’s AI agents for its Google Workspace and Google Cloud Platform (GCP) services, now has more than 1 million testers, and will incorporate Gemini soon; Duet AI for Developers is the only generative AI offering that supports the entire development and operations lifecycle for software development; large companies such as Wayfair, GE Appliances, and Commerzbank are already using Duet AI for Developers

Customers are increasingly choosing Duet AI, our packaged AI agents for Google Workspace and Google Cloud Platform, to boost productivity and improve their operations. Since its launch, thousands of companies and more than 1 million trusted testers have used Duet AI. It will incorporate Gemini soon. In Workspace, Duet AI is helping employees benefit from improved productivity and creativity at thousands of paying customers around the world, including Singapore Post, Uber and Woolworths. In Google Cloud Platform, Duet AI assists software developers and cybersecurity analysts. Duet AI for Developers is the only GenAI offering to support the complete development and operations life cycle, fine-tuned with the customer’s own corpus and policies. It’s helping Wayfair, GE Appliances and Commerzbank write better software, faster, with AI code completion, code generation and chat support. With Duet AI in Security Operations, we are helping cybersecurity teams at Fiserv, Spotify and Pfizer.

Alphabet’s management believes that the company has state-of-the-art compute infrastructure and that it will be a major differentiator in the company’s AI-related work; management wants Alphabet to continue investing in its infrastructure

Search, YouTube and Cloud are supported by our state-of-the-art compute infrastructure. This infrastructure is also key to realizing our big AI ambitions. It’s a major differentiator for us. We continue to invest responsibly in our data centers and compute to support this new wave of growth in AI-powered services for us and for our customers.

Alphabet’s AI-powered ad solutions are helping retailers with their omnichannel growth; a large big-box retailer saw a 60%+ increase in omnichannel ROAS (return on ad spend) and a 22%+ increase in store traffic

Our proven AI-powered ad solutions were also a win for retailers looking to accelerate omni growth and capture holiday demand. Quick examples include a large U.S. big-box retailer who drove a 60%-plus increase in omni ROAS and a 22%-plus increase in store traffic using Performance Max during Cyber Five; and a well-known global fashion brand, who drove a 15%-plus higher omnichannel conversion rate versus regular shopping traffic by showcasing its store pickup offering across top markets through pickup later on shopping ads.

Alphabet’s management is using AI to make it easier for content creators to create content for Youtube (for example, creators can easily create backgrounds or translate their videos); management also believes the AI tools built for creators can also be ported over to the advertising business to help advertisers

First, creation, which increasingly takes place on mobile devices. We’ve invested in a full suite of tools, including our new YouTube Create app for Shorts, to help people make everything from 15-second Shorts to 15-minute videos to 15-hour live streams with a production studio in the palm of their hands. GenAI is supercharging these capabilities. Anyone with a phone can swap in a new backdrop, remove background extras, translate their video into dozens of languages, all without a big studio budget. We’re excited about our first products in this area from Dream Screen for AI-generated backgrounds to Aloud for AI-powered dubbing…

…You are obviously aware of the Made on YouTube announcement, where we introduced a whole lot of new complementary creativity features on YouTube, including Dream Screen, for example, and a lot of other really interesting tools and thoughts. You can obviously imagine that we can take this more actively to the advertising world. As you know, AI already powers a lot of our video ad solutions and measurement capabilities. It’s part of video reach campaigns and multi-format ads. There is also generative creator music that actually makes it easier for creators to design the perfect soundtrack. And as I said earlier, AI will unlock a new world of creativity. And if you just look at where models are heading, where multimodal models are heading, where the generation capabilities of those models are heading, you can absolutely see how this will positively impact and simplify the flow for creators, similar to what you see already emerging in some of our core products, like ACA on the Search side.

Alphabet’s management expects the company’s capital expenditure in 2024 to be notably higher than in 2023 (capex in the fourth quarter of 2023 alone was US$11 billion), driven by investments in AI infrastructure

With respect to CapEx, our reported CapEx in the fourth quarter was $11 billion, driven overwhelmingly by investment in our technical infrastructure with the largest component for servers followed by data centers. The step-up in CapEx in Q4 reflects our outlook for the extraordinary applications of AI to deliver for users, advertisers, developers, cloud enterprise customers and governments globally and the long-term growth opportunities that offers. In 2024, we expect investment in CapEx will be notably larger than in 2023.

Alphabet’s management is restructuring the company’s workforce not because AI is taking away jobs, but because management believes that AI solutions can deliver significant ROI (return on investments) and it’s important for Alphabet to have an organisational structure that can better build these solutions

But I also want to be clear: when we restructure, there’s always an opportunity to be more efficient and smarter in how we service and grow our customers. We’re not restructuring because AI is taking away roles, that’s important here. But we see significant opportunities here with our AI-powered solutions to actually deliver incredible ROI at scale, and that’s why we’re doing some of those adjustments.

Alphabet’s management thinks that Search is not just about generative AI

Obviously, generative AI is a new tool in the arsenal. But there’s a lot more that goes into Search: the breadth, the depth, the diversity across verticals, the ability to follow through, getting access to rich, diverse sources of content on the web and putting it all together in a compelling way.

Alphabet’s management believes that AI features can help level the playing field for SMBs in the creation of effective advertising (when competing with large companies) and they will continue to invest in that area

Our focus has always been here on investing in solutions that really help level the playing field, and you mentioned several of those, so that SMBs can actually compete with bigger brands and more sophisticated advertisers. And the feedback we’re always getting is that they need easy solutions that can drive value quickly, and several of the AI-powered solutions that you’re mentioning are actually making the workflow, the whole on-ramp, and the bidding, targeting, creative and so on, so much easier for SMBs. So we’re very satisfied with what we’re seeing here. We will continue to invest.

Amazon (NASDAQ: AMZN)

Amazon’s cloud computing service, AWS, saw an acceleration in revenue growth in 2023 Q4 and management believes this was driven partly by AI

If you look back at the revenue growth, it accelerated to 13.2% in Q4, as we just mentioned. That was an acceleration. We expect accelerating trends to continue into 2024. We’re excited about the resumption, I guess, of migrations that companies may have put on hold during 2023 in some cases, and interest in our generative AI products, like Bedrock and Q, as Andy was describing.

Amazon’s management reminded the audience that their framework for thinking about generative AI consists of three layers – the first is the compute layer, the second is LLMs as a service, the third is the applications that run on top of LLMs – and Amazon is investing heavily in all three

You may remember that we’ve explained our vision of three distinct layers in the gen AI stack, each of which is gigantic and in each of which we’re deeply investing.

At the bottom layer where customers who are building their own models run training and inference on compute where the chip is the key component in that compute…

…In the middle layer where companies seek to leverage an existing large language model, customize it with their own data and leverage AWS’ security and other features, all as a managed service…

…At the top layer of the stack is the application layer.

Amazon’s management is seeing revenues accelerate rapidly for AWS across all three layers of the generative AI stack and AWS is receiving significant interest from customers wanting to run AI workloads

Still relatively early days, but the revenues are accelerating rapidly across all three layers, and our approach to democratizing AI is resonating well with our customers. We have seen significant interest from our customers wanting to run generative AI applications and build large language models and foundation models, all with the privacy, reliability and security they have grown accustomed to with AWS

Amazon’s management is seeing that enterprises are still figuring out which layer of the generative AI stack they want to operate in; management thinks that most enterprises will operating in at least two layers, with the technically capable ones operating in all three

When we talk to customers, particularly at enterprises as they’re thinking about generative AI, many are still thinking through at which layers of those three layers of the stack I laid out that they want to operate in. And we predict that most companies will operate in at least two of them. But I also think, even though it may not be the case early on, I think many of the technically capable companies will operate at all three. They will build their own models, they will leverage existing models from us, and then they’re going to build the apps. 

At the first layer of the generative AI stack, AWS is offering the most expansive collection of compute instances with NVIDIA chips; AWS has built its own Trainium chips for training and Inferentia chips for inference; a new version of Trainium – Trainium 2 – was recently announced and it is 4x faster, and has 3x more memory, than the first generation of Trainium; large companies and prominent AI startups are using AWS’s AI chips

At the bottom layer, where customers who are building their own models run training and inference on compute where the chip is the key component, we offer the most expansive collection of compute instances with NVIDIA chips. We also have customers who like us to push the price performance envelope on AI chips, just as we have with Graviton for generalized CPU chips, which are 40% more price-performant than other x86 alternatives. And as a result, we’ve built custom AI training chips named Trainium and inference chips named Inferentia. At re:Invent, we announced Trainium2, which offers 4x faster training performance and 3x more memory capacity versus the first generation of Trainium, enabling advantageous price performance versus alternatives. We already have several customers using our AI chips, including Anthropic, Airbnb, Hugging Face, Qualtrics, Ricoh and Snap.

At the middle layer of the generative AI stack, AWS has launched Bedrock, which offers LLMs-as-a-service; Bedrock is off to a very strong start with thousands of customers already using it just a few months after launch; Bedrock has added new models, including those from prominent AI startups, Meta’s Llama2, and Amazon’s own Titan family; customers are excited over Bedrock because building production-quality generative AI applications requires multiple iterations of models, and the use of many different models, and this is where Bedrock excels

In the middle layer where companies seek to leverage an existing large language model, customize it with their own data and leverage AWS’ security and other features, all as a managed service, we’ve launched Bedrock, which is off to a very strong start with many thousands of customers using the service after just a few months… We also added new models from Anthropic, Cohere, Meta with Llama 2, Stability AI and our own Amazon Titan family of LLMs. What customers have learned at this early stage of gen AI is that there’s meaningful iteration required in building a production gen AI application with the requisite enterprise quality at the cost and latency needed. Customers don’t want only one model. They want different models for different types of applications and different-sized models for different applications. Customers want a service that makes this experimenting and iterating simple. And this is what Bedrock does, which is why so many customers are excited about it.
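
The “different models for different applications” point is easiest to see in code. Below is a minimal sketch using boto3’s bedrock-runtime client; the model IDs and request payloads are provider-specific and evolve over time, so treat them as illustrative rather than a definitive integration.

```python
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, body: dict) -> dict:
    """Invoke any Bedrock-hosted model through the same API call."""
    response = client.invoke_model(modelId=model_id, body=json.dumps(body))
    return json.loads(response["body"].read())

# Swapping models is a one-line change of model ID, which is the
# experiment-and-iterate loop management describes customers wanting.
claude = ask("anthropic.claude-v2",
             {"prompt": "\n\nHuman: Summarize our returns policy.\n\nAssistant:",
              "max_tokens_to_sample": 200})
titan = ask("amazon.titan-text-express-v1",
            {"inputText": "Summarize our returns policy."})
```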

At the top layer of the generative AI stack, AWS recently launched Amazon Q, a coding companion; management believes that a coding companion is one of the very best early generative AI applications; Amazon Q is linked with more than 40 popular data-connectors so that customers can easily query their data repositories; Amazon Q has generated strong interest from developers

At the top layer of the stack is the application layer. One of the very best early gen AI applications is a coding companion. At re:Invent, we launched Amazon Q, which is an expert on AWS, writes code, debugs code, tests code, does translations like moving from an old version of Java to a new one and can also query customers’ various data repositories like intranets, wikis or from over 40 different popular connectors to data in Salesforce, Amazon S3, ServiceNow, Slack, Atlassian or Zendesk, among others. And it answers questions, summarizes data, carries on a coherent conversation and takes action. It was designed with security and privacy in mind from the start, making it easier for organizations to use generative AI safely. Q is the most capable work assistant and another service that customers are very excited about…

…When enterprises are looking at how they might best make their developers more productive, they’re looking at what’s the array of capabilities in these different coding companion options they have. And so we’re spending a lot of time. Our enterprises are quite excited about it. It created a meaningful stir at re:Invent. And what you see typically is that these companies experiment with different options they have and they make decisions for their employee base, and we’re seeing very good momentum there.

Amazon’s management is seeing that security over data is very important to customers when they are using AI and this is an important differentiator for AWS because its AI services inherit the same security features as AWS – and AWS’s capabilities and track record in security are good

By the way, don’t underestimate the point about Bedrock and Q inheriting the same security and access control as customers get with AWS. Security is a big deal, an important differentiator between cloud providers. The data in these models is some of the company’s most sensitive and critical assets. With AWS’ advantaged security capabilities and track record relative to other providers, we continue to see momentum around customers wanting to do their long-term gen AI work with AWS.

Amazon has launched some generative AI applications across its businesses and is building more; one of the applications launched is Rufus, a shopping assistant, which allows consumers to receive thoughtful responses to detailed shopping questions; other generative AI applications being built and launched by Amazon include a customer-review-summary app, an app for customers to predict how they will fit in apparel, an app for inventory forecasts for each fulfilment centre, and an app to generate copy for ads based on a picture, or generate pictures based on copy; Rufus is seamlessly integrated into Amazon and management thinks Rufus could meaningfully change what discovery looks like for shoppers using Amazon

We’re building dozens of gen AI apps across Amazon’s businesses, several of which have launched and others of which are in development. This morning, we launched Rufus, an expert shopping assistant trained on our product and customer data that represents a significant customer experience improvement for discovery. Rufus lets customers ask shopping journey questions, like what is the best golf ball to use for better spin control or which are the best cold weather rain jackets, and get thoughtful explanations for what matters and recommendations on products. You can carry on a conversation with Rufus on other related or unrelated questions and retains context coherently. You can sift through our rich product pages by asking Rufus questions on any product features and it will return answers quickly…

…So if you just look at some of our consumer businesses, on the retail side, we built a generative AI application that allowed customers to look at a summary of customer reviews, so that they didn’t have to read hundreds and sometimes thousands of reviews to get a sense for what people like or dislike about a product. We launched a generative AI application that allows customers to quickly be able to predict what kind of fit they’d have for different apparel items. We built a generative AI application in our fulfillment centers that forecasts how much inventory we need in each particular fulfillment center…Our advertising business is building capabilities where people can submit a picture and an ad copy is written and the other way around. 

…All those questions you can plug in and get really good answers. And then it’s seamlessly integrated in the Amazon experience that customers are used to and love to be able to take action. So I think that that’s just the next iteration. I think it’s going to meaningfully change what discovery looks like for our shopping experience and for our customers.

Amazon’s management believes generative AI will drive tens of billions in revenue for the company over the next few years

Gen AI is and will continue to be an area of pervasive focus and investment across Amazon primarily because there are few initiatives, if any, that give us the chance to reinvent so many of our customer experiences and processes, and we believe it will ultimately drive tens of billions of dollars of revenue for Amazon over the next several years.

Amazon’s management expects the company’s full-year capital expenditure for 2024 to be higher than in 2023, driven by increased investments in infrastructure for AWS and AI

We define our capital investments as a combination of CapEx plus equipment finance leases. In 2023, full year CapEx was $48.4 billion, which was down $10.2 billion year-over-year, primarily driven by lower spend on fulfillment and transportation. As we look forward to 2024, we anticipate CapEx to increase year-over-year primarily driven by increased infrastructure CapEx to support growth of our AWS business, including additional investments in generative AI and large language models.

AWS’s generative AI revenue is pretty big in absolute numbers, but small in the context of AWS already being a $100 billion annual-revenue-run-rate business

If you look at the gen AI revenue we have, in absolute numbers, it’s a pretty big number. But in the scheme of a $100 billion annual revenue run rate business, it’s still relatively small, much smaller than what it will be in the future, where we really believe we’re going to drive tens of billions of dollars of revenue over the next several years. 

Apple (NASDAQ: AAPL)

Many of the features in Apple’s latest product, the Vision Pro virtual reality headset, are powered by AI

There’s an incredible amount of technology that’s packed into the product. There’s 5,000 patents in the product. And it’s, of course, built on many innovations that Apple has spent multiple years on, from silicon to displays and significant AI and machine learning, all the hand tracking, the room mapping, all of this stuff is driven by AI.

Apple has been spending a lot of time and effort on AI and management will share details later in 2024

As we look ahead, we will continue to invest in these and other technologies that will shape the future. That includes artificial intelligence where we continue to spend a tremendous amount of time and effort, and we’re excited to share the details of our ongoing work in that space later this year…

…In terms of generative AI, which I would guess is your focus, we have a lot of work going on internally as I’ve alluded to before. Our MO, if you will, has always been to do work and then talk about work and not to get out in front of ourselves. And so we’re going to hold that to this as well. But we’ve got some things that we’re incredibly excited about that we’ll be talking about later this year.

Apple’s management thinks there is a huge opportunity for Apple with generative AI but will only share more details in the future

Let me just say that I think there is a huge opportunity for Apple with gen AI and AI and without getting into more details and getting out in front of myself.

Arista Networks (NYSE: ANET)

Arista Networks’ management believes that AI at scale needs Ethernet at scale because AI workloads cannot tolerate delays; management thinks that 400 and 800-gigabit Ethernet will become important or AI back-end GPU clusters

AI workloads are placing greater demands on Ethernet as they are both data- and compute-intensive across thousands of processors today. Basically, AI at scale needs Ethernet at scale. AI workloads cannot tolerate the delays in the network because the job can only be completed after all flows are successfully delivered to the GPU clusters. All it takes is one culprit or worst-case link to throttle an entire AI workload…

…We expect both 400 and 800-gigabit Ethernet will emerge as important pilots for AI back-end GPU clusters. 

Arista Networks’ management is pushing the company and the Ultra Ethernet Consortium to improve Ethernet technology for AI workloads in three key ways; management believes that Ethernet is superior to Infiniband for AI-related data networking because Ethernet provides flexible ordering of data transfer whereas Infiniband is rigid

Three improvements are being pioneered by Arista and the founding members of the Ultra Ethernet Consortium to improve job completion time. Number one, packet spraying. Our AI network topology needs packet spraying to allow every flow to simultaneously access all paths to the destination. Arista is developing multiple forms of load balancing dynamically with our customers. Two is flexible ordering. Key to an AI job completion is the rapid and reliable bulk transfer with flexible ordering using Ethernet links to optimally balance AI-intensive operations, unlike the rigid ordering of InfiniBand. Arista is working closely with its leading vendors to achieve this. Finally, network congestion. In AI networks, there’s a common incast congestion problem whereby multiple uncoordinated senders can send traffic to the receiver simultaneously. Arista’s platforms are purpose-built and designed to avoid these kinds of hotspots, evenly spreading the load across multiple paths with a virtual output queuing (VOQ) lossless fabric.

Arista Networks’ management thinks the company can achieve AI revenue of at least $750 million in 2025

We are cautiously optimistic about achieving our AI revenue goal of at least $750 million in AI networking in 2025…

…So our AI performance continues to track well for the $750 million revenue goal that we set last November at Analyst Day. 

Arista Networks’ management sees the company becoming the gold-standard for AI data-networking

We have more than doubled our enterprise revenue in the last 3 years and we are becoming the gold standard for client-to-cloud-to-AI networking with one EOS and one CloudVision foundation. 

In the last 12 months, Arista Networks has participated in a large number of AI project bids, and of the last five projects that pitted Ethernet against InfiniBand, Arista Networks has won four; over the last 12 months, a lot has changed in terms of how InfiniBand was initially bundled into AI data centres; management believes that Ethernet will become the default standard for AI networking going forward

To give you some color on the last 3 months, I would say it is difficult to project anything in 3 months. But if I look at the last year, which maybe the last 12 months is a better indication, we have participated in a large number of AI bids and when I say large, I should say they are large AI bids, but from a small number of customers, to be more clear. And of the last 5 AI networking clusters where we have participated on Ethernet versus InfiniBand, Arista has won 4 of them for Ethernet; one of them still stays on InfiniBand. So these are very high-profile customers. We are pleased with this progress…

…The first real consultative approach from Arista is to provide our expertise on how to build a robust back-end AI network. And so the whole discussion of Ethernet versus InfiniBand becomes really important because as you may recall, a year ago, I told you we were outside looking in, everybody had an InfiniBand HPC cluster that was kind of getting bundled into AI. But a lot has changed in a year. And the popular product we are seeing right now in the back-end cluster for AI is the Arista 7800 AI spine, which in a single chassis with north of 500 terabits of capacity can give you a substantial number of ports, 400 or 800. So you can connect up to 1,000 GPUs just doing that. And that kind of data parallel scale-out can improve the training time dimensions of large LLMs and massive integration of training data. And of course, as we shared with you at the Analyst Day, we can expand that to a 2-tier AI leaf and spine with a 16-way ECMP to support close to 10,000 GPUs nonblocking. This is a lossless architecture for Ethernet. And then the overlay we will have on that with the Ultra Ethernet Consortium in terms of congestion controls, packet spraying and working with a suite of [ UC ] mix is what I think will make Ethernet the default standard for AI networking going forward. 

Arista Networks’ management believes that owners and operators of AI data centres would not want to work with white box data switches (non-branded and commoditised data switches) because data switches are mission critical in AI data centres, so users would prefer reliable and higher-quality data switches

I think white box is here to stay for a very long time if somebody just wants a throwaway commodity product, but how many people want throwaway commodity in the data center? They’re still mission-critical, and they’re even more mission-critical for AI. If I’m going to spend multimillion dollars on a GPU cluster, and then the last thing I’m going to do is put a toy network in, right? So to put this sort of in perspective, that we will continue to coexist with a white box. There will be use cases where Arista’s blue box or a stand-alone white box can run either SONiC or FBOSS but many times, the EOS software stack is really, really something they depend on for availability, analytics, automation, and there’s — you can get your network for 0 cost, but the cost of downtime is millions and millions of dollars.

Arista Networks is connecting more and more GPUs and management believes that the picture of how a standard AI data centre Ethernet switch will look is starting to form; AI is still a small part of Arista Networks’ business but one that should grow over time

On the AI side, we continue to track well. I think we’re moving from what I call trials, which is connecting hundreds of GPUs to pilots, which is connecting thousands of GPUs this year, and then we expect larger production clusters. I think one of the questions that we will be asking ourselves and our customers is how these production clusters evolve. Is it going to be 400, 800 or a combination thereof? The role of Ultra Ethernet Consortium and standards and the ecosystem all coming together, very similar to how we had these discussions in 400 gig will also play a large part. But we’re feeling pretty good about the activity. And I think moving from trials to pilots this year will give us considerable confidence on next year’s number…

…AI is going to come. It is yet to come — certainly in 2023, as I’ve said to you many, many times, it was a very small part of our number, but it will gradually increase.

Arista Networks’ management is in close contact with the leading GPU vendors when designing networking solutions for AI data centres

Specific to our partnership, you can be assured that we’ll be working with the leading GPU vendors. And as you know, NVIDIA has 90% or 95% of the market. So Jensen and I are going to partner closely. It is vital to get a complete AI network design going. We will also be working with our partners in AMD and Intel so we will be the Switzerland of XPUs, whatever the GPU might be, and we look to supply the best network ever.

Arista Networks’ management believes that the company is very well-positioned for the initial growth spurts in AI networking

Today’s models are moving very rapidly, relying on a high bandwidth, predictable latency, the focus on application performance requires you to be sole sourced initially. And over time, I’m sure it’ll move to multiple sources, but I think Arista is very well positioned for the first innings of AI networking, just like we were for the cloud networking decade.

ASML (NASDAQ: ASML)

ASML’s management believes that 2025 will be a strong year for the company because of the long-term trends in its favour (this includes AI and digitalisation, customer-inventory-levels becoming better, and the scheduled opening of many semiconductor fabrication plants)

So essentially unchanged I would say in comparison to what we said last quarter. So if we start looking at 2025. As I mentioned before, we are looking at a year of significant growth and that is for a couple of reasons. First off, we think the secular trends in our industry are still very much intact. If you look at the developments around AI, if you look at the developments around electrification, around energy transition etcetera, they will need many, many semiconductors. So we believe the secular trends in the industry are still very, very strong. Secondly I think clearly by 2025 we should see our customers go through the up cycle. I mean the upward trend in the cycle. So that should be a positive. Thirdly, as we also mentioned last time it’s clear that many fab openings are scheduled that will require the intake of quite some tools in the 2025 time frame.

ASML’s management is seeing AI-related demand drive a positive inflection in the company’s order intake

And I think AI is now particularly something which could be on top of that because that’s clearly a technology transition. But we’ve already seen a very positive effect of that in our Q4 order intake…

…After a few soft quarters, the order intake for the quarter was very, very strong. Actually a record order intake at €9.2 billion. If you look at the composition of that, it was about 50/50 for Memory versus Logic. Around €5.6 billion out of the €9.2 was related to EUV, both Low NA and High NA.

ASML’s management is confident that AI will help to drive demand for the company’s EUV (extreme ultraviolet) lithography systems from the Memory-chips market in the near future

 In ’23, our Memory shipments were lower than the 30% that you mentioned. But if you look at ’25, and we also take into account what I just said about AI and the need for EUV in the DDR5 and in the HBM era, then the 30% is a very safe path and could be on the conservative side.

ASML’s management thinks that the performance of memory chips is a bottleneck for AI-related workloads, and this is where EUV lithography is needed; management was also positively surprised at how important EUV was for the development of leading-edge memory chips for AI

I think there’s a bottleneck in the AI and making use of the full AI potential, DRAM is a bottleneck. The performance memory is a bottleneck. And there are solutions, but they need a heck of a lot more HBM and that’s EUV…

…And were we surprised? I must say, yes, to some extent, we were surprised in the meetings we’ve had with customers, and especially in Memory with our leading-edge Memory customers. We were surprised about the technology requirements for litho, EUV specifically, and how it impacts and how important it is for the rollout and the ramp of the memory solutions for AI. This is why we received more EUV orders than we anticipated because it was obvious in the detailed discussions and the reviews with our customers that EUV is critical in that sense. And that was a bit of a surprise, a positive surprise. 

[Question] Sorry, was that a function of EUV layer count or perhaps where they’re repurposing equipment? And so now they’re realizing they need more footprint for EUV.

[Answer] No, it is layer count and imaging performance. And that’s what led to the surprise, the positive surprise, which indeed led to more orders.

ASML’s management sees the early shoots of recovery observed in the Memory chip market as being driven by both higher utilisation across the board, and by the AI-specific technology transition

I think it’s — what we’re seeing is, of course, the information coming off our tools that we see the utilization rates going up. That’s one. Clearly, there’s also an element of technology transition. That’s also clear. I think there’s a bottleneck in the AI and making use of the full AI potential, DRAM is a bottleneck. The performance memory is a bottleneck. And there are solutions, but they need a heck of a lot more HBM and that’s EUV. So it’s a bit of a mix. I mean, yes, you’ve gone through, I think, the bottom of this memory cycle with prices going up, utilizations increasing, and that combined with the technology transition driven by AI. That’s a bit what we see today. So it’s a combination of both, and I think that will continue.

ASML’s management is thinking if their planned capacity buildout for EUV lithography systems is too low, partly because of AI-driven demand for leading edge chips

We have said our capacity buildout will be 90 EUV Low-NA systems, 20 High-NA whereby internally, we are looking at that number as a kind of a base number where we’re investigating whether that number should be higher. The question is whether that 90 is going to be enough. Now we have to realize, we are selling wafer capacity, which is not only a function of the number of units, but also a function of the productivity of those tools. Now we have a pretty aggressive road map for the productivity in terms of wafers per hour. So it’s a complex question that you’re asking. But actually, we need to look at this especially against the math that we’re seeing for litho requirements in the area of AI, whether it’s HBM or whether it is Logic, whether the number of units and the road map on productivity, which gives wafers because the combination is wafer capacity, whether that is sufficient.

Datadog (NASDAQ: DDOG)

Datadog’s management is seeing growing engagement in AI with a 75% sequential jump in the use of next-gen AI integrations

In observability, we now have more than 700 integrations allowing our customers to benefit from the latest AWS, Azure and GCP capabilities as well as from the newly emerging AI stack. We continued to see increasing engagement there with the use of our next-gen AI integrations growing 75% sequentially in Q4.

Datadog’s management continues to add capabilities to Bits AI, the company’s natural language incident management copilot, and is improving the company’s LLM (large language model) observability capabilities

In the generative AI and LLM space, we continued to add capability to Bits AI, our natural language incident management copilot. And we are advancing LLM observability to help customers investigate where they can safely deploy and manage their models in production.

Currently, 3% of Datadog’s annualised recurring revenue (ARR) comes from next-gen AI native customers (was 2.5% in 2023 Q3); management believes the AI opportunity will be far larger in the future as all kinds of customers start incorporating AI in production; the AI native customers are companies that Datadog’s management knows are substantially all based on AI

Today, about 3% of our ARR comes from next-gen AI native customers, but we believe the opportunity is far larger in the future as customers of every industry and every size start doing AI functionality in production…

…It’s hard for us to wrap our arms exactly around what is GenAI, what is not among our customer base and their workload. So the way we chose to do it is we looked at a smaller number of companies that we know are substantially all based on AI so these are companies like the modal providers and things like that. So 3% of ARR, which is up from what we had disclosed last time.

Microsoft said that AI accounts for six percentage points of Azure’s growth; within Datadog’s own Azure business, management sees AI-native companies accounting for substantially more than six percentage points

I know one number that everyone has been thinking about is one cloud, in particular, Microsoft, disclosed that 6% of their growth was attributable to AI. And we definitely see the benefits of that on our end, too. If I look at our Azure business in particular, there is substantially more than 6% that is attributable to AI native as part of our Azure business. So we see completely this trend is very true for us as well. It’s harder to tell with the other cloud providers because they don’t break those numbers up.

Datadog’s management continues to believe that digital transformation, cloud migration, and AI adoption are long-term growth drivers of Datadog’s business, and that Datadog is ideally positioned for these

We continue to believe digital transformation and cloud migration are long-term secular growth drivers of our business and critical motion for every company to deliver value and competitive advantage. We see AI adoption as an additional driver of investment and accelerator of technical innovation and cloud migration. And more than ever, we feel ideally positioned to achieve our goals and help customers of every size in every industry to transform, innovate and drive value through technology adoption.

Datadog experienced a big slowdown from its digitally native customers in the recent past, but management thinks that these customers could also be the first ones to fully leverage AI and thus reaccelerate earlier

We suddenly saw a big slowdown from the digital natives over the past year. On the other hand, they might be the first ones to fully leverage AI and deploy it in production. So you might see some reacceleration earlier from some of them at least.

Datadog’s management sees the attach rates for observability going up for AI workloads versus traditional workloads

[Question] If you think about the very long term, would you think attach rates of observability will end up being higher or lower for these AI workloads versus traditional workloads?

[Answer] We see the attach rate going up. The reason for that is our framework for that is actually in terms of complexity. AI just adds more complexity. You create more things faster without understanding what they do. Meaning you need — you shift a lot of the value from building to running, managing, understanding, securing all of the other things that need to keep happening after that. So the shape of some of the products might change a little bit because the shape of the software that runs it changes a little bit, which is no different from what happened over the past 10, 15 years. But we think it’s going to drive more need for observability, more need for security products around that.

Datadog’s management is seeing AI-native companies using largely the same kind of Datadog products as everyone else, but the AI-native companies are building the models, so the tooling for understanding the models are not applicable for them

[Question] Are the product SKUs, these kind of GenAI companies are adopting, are they similar or are they different to the kind of other customer cohorts?

[Answer] Today, this is largely the same SKUs as everybody else. These are infrastructure, APM, logs, profiling, these kinds of things that they are — or really the monitoring — that these customers are using. It’s worth noting that they’re in a bit of a separate world because they’re largely the builders of the models. So all the tooling required to understand the models — that’s less applicable to them. That’s more applicable to their own customers, which is also the rest of our customer base, and that is where we see the bulk of the opportunity in the longer term, not in the handful of model providers that [ anybody ] is going to use.

Datadog has a much larger presence in inference AI workloads as compared to training AI workloads; Datadog’s management sees that the AI companies that are scaling the most on Azure are scaling on inference

There’s 2 parts to the AI workloads today. There’s training and there’s inference. The vast majority of the players are still training. There’s only a few that are scaling with inference. The ones that are scaling with inference are the ones that are driving our ARR because we are — we don’t — we’re not really present on the training side, but we’re very present on the inference side. And I think that also lines up with what you might see from some of the cloud providers, where a lot of the players or some of the players that are scaling the most are on Azure today on the inference side, whereas a lot of the other players still largely training on some of the other clouds.

Etsy (NASDAQ: ETSY)

Etsy’s management recently launched Gift Mode, a feature where a buyer can type in details of a person and occasion, and AI technology will match the buyer with a gift; Gift Mode has more than 200 recipient persons, and has good early traction with 6 million visits in the first 2 weeks

So what’s Gift Mode? It’s a whole new shopping experience where gifters simply enter a few quick details about the person they’re shopping for, and we use the power of artificial intelligence and machine learning to match them with unique gifts from Etsy sellers. Creating a separate experience helps us know immediately if you’re shopping for yourself or someone else, hugely beneficial information to help our search engines solve for your needs. Within Gift Mode, we’ve identified more than 200 recipient personas, everything from rock climber to the crossword genius to the sandwich specialist. I’ve already told my family that when shopping for me, go straight to the music lover, the adventurer or the pet parent… 

…Early indications are that Gift Mode is off to a good start, including positive sentiment from buyers and sellers in our social channels, very strong earned media coverage and nearly 6 million visits in the first 2 weeks. As you test and shop in Gift Mode, keep in mind that this is just the beginning.

Etsy’s management is using AI to understand the return on investment of the company’s marketing spend

We’ve got pretty sophisticated algorithms that work on is this bid — is this click worth this much right now and how much should we bid. And so to the extent that CPCs rise, we naturally pull back. Or to the extent that CPC is lower, we naturally lean in. The other thing, by the way, it’s not just CPCs, it’s also conversion rates. So in times when people are really budget constrained, we see them actually — we see conversion rate across the industry go down. We see people compare some shop a lot more. And so we are looking at all of that and not humans, but machines using AI are looking at a very sophisticated way at what’s happening with conversion rate right now, what’s happening with CPCs right now. And therefore, how much is each visit worth and how much should we be bidding. 

Fiverr (NYSE: FVRR)

Fiverr’s management is seeing strong demand for the AI services vertical, with AI-related keyword searches growing sevenfold in 2023 

Early in January last year, we were the first in the market to launch a dedicated AI services vertical, creating a hub for businesses to hire AI talent. Throughout the year, we continued to see tremendous demand for those services, with searches that contain AI-related keywords on our marketplace growing sevenfold in 2023 compared to 2022. 

Fiverr’s management has seen AI create a net-positive 4% impact to Fiverr’s business by driving a mix-shift for the company from simple services – such as translation and voice-over – to complex services; complex services now represent 1/3 of Fiverr’s market base are typically larger and longer-duration; complex categories are where a human touch is needed and adds value while simple categories are where technology can do a good job without humans; Fiverr’s management thinks that simple categories will be automated away by AI while complex categories will become more important

Overall, we estimate AI created a net positive impact of 4% to our business in 2023 as we see a category mix shift from simple services such as translation and voice over to more complex services such as mobile app development, e-commerce management or financial consulting. In 2023, complex services represented nearly 1/3 of our marketplace, a significant step-up from 2022. Moreover, these are typically larger projects of longer duration, with an average transaction size 30% higher than those of simple services…

…What we’ve identified is there is a difference between what we call simple categories or tasks and more complex ones. And in the complex group, it’s really those categories that require human intervention and human inputs in order to produce a satisfactory results for the customer. And in these categories, we’re seeing growth that goes well beyond the overall growth that we’re seeing. And really, the simple ones are such where technology can actually do a pretty much gen-tie work, which in those cases, they’re usually associated with lower prices and shorter-term engagements…

…So our assumption is that some of the simple tasks are going to continue to be automated, which, by the way, is nothing new. I mean, it happened before; even before AI, automation has been a part of our lives. And definitely, the more complex services is where I think the growth potential definitely lies. This is why we called out the fact that we’re going to double down on these categories and services.

Fiverr’s management believes that the opportunities created by AI will outweigh the jobs that are displaced

We believe that the opportunities created by emerging technologies far outweigh the jobs they replace. Human talent continues to be an essential part of unlocking the potential of new technologies. 

Fiverr’s management believes that AI will be a multiyear tailwind for the company

We are also seeing a shift into more sophisticated, highly skilled and longer-duration categories with bigger addressable markets. Data shows our marketplace is built to benefit from these technologies and labor market changes. Unlike single-vertical solutions with higher exposure to disruptive technologies and trend changes, Fiverr has developed a proprietary horizontal platform with hundreds of verticals, quickly leaning into ever-changing industry demand needs and trends. All in all, we believe AI will be a multiyear tailwind for us to drive growth and innovation. In 2023, we also made significant investments in AI that drove improvements in our overall platform. 

A strategic priority for Fiverr’s management in 2024 is to develop AI tools to enhance the overall customer experience of the company’s marketplace

Our recent winter product release in January culminated these efforts in the second half of 2023 and revamped almost every part of our platform with an AI-first approach, from search to personalization from supply quality to seller engagement…

…Our third strategic priority is to continue developing proprietary AI applications unique to our marketplace to enhance the overall customer experience. The winter product release we discussed just now gives you a flavor of that, but there is so much more to do.

Mastercard (NYSE: MA)

Mastercard’s management is leveraging the company’s work on generative AI to build new services and solutions as well as to increase internal productivity

We also continue to develop new services and solutions, many of which leverage the work we are doing with generative AI. Generative AI brings more opportunity to drive better experiences for our customers and makes it easier to extract insights from our data. It can also help us increase internal productivity. We are working on many Gen AI use cases today to do just that. For example, we recently announced Shopping Muse. Shopping Muse uses generative AI to offer a conversational shopping tool that recreates the in-store human experience online and can translate consumers’ colloquial language into tailored recommendations. Another example is Mastercard Small Business AI. The tool will draw on our existing small business resources, along with content from a newly formed global media coalition, to help business owners navigate a range of business challenges. The platform, which is scheduled for pilot launch later this year, will leverage AI to provide personalized real-time assistance delivered in a conversational tone.

MercadoLibre (NASDAQ: MELI)

MercadoLibre’s management launched a number of AI features – including a summary of customer reviews, a summary of product functions, push notifications about items left unpurchased in shopping carts, and capabilities for sellers to create coupons and answer buyer questions quickly – in 2023 for the ecommerce business

In 2023, we launched capabilities that enable sellers to create their own promotional coupons and answer buyer questions more quickly with the assistance of artificial intelligence…

…AI-based features are already an integral part of the MELI experience, with many innovations launched in 2023, including: 

  • A summary of customer reviews on the product pages that concentrates the main feedback from buyers of that product.
  • On beauty product pages a summary of product functions and characteristics is automatically created to facilitate buyers choices.
  • Push notifications about items left unpurchased in shopping carts are now highly personalized and remind users why they may have chosen to buy a particular product.
  • We have also added an AI feature that helps sellers to respond to questions by preparing answers that sellers can send immediately, or edit quickly. 

Meta Platforms (NASDAQ: META)

The major goal of Meta’s management is for the company to have (1) a world-class AI assistant for all users, (2) an AI for each creator that their community can engage with, (3) an AI agent for every business, and (4) state-of-the-art open-source models for developers

Now moving forward, a major goal, we’ll be building the most popular and most advanced AI products and services. And if we succeed, everyone who uses our services will have a world-class AI assistant to help get things done, every creator will have an AI that their community can engage with, every business will have an AI that their customers can interact with to buy goods and get support, and every developer will have a state-of-the-art open-source model to build with.

Meta’s management thinks consumers will want a new AI-powered computing device that can see and hear what we are seeing and hearing, and this new computing device will be smart glasses, and will require full general intelligence; Meta has been conducting research on general intelligence for more than a decade, but it will now also incorporate general intelligence into product work – management thinks having product-targets when developing general intelligence helps to focus the work

I also think that everyone will want a new category of computing devices that let you frictionlessly interact with AIs that can see what you see and hear what you hear, like smart glasses. And one thing that became clear to me in the last year is that this next generation of services requires building full general intelligence. Previously, I thought that because many of the tools were social-, commerce- or maybe media-oriented that it might be possible to deliver these products by solving only a subset of AI’s challenges. But now it’s clear that we’re going to need our models to be able to reason, plan, code, remember and many other cognitive abilities in order to provide the best versions of the services that we envision. We’ve been working on general intelligence research at FAIR for more than a decade. But now general intelligence will be the theme of our product work as well…

…We’ve worked on general intelligence in our lab, FAIR, for more than a decade, as I mentioned, and we produced a lot of valuable work. But having clear product targets for delivering general intelligence really focuses this work and helps us build the leading research program.

Meta’s management believes the company has world-class compute infrastructure; Meta will end 2024 with 600,000 H100 (NVIDIA’s state-of-the-art AI chip) equivalents of compute; Meta is coming up with new data centre and chip designs customised for its own needs

The first is world-class compute infrastructure. I recently shared that, by the end of this year, we’ll have about 350,000 H100s, and including other GPUs, that will be around 600,000 H100 equivalents of compute…

…In order to build the most advanced clusters, we’re also designing novel data centers and designing our own custom silicon specialized for our workloads.

Meta’s management thinks that future AI models will be even more compute-intensive to train and run inference; management does not know exactly how much the compute this will be, but recognises that the trend has been of AI models requiring 10x more compute for each new generation, so management expects Meta to require growing infrastructure investments in the years ahead for its AI work

Now going forward, we think that training and operating future models will be even more compute-intensive. We don’t have a clear expectation for exactly how much this will be yet, but the trend has been that state-of-the-art large language models have been trained on roughly 10x the amount of compute each year…

…While we are not providing guidance for years beyond 2024, we expect our ambitious long-term AI research and product development efforts will require growing infrastructure investments beyond this year.

Meta’s approach with AI is to open-source its foundation models while keeping product-implementations proprietary; Meta’s management thinks open-sourcing brings a few key benefits, in that open source software (1) is safer and more compute-efficient, (2) can become the industry standard, and (3) attracts talented people; management intends to continue open-sourcing Meta’s AI models 

Our long-standing strategy has been to build an open-source general infrastructure while keeping our specific product implementations proprietary. In the case of AI, the general infrastructure includes our Llama models, including Llama 3, which is training now, and it’s looking great so far, as well as industry standard tools like PyTorch that we’ve developed…

…The short version is that open sourcing improves our models. And because there’s still significant work to turn our models into products because there will be other open-source models available anyway, we find that there are mostly advantages to being the open-source leader, and it doesn’t remove differentiation for our products much anyway. And more specifically, there are several strategic benefits.

First, open-source software is typically safer and more secure as well as more compute-efficient to operate due to all the ongoing feedback, scrutiny and development from the community. Now this is a big deal because safety is one of the most important issues in AI. Efficiency improvements and lowering the compute costs also benefit everyone, including us. Second, open-source software often becomes an industry standard. And when companies standardize on building with our stack, that then becomes easier to integrate new innovations into our products. That’s subtle, but the ability to learn and improve quickly is a huge advantage. And being an industry standard enables that. Third, open source is hugely popular with developers and researchers. And we know that people want to work on open systems that will be widely adopted. So this helps us recruit the best people at Meta, which is a very big deal for leading in any new technology area…

…This is why our long-standing strategy has been to open source general infrastructure and why I expect it to continue to be the right approach for us going forward.

Meta is already training the next generation of its foundational Llama model, Llama 3, and progress is good; Meta is also working on research for the next generations of Llama models with an eye on developing full general intelligence; Meta’s management thinks that the company’s next few generations of foundational AI models could be in a totally different direction from other AI companies

In the case of AI, the general infrastructure includes our Llama models, including Llama 3, which is training now, and it’s looking great so far…

…While we’re working on today’s products and models, we’re also working on the research that we need to advance for Llama 5, 6 and 7 in the coming years and beyond to develop full general intelligence…

…A lot of last year and the work that we’re doing with Llama 3 is basically making sure that we can scale our efforts to really produce state-of-the-art models. But once we get past that, there’s a lot more kind of different research that I think we’re going to be doing that’s going to take our foundation models in potentially different directions than other players in the industry are going to go in because we’re focused on specific vision for what we’re building. So it’s really important as we think about what’s going to be in Llama 5 or 6 or 7 and what cognitive abilities we want in there and what modalities we want to build into future multimodal versions of the models.

Meta’s management sees unique feedback loops for the company’s AI work that involve both data and usage of its products; the feedback loops have been important in how Meta improved its AI systems for Reels and ads

When people think about data, they typically think about the corpus that you might use to train a model upfront. And on Facebook and Instagram, there are hundreds of billions of publicly shared images and tens of billions of public videos, which we estimate is greater than the common crawl data set. And people share large numbers of public text posts and comments across our services as well. But even more important in the upfront training corpus is the ability to establish the right feedback loops with hundreds of millions of people interacting with AI services across our products. And this feedback is a big part of how we’ve improved our AI systems so quickly with Reels and Ads, especially over the last couple of years when we had to re-architect it around new rules.

Meta’s management wants hiring-growth in AI-related roles for 2024

AI is a growing area of investment for us in 2024 as we hire to support our road map…

…Second, we anticipate growth in payroll expenses as we work down our current hiring underrun and add incremental talent to support priority areas in 2024, which we expect will further shift our workforce composition toward higher-cost technical roles.

Meta’s management fully rolled out Meta AI Assistant and other AI chat experiences in the US at the end of 2023 and has began testing generative AI features in the company’s Family of Apps; Meta’s focus in 2024 regarding generative AI is on launching Llama3, making Meta AI assistant useful, and improving AI Studio

With generative AI, we fully rolled out our Meta AI assistant and other AI chat experiences in the U.S. at the end of the year and began testing more than 20 GenAI features across our Family of Apps. Our big areas of focus in 2024 will be working towards the launch of Llama 3, expanding the usefulness of our Meta AI assistant and progressing on our AI Studio road map to make it easier for anyone to create an AI. 

Meta has been using AI to improve its marketing performance; Advantage+ is helping advertisers partially or fully automate the creation of ad campaigns; Meta has rolled out generative AI features to help advertisers with changing text and images in their ad campaigns – adoption of the features is strong and tests show promising performance gains, and Meta has a big focus in this area in 2024

We continue to leverage AI across our ad systems and product suite. We’re delivering continued performance gains from ranking improvements as we adopt larger and more advanced models, and this will remain an ongoing area of investment in 2024. We’re also building out our Advantage+ portfolio of solutions to help advertisers leverage AI to automate their advertising campaigns. Advertisers can choose to automate part of the campaign creation setup process, such as who to show their ad to with Advantage+ audience, or they can automate their campaign completely using Advantage+ shopping, which continues to see strong growth. We’re also now exploring ways to apply this end-to-end automation to new objectives. On the ads creative side, we completed the global rollout of 2 of our generative AI features in Q4, Text Variations and Image Expansion, and plan to broaden availability of our background generation feature later in Q1. Initial adoption of these features has been strong, and tests are showing promising early performance gains. This will remain a big area of focus for us in 2024…

…So we’re really scaling our Advantage+ suites across all of the different offerings there, which really helped to automate the ads creation process for different types of advertisers. And we’re getting very strong feedback on all of those different features, advantage+ Shopping, obviously, being the first, but Advantage+ Catalog, Advantage+ Creative, Advantage+ Audiences, et cetera. So we feel like these are all really important parts of what has continued to grow improvements in our Ads business and will continue to going forward.

Meta’s management’s guidance for capital expenditure for 2024 is increased slightly from prior guidance (for perspective 2023’s capex is $27.27 billion), driven by increased investments in servers and data centers for AI-related work

Turning now to the CapEx outlook. We anticipate our full year 2024 capital expenditures will be in the range of $30 billion to $37 billion, a $2 billion increase of the high end of our prior range. We expect growth will be driven by investments in servers, including both AI and non-AI hardware, and data centers as we ramp up construction on sites with our previously announced new data center architecture.

Meta’s management thinks AI will make all of the company’s products and services better, but is unsure how the details will play out

I do think that AI is going to make all of the products and services that we use and make better. So it’s hard to know exactly how that will play out. 

Meta’s management does not expect the company’s generative AI products to be a meaningful revenue-driver in the short term, but they expect the products to be huge drivers in the long term

We don’t expect our GenAI products to be a meaningful 2024 driver of revenue. But we certainly expect that they will have the potential to be meaningful contributors over time.

Microsoft (NASDAQ: MSFT)

Microsoft is now applying AI at scale, across its entire tech stack, and this is helping the company win customers

We have moved from talking about AI to applying AI at scale. By infusing AI across every layer of our tech stack, we are winning new customers and helping drive new benefits and productivity gains.

Microsoft’s management thinks that Azure offers (1) the best AI training and inference performance, (2) the widest range of AI chips, including those from AMD, NVIDIA, and Microsoft, and (3) the best selection of foundational models, including LLMs and SLMs (small language models); Azure AI now has 53,000 customers and more than 33% are new to Azure; Azure allows developers to deploy LLMs without managing underlying infrastructure

Azure offers the top performance for AI training and inference and the most diverse selection of AI accelerators, including the latest from AMD and NVIDIA as well as our own first-party silicon, Azure Maia. And with Azure AI, we provide access to the best selection of foundation and open source models, including both LLMs and SLMs all integrated deeply with infrastructure, data and tools on Azure. We now have 53,000 Azure AI customers. Over 1/3 are new to Azure over the past 12 months. Our new Models as a Service offering makes it easy for developers to use LLMs from our partners like Cohere, Meta and Mistral on Azure without having to manage underlying infrastructure.

Azure grew revenue by 30% in 2023 Q4, with six points of growth from AI services; most of the six points of growth from AI services was driven by Azure Open AI

Azure and other cloud services revenue grew 30% and 28% in constant currency, including 6 points of growth from AI services. Both AI and non-AI Azure services drove our outperformance…

…Yes, Azure OpenAI and then OpenAI’s own APIs on top of Azure would be the sort of the major drivers. But there’s a lot of the small batch training that goes on, whether it’s out of [indiscernible] or fine-tuning. And then a lot of people who are starting to use models as a service with all the other new models. But it’s predominantly Azure OpenAI today.

Microsoft’s management believes the company has built the world’s most popular SLMs; the SLMs have similar performance to larger models, but can run on laptops and mobile devices; both startups and established companies are exploring the use of Microsoft’s Phi SLM for applications

We have also built the world’s most popular SLMs, which offer performance comparable to larger models but are small enough to run on a laptop or mobile device. Anker, Ashley, AT&T, EY and Thomson Reuters, for example, are all already exploring how to use our SLM, Phi, for their applications. 

Microsoft has added OpenAI’s latest models to the Azure OpenAI service; Azure OpenAI is seeing increased usage from AI-first start ups, and more than 50% of Fortune 500 companies are using it

And we have great momentum with Azure OpenAI Service. This quarter, we added support for OpenAI’s latest models, including GPT-4 Turbo, GPT-4 with Vision, DALL-E 3 as well as fine-tuning. We are seeing increased usage from AI-first start-ups like Moveworks, Perplexity, SymphonyAI as well as some of the world’s largest companies. Over half of the Fortune 500 use Azure OpenAI today, including Ally Financial, Coca-Cola and Rockwell Automation. For example, at CES this month, Walmart shared how it’s using Azure OpenAI Service along with its own proprietary data and models to streamline how more than 50,000 associates work and transform how its millions of customers shop. 

Microsoft’s management is integrating AI across the company’s entire data stack; Cosmo DB, which has vector search capabilities, is used by companies as a database for AI apps; KPMG, with the help of Cosmos DB, has seen a 50% increase in productivity for its consultants; Azure AI Search provides hybrid search that goes beyond vector search and Open AI is using it for ChatGPT 

We are integrating the power of AI across the entire data stack. Our Microsoft Intelligent Data Platform brings together operational databases, analytics, governance and AI to help organizations simplify and consolidate their data estates. Cosmos DB is the go-to database to build AI-powered apps at any scale, powering workloads for companies in every industry from AXA and Kohl’s to Mitsubishi and TomTom. KPMG, for example, has used Cosmos DB, including its built-in native vector search capabilities, along with Azure OpenAI Service to power an AI assistant, which it credits with driving an up to 50% increase in productivity for its consultants… And for those organizations who want to go beyond in-database vector search, Azure AI Search offers the best hybrid search solution. OpenAI is using it for retrieval augmented generation as part of ChatGPT. 

There are now more than 1.3 million GitHub Copilot subscribers, up 30% sequentially; more than 50,000 organisations use GitHub Copilot Business and Accenture alone will roll out GitHub Copilot to 50,000 of its developers in 2024; Microsoft’s management thinks GitHub Copilot is a core product for anybody who is working in software development

GitHub revenue accelerated to over 40% year-over-year, driven by all our platform growth and adoption of GitHub Copilot, the world’s most widely deployed AI developer tool. We now have over 1.3 million paid GitHub Copilot subscribers, up 30% quarter-over-quarter. And more than 50,000 organizations use GitHub Copilot Business to supercharge the productivity of their developers from digital natives like Etsy and HelloFresh to leading enterprises like Autodesk, Dell Technologies and Goldman Sachs. Accenture alone will roll out GitHub Copilot to 50,000 of its developers this year…

…It is becoming standard issue for any developer. It’s like if you take away spellcheck from Word, I’ll be unemployable. And similarly, I think GitHub Copilot becomes core to anybody who is doing software development…

To increase GitHub Copilot’s ARPU (average revenue per user), and ARPUs for other Copilots for that matter, Microsoft’s management will lean on the improvement that the Copilots bring to a company’s operating leverage and ask for a greater share of value

Our ARPUs have been great but they’re pretty low. But frankly, even though we’ve had a lot of success, it’s not like we are a high-priced ARPU company. I think what you’re going to start finding is, whether it’s Sales Copilot or Service Copilot or GitHub Copilot or Security Copilot, they are going to fundamentally capture some of the value they drive in terms of the productivity of the OpEx, right? So it’s like 2 points, 3 points of OpEx leverage would go to some software spend. I think that’s a pretty straightforward value equation. And so that’s the first time. I mean, this is not something we’ve been able to make the case for before, whereas now I think we have that case.

Then even the horizontal Copilot is what Amy was talking about, which is at the Office 365 or Microsoft 365 level. Even there, you can make the same argument. Whatever ARPU we may have with E5, now you can say incrementally as a percentage of the OpEx, how much would you pay for a Copilot to give you more time savings, for example. And so yes, I think all up, I do see this as a new vector for us in what I’ll call the next phase of knowledge work and frontline work even and their productivity and how we participate.

And I think GitHub Copilot, I never thought of the tools business as fundamentally participating in the operating expenses of a company’s spend on, let’s say, development activity. And now you’re seeing that transition. It’s not just tools. It’s about productivity of your dev team.

Microsoft’s own research and external studies show that companies can see up to a 70% increase in productivity by using generative AI for specific tasks; early users of Copilot for Microsoft 365 became 29% faster in a number of tasks

Our own research as well as external studies show as much as 70% improvement in productivity using generative AI for specific work tasks. And overall, early Copilot for Microsoft 365 users were 29% faster in a series of tasks like searching, writing and summarizing.

Microsoft’s management believes that AI will become a first-class part of every personal computer (PC) in 2024

In 2024, AI will become a first-class part of every PC. Windows PCs with built-in neural processing units were front and center at CES, unlocking new AI experiences to make what you do on your PC easier and faster, from searching for answers and summarizing e-mails to optimizing performance and battery efficiency. Copilot in Windows is already available on more than 75 million Windows 10 and Windows 11 PCs. And our new Copilot key, the first significant change to the Windows keyboard in 30 years, provides one-click access.

Microsoft’s management thinks that AI is transforming Microsoft’s search and browser experience; users have created more than 5 billion images and conducted more than 5 billion chats on Microsoft’s services to-date, with both doubling sequentially; Bing and Edge both took share in 2023 Q4

And more broadly, AI is transforming our search and browser experience. We are encouraged by the momentum. Earlier this month, we achieved a new milestone with 5 billion images created and 5 billion chats conducted to date, both doubling quarter-over-quarter and both Bing and Edge took share this quarter.

Microsoft’s management expects the company’s capital expenditure to increase materially in the next quarter because of cloud and AI infrastructure investments; management’s commitment to increase infrastructure investments is guided by customer demand and what they see as a substantial market opportunity; management feels good about where Microsoft is in terms of adding infrastructure capacity to meet AI computing demand

We expect capital expenditures to increase materially on a sequential basis, driven by investments in our cloud and AI infrastructure and the slip of a delivery date from Q2 to Q3 from a third-party provider noted earlier. As a reminder, there can be normal quarterly spend variability in the timing of our cloud infrastructure build-out…

…Our commitment to scaling our cloud and AI investment is guided by customer demand and a substantial market opportunity. As we scale these investments, we remain focused on driving efficiencies across every layer of our tech stack and disciplined cost management across every team…

…I think we feel really good about where we have been in terms of adding capacity. You started to see the acceleration in our capital expense starting almost a year ago, and you’ve seen it scale through that process.

Microsoft’s management is seeing that most of the AI activity taking place on Azure is for inference

[Question] On AI, where are we in the journey from training driving most of the Azure AI usage to inferencing?

[Answer] What you’ve seen for most part is all inferencing. So none of the large model training stuff is in any of our either numbers at all. Small batch training, so somebody is doing fine-tuning or what have you, that will be there but that’s sort of a minor part. So most of what you see in the Azure number is broadly inferencing.

A new AI workload on Azure starts with selecting a frontier model, fine-tuning that model, and then running inference

The new workload in AI obviously, in our case, it starts with 1 of the frontier — I mean, starts with the frontier model, Azure OpenAI. But it’s not just about just 1 model, right? So you — first, you take that model, you do all that jazz, you may do some fine-tuning. You do retrieval, which means you’re sort of either getting some storage meter or you’re eating some compute meters. And so — and by the way, there’s still a large model to a small model and that would be a training perhaps, but that’s a small batch training that uses essentially inference infrastructure. So I think that’s what’s happening. 

Microsoft’s management believes that generative AI will change the entire tech stack, down to the core computer architecture; one such change is to separate data storage from compute, as in the case of one of Microsoft’s newer services, Fabric

[Question] Cloud computing changed the tech stack in ways that we could not imagine 10 years back. The nature of the database layer, the operating system, every layer just changed dramatically. How do you foresee generative AI changing the tech stack as we know it?

[Answer] I think it’s going to have a very, very foundational impact. In fact, you could say the core compute architecture itself changes, everything from power density to the data center design to what used to be the accelerator now is that sort of the main CPU, so to speak, or the main compute unit. And so I think — and the network, the memory architecture, all of it. So the core computer architecture changes, I think every workload changes. And so yes, so there’s a full — like take our data layer.

The most exciting thing for me in the last year has been to see how our data layer has evolved to be built for AI, right? If you think about Fabric, part of the genius of Fabric is to be able to say, let’s separate out storage from the compute layer. In compute, we’ll have traditional SQL, we’ll have Spark. And by the way, you can have an Azure AI drop on top of the same data lake, so to speak, or the lake house pattern. And then the business model, you can combine all of those different computes. So that’s the type of compute architecture. So it’s sort of a — so that’s just 1 example…

… I do believe being in the cloud has been very helpful to build AI. But now AI is just redefining what it means to have — what the cloud looks like, both at the infrastructure level and the app model.

Microsoft’s management is seeing a few big use cases emerging within Microsoft 365 Copilot: Summarisation of meetings and documents; “chatting” with documents and texts of past communications; and creation and completion of documents

In terms of what we’re seeing, it’s actually interesting if you look at the data we have: summarization, that’s number one. I’m doing summarizations of Teams meetings inside of Teams during the meeting, after the meeting; Word documents, summarization. I get something in e-mail, I’m summarizing. So summarization has become a big deal. Drafts, right? You’re drafting e-mails, drafting documents. So anytime you want to start something, the blank page thing goes away and you start by prompting and drafting.

Chat. To me, the most powerful feature is now you have the most important database in your company, which happens to be the database of your documents and communications, is now query-able by natural language in a powerful way, right? I can go and say, what are all the things Amy said I should be watching out for next quarter? And it will come out with great detail. And so chat, summarization, draft.

Also, by the way, actions, one of the most used things is, here’s a Word document. Go complete — I mean, create a PowerPoint for me. So those are the stuff that’s also beginning.
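
Those patterns (summarise, draft, chat) map onto simple prompt templates. Below is a toy sketch of that mapping; the llm() function is a hypothetical placeholder for whatever chat-completion endpoint a Copilot-style product actually calls, not a real Microsoft API.

```python
# The three Copilot usage patterns from the quote, expressed as prompt
# templates. llm() is a hypothetical placeholder, not a real API.
def llm(prompt):
    return f"<model response to {len(prompt)} chars of prompt>"

meeting_notes = "Amy: watch FX headwinds and capex timing next quarter."

# 1. Summarisation of a meeting or document.
summary = llm(f"Summarise the key decisions and risks:\n{meeting_notes}")
# 2. Drafting: the blank page goes away, you start from a prompt.
draft = llm("Draft a short e-mail thanking the team and listing next steps.")
# 3. Chat: querying your documents and communications in natural language.
answer = llm(f"Using these notes, what did Amy say to watch out for?\n{meeting_notes}")

print(summary, draft, answer, sep="\n")
```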

Microsoft’s management is seeing strong engagement growth with Microsoft 365 Copilot that gives them optimism

And the other thing I would add, we always talk about in enterprise software, you sell software, then you wait and then it gets deployed. And then after deployment, you want to see usage. And in particular, what we’ve seen and you would expect this in some ways with Copilot, even in the early stages, obviously, deployment happens very quickly. But really what we’re seeing is engagement growth. To Satya’s point on how you learn and your behavior changes, you see engagement grow with time. And so I think those are — just to put a pin on that because it’s an important dynamic when we think about the optimism you hear from us.

Nvidia (NASDAQ: NVDA)

Nvidia’s management believes that companies are starting to build the next generation of AI data centres; this next generation of AI data centres takes in data and transforms it into tokens, which are the output of AI models

At the same time, companies have started to build the next generation of modern Data Centers, what we refer to as AI factories, purpose-built to refine raw data and produce valuable intelligence in the era of generative AI…

…A whole new industry in the sense that for the very first time, a Data Center is not just about computing data and storing data and serving the employees of the company. We now have a new type of Data Center that is about AI generation, an AI generation factory, and you’ve heard me describe it as AI factories. But basically, it takes raw material, which is data. It transforms it with these AI supercomputers that NVIDIA built, and it turns them into incredibly valuable tokens. These tokens are what people experience on the amazing ChatGPT or Midjourney or search these days are augmented by that. All of your recommender systems are now augmented by that, the hyper-personalization that goes along with it. All of these incredible start-ups in digital biology generating proteins and generating chemicals and the list goes on. And so all of these tokens are generated in a very specialized type of Data Center. And this Data Center, we call it AI supercomputers and AI generation factories.

Nvidia’s management is seeing very strong demand for the company’s Hopper AI chips and expects demand to far outstrip supply

Demand for Hopper remains very strong. We expect our next generation products to be supply constrained as demand far exceeds supply…

…However, whenever we have new products, as you know, it ramps from 0 to a very large number, and you can’t do that overnight. Everything is ramped up. It doesn’t step up. And so whenever we have a new generation of products and right now, we are ramping H200s, there’s no way we can reasonably keep up on demand in the short term as we ramp. 

Nvidia’s outstanding 2023 Q4 growth in Data Center revenue was driven by both training and inference of AI models; management estimates that 40% of Nvidia’s Data Center revenue in 2023 was for AI inference; the 40% estimate might even be understated, because recommendation systems that were driven by CPU approaches are now being driven by GPUs

In the fourth quarter, Data Center revenue of $18.4 billion was a record, up 27% sequentially and up 409% year-on-year…

…Fourth quarter Data Center growth was driven by both training and inference of generative AI and large language models across a broad set of industries, use cases and regions. The versatility and leading performance of our Data Center platform enables a high return on investment for many use cases, including AI training and inference, data processing and a broad range of CUDA accelerated workloads. We estimate in the past year, approximately 40% of Data Center revenue was for AI inference…

…The estimate is probably understated and — but we estimated it, and let me tell you why. Whenever — a year ago, a year ago, the recommender systems that people are — when you run the Internet, the news, the videos, the music, the products that are being recommended to you because, as you know, the Internet has trillions — I don’t know how many trillions, but trillions of things out there, and your phone is 3 inches squared. And so the ability for them to fit all of that information down to something such a small real estate is through a system, an amazing system called recommender systems.

These recommender systems used to be all based on CPU approaches. But the recent migration to deep learning and now generative AI has really put these recommender systems now directly into the path of GPU acceleration. It needs GPU acceleration for the embeddings. It needs GPU acceleration for the nearest neighbor search. It needs GPU accelerating for reranking. And it needs GPU acceleration to generate the augmented information for you. So GPUs are in every single step of a recommender system now. And as you know, a recommender system is the single largest software engine on the planet. Almost every major company in the world has to run these large recommender systems. 
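
The retrieve-then-rerank shape of that pipeline is easy to sketch. The toy below uses NumPy on CPU with exact search and a made-up reranking score; production systems run each stage (embedding, approximate nearest-neighbour search, reranking) on GPUs over vastly larger catalogues.

```python
# Toy version of the GPU recommender pipeline described above: embed items,
# retrieve nearest neighbours for a user vector, then rerank the candidates.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 10_000, 64
item_vecs = rng.standard_normal((n_items, dim)).astype(np.float32)
item_vecs /= np.linalg.norm(item_vecs, axis=1, keepdims=True)

user_vec = rng.standard_normal(dim).astype(np.float32)
user_vec /= np.linalg.norm(user_vec)

# Stage 1: nearest-neighbour retrieval by cosine similarity (exact here;
# real systems use approximate search over billions of items).
scores = item_vecs @ user_vec
candidates = np.argsort(scores)[-100:]  # top-100 candidate set

# Stage 2: rerank candidates with a richer (here hypothetical) scoring model
# that can afford to be slower because it sees 100 items, not 10,000.
rerank_scores = scores[candidates] + 0.01 * rng.standard_normal(candidates.size)
top10 = candidates[np.argsort(rerank_scores)[-10:][::-1]]
print("recommended item ids:", top10)
```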

Nvidia’s management is seeing that all industries are deploying AI solutions

Building and deploying AI solutions has reached virtually every industry. Many companies across industries are training and operating their AI models and services at scale…

…One of the most notable trends over the past year is the significant adoption of AI by enterprises across the industry verticals such as Automotive, health care, and financial services.

Large cloud providers accounted for more than half of Nvidia’s Data Center revenue in 2023 Q4

In the fourth quarter, large cloud providers represented more than half of our Data Center revenue, supporting both internal workloads and external public cloud customers. 

Nvidia’s management is finding that consumer internet companies have been early adopters of AI and they are one of Nvidia’s largest customer categories; consumer internet companies are using AI (1) in content recommendation systems to boost user engagement and (2) to generate content for advertising and to help content creators

The consumer Internet companies have been early adopters of AI and represent one of our largest customer categories. Companies from search to e-commerce, social media, news and video services and entertainment are using AI for deep learning-based recommendation systems. These AI investments are generating a strong return by improving customer engagement, ad conversion and click-through rates…

… In addition, consumer Internet companies are investing in generative AI to support content creators, advertisers and customers through automation tools for content and ad creation, online product descriptions and AI shopping assistance.

Nvidia’s management is observing that enterprise software companies are using generative AI to help their customers with productivity and they are already seeing commercial success

Enterprise software companies are applying generative AI to help customers realize productivity gains. All the customers we’ve partnered with for both training and inference of generative AI are already seeing notable commercial success. ServiceNow’s generative AI products in their latest quarter drove their largest ever net new annual contract value contribution of any new product family release. We are working with many other leading AI and enterprise software platforms as well, including Adobe, Databricks, Getty Images, SAP, and Snowflake.

There are both enterprises and startups that are building foundational large language models; these models are serving specific cultures, regions, and also industries

The field of foundation large language models is thriving. Anthropic, Google, Inflection, Microsoft, OpenAI and xAI are leading with continued amazing breakthroughs in generative AI. Exciting companies like Adept, AI21, Character.AI, Cohere, Mistral, Perplexity and Runway are building platforms to serve enterprises and creators. New startups are creating LLMs to serve the specific languages, cultures and customs of the world’s many regions. And others are creating foundation models to address entirely different industries, like Recursion Pharmaceuticals and Generate Biomedicines in biology. These companies are driving demand for NVIDIA AI infrastructure through hyperscale or GPU-specialized cloud providers.

Nvidia’s AI infrastructure is used for autonomous driving; the automotive vertical accounted for more than $1 billion of Nvidia’s Data Center revenue in 2023, and Nvidia’s management thinks the automotive vertical is a big growth opportunity for the company

We estimate that Data Center revenue contribution of the Automotive vertical through the cloud or on-prem exceeded $1 billion last year. NVIDIA DRIVE infrastructure solutions include systems and software for the development of autonomous driving, including data ingestion, curation, labeling, and AI training, plus validation through simulation. Almost 80 vehicle manufacturers across global OEMs, new energy vehicles, trucking, robotaxi and Tier 1 suppliers are using NVIDIA’s AI infrastructure to train LLMs and other AI models for automated driving and AI cockpit applications. In effect, nearly every Automotive company working on AI is working with NVIDIA. As AV algorithms move to video transformers and more cars are equipped with cameras, we expect NVIDIA’s automotive Data Center processing demand to grow significantly…

…NVIDIA DRIVE Orin is the AI car computer of choice for software-defined AV fleet. Its successor, NVIDIA DRIVE Thor, designed for vision transformers offers more AI performance and integrates a wide range of intelligent capabilities into a single AI compute platform, including autonomous driving and parking, driver and passenger monitoring, and AI cockpit functionality and will be available next year. There were several automotive customer announcements this quarter. Li Auto, Great Wall Motor, ZEEKR, the premium EV subsidiary of Geely and Xiaomi EV all announced new vehicles built on NVIDIA.

Nvidia is developing AI solutions in the realm of healthcare

In health care, digital biology and generative AI are helping to reinvent drug discovery, surgery, medical imaging, and wearable devices. We have built deep domain expertise in health care over the past decade, creating the NVIDIA Clara health care platform and NVIDIA BioNeMo, a generative AI service to develop, customize and deploy AI foundation models for computer-aided drug discovery. BioNeMo features a growing collection of pre-trained biomolecular AI models that can be applied to the end-to-end drug discovery process. We announced Recursion is making available its proprietary AI model through BioNeMo for the drug discovery ecosystem.

Nvidia’s business in China is affected by the US government’s export restrictions concerning advanced AI chips; Nvidia has been building workarounds and have started shipping alternatives to China; Nvidia’s management expects China to remain a single-digit percentage of Data Center revenue in 2024 Q1; management thinks that while the US government wants to limit China’s access to leading-edge AI technology, it still wants to see Nvidia succeed in China

Growth was strong across all regions except for China, where our Data Center revenue declined significantly following the U.S. government export control regulations imposed in October. Although we have not received licenses from the U.S. government to ship restricted products to China, we have started shipping alternatives that don’t require a license for the China market. China represented a mid-single-digit percentage of our Data Center revenue in Q4, and we expect it to stay in a similar range in the first quarter…

…At the core, remember, the U.S. government wants to limit the latest capabilities of NVIDIA’s accelerated computing and AI to the Chinese market. And the U.S. government would like to see us be as successful in China as possible. Within those two constraints, within those two pillars, if you will, are the restrictions.

Nvidia’s management is seeing demand for AI infrastructure from countries become an additional growth-driver for the company

In regions outside of the U.S. and China, sovereign AI has become an additional demand driver. Countries around the world are investing in AI infrastructure to support the building of large language models in their own language on domestic data and in support of their local research and enterprise ecosystems…

…So we’re seeing sovereign AI infrastructure is being built in Japan, in Canada, in France, so many other regions. And so my expectation is that what is being experienced here in the United States, in the West will surely be replicated around the world. 

Nvidia is shipping its Hopper AI chips with InfiniBand networking; management believes that the combination of the company’s Hopper AI chips with InfiniBand has emerged as the de facto standard for AI infrastructure

The vast majority of revenue was driven by our Hopper architecture along with InfiniBand networking. Together, they have emerged as the de facto standard for accelerated computing and AI infrastructure. 

Nvidia is on track to ramp shipments of the latest generation of its most advanced AI chips – the H200 – in 2024 Q2; the H200 has nearly double the inference performance of its predecessor, the H100

We are on track to ramp H200 with initial shipments in the second quarter. Demand is strong as H200 nearly doubled the inference performance of H100. 

Nvidia’s networking solutions have a revenue run-rate of more than $13 billion, and the company’s Quantum InfiniBand solutions grew by more than five times year-on-year in 2023 Q4 – but Nvidia is also working on its own Ethernet AI networking solution called Spectrum-X, which is purpose-built for AI and performs better than traditional Ethernet for AI workloads; Spectrum-X has attracted leading OEMs as partners, and Nvidia is on track to ship the solution in 2024 Q1; management still sees InfiniBand as the standard for AI-dedicated systems

Networking exceeded a $13 billion annualized revenue run rate. Our end-to-end networking solutions define modern AI data centers. Our Quantum InfiniBand solutions grew more than 5x year-on-year. NVIDIA Quantum InfiniBand is the standard for the highest-performance AI-dedicated infrastructures. We are now entering the Ethernet networking space with the launch of our new Spectrum-X end-to-end offering designed for an AI-optimized networking for the Data Center. Spectrum-X introduces new technologies over Ethernet that are purpose-built for AI. Technologies incorporated in our Spectrum switch, BlueField DPU and software stack deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet. Leading OEMs, including Dell, HPE, Lenovo and Supermicro with their global sales channels are partnering with us to expand our AI solution to enterprises worldwide. We are on track to ship Spectrum-X this quarter…

…InfiniBand is the standard for AI-dedicated systems. Ethernet by itself is just not a very good scale-out system. But with Spectrum-X, we’ve augmented, layered on top of Ethernet, fundamental new capabilities like adaptive routing, congestion control, noise isolation or traffic isolation so that we could optimize Ethernet for AI. And so InfiniBand will be our AI-dedicated infrastructure, and Spectrum-X will be our AI-optimized networking.

Nvidia’s software and services offerings, which include DGX Cloud, the company’s AI-training-as-a-service platform, have reached a $1 billion annualised revenue run rate; DGX Cloud will soon be available on all the major cloud service providers; Nvidia’s management believes that the company’s software business will become very significant over time, because of the importance of software when running AI-related hardware

We also made great progress with our software and services offerings, which reached an annualized revenue run rate of $1 billion in Q4. NVIDIA DGX Cloud will expand its list of partners to include Amazon’s AWS, joining Microsoft Azure, Google Cloud, and Oracle Cloud. DGX Cloud is used for NVIDIA’s own AI R&D and custom model development as well as NVIDIA developers. It brings the CUDA ecosystem to NVIDIA CSP partners…

…And the way that we work with CSPs, that’s really easy. We have large teams that are working with their large teams. However, now that generative AI is enabling every enterprise and every enterprise software company to embrace accelerated computing, and when it is now essential to embrace accelerated computing because it is no longer possible, no longer likely anyhow, to sustain improved throughput through just general-purpose computing, all of these enterprise software companies and enterprise companies don’t have large engineering teams to be able to maintain and optimize their software stack to run across all of the world’s clouds and private clouds and on-prem.

So we are going to do the management, the optimization, the patching, the tuning, the installed base optimization for all of their software stacks. And we containerize them into our stack called NVIDIA AI Enterprise. And the way we go to market with it is think of that NVIDIA AI Enterprise now as a run time like an operating system. It’s an operating system for artificial intelligence. And we charge $4,500 per GPU per year. And my guess is that every enterprise in the world, every software enterprise company that are deploying software in all the clouds and private clouds and on-prem will run on NVIDIA AI Enterprise, especially obviously, for our GPUs. And so this is going to likely be a very significant business over time.

Nvidia’s gaming chips also have strong generative AI capabilities, leading to better gaming performance

At CES, we announced our GeForce RTX 40 Super Series family of GPUs. Starting at $599, they deliver incredible gaming performance and generative AI capabilities. Sales are off to a great start. NVIDIA AI Tensor Cores in the GPUs deliver up to 836 AI TOPS, perfect for powering AI for gaming, creating and everyday productivity. The rich software stack we offer with our RTX GPUs further accelerates AI. With our DLSS technologies, 7 out of 8 pixels can be AI-generated, resulting in up to 4x faster ray tracing and better image quality. And with TensorRT-LLM for Windows, our open-source library that accelerates inference performance for the latest large language models, generative AI can run up to 5x faster on RTX AI PCs.
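
One plausible way to arrive at the “7 out of 8 pixels” figure, assuming (as Nvidia has described for DLSS 3) upscaling from quarter resolution plus generating every other frame entirely, is the following back-of-envelope check; the decomposition is my assumption, not something stated in the call.

```python
# Back-of-envelope reading of the "7 out of 8 pixels AI-generated" figure,
# assuming quarter-resolution rendering plus synthesising every other frame.
rendered_per_frame = 1 / 4      # upscaling from quarter resolution
rendered_frames = 1 / 2         # frame generation: every other frame is AI-made
rendered_fraction = rendered_per_frame * rendered_frames
print(f"natively rendered pixels: {rendered_fraction:.3f}")      # 0.125 = 1/8
print(f"AI-generated pixels:      {1 - rendered_fraction:.3f}")  # 0.875 = 7/8
```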

Nvidia has announced new gaming AI laptops from every major laptop manufacturer; Nvidia has more than 100 million RTX PCs in its installed base, and management thinks the company is in a good position to lead the next wave of generative AI applications that are coming to the personal computer

At CES, we also announced a wave of new RTX 40 Series AI laptops from every major OEM. These bring high-performance gaming and AI capabilities to a wide range of form factors, including 14-inch and thin and light laptops. With up to 686 TOPS of AI performance, these next-generation AI PCs increase generative AI performance by up to 60x, making them the best performing AI PC platforms…

…NVIDIA is fueling the next wave of generative AI applications coming to the PC. With over 100 million RTX PCs in the installed base and over 500 AI-enabled PC applications and games, we are on our way.

Nvidia has a service that allows software developers to build state-of-the-art generative AI avatars

At CES, we announced NVIDIA Avatar Cloud Engine microservices, which allow developers to integrate state-of-the-art generative AI models into digital avatars. ACE won several Best of CES 2024 awards. NVIDIA has an end-to-end platform for building and deploying generative AI applications for RTX PCs and workstations. This includes libraries, SDKs, tools and services developers can incorporate into their generative AI workloads.

Nvidia’s management believes that generative AI cannot be done on traditional general-purpose computing – it has to be done on an accelerated computing framework

With accelerated computing, you can dramatically improve your energy efficiency. You can dramatically improve your cost in data processing by 20:1, huge numbers. And of course, the speed. That speed is so incredible that we enabled a second industry-wide transition called generative AI. In generative AI, I’m sure we’re going to talk plenty about it during the call. But remember, generative AI is a new application. It is enabling a new way of doing software, new types of software being created. It is a new way of computing. You can’t do generative AI on traditional general-purpose computing. You have to accelerate it.

The hardware supply chain for Nvidia’s GPUs is improving; the components that go into an Nvidia GPU are really complex

Our supply is improving. Overall, our supply chain is just doing an incredible job for us. Everything from, of course, the wafers, the packaging, the memories, all of the power regulators to transceivers and networking and cables, and you name it, the list of components that we ship. As you know, people think that NVIDIA GPUs is like a chip, but the NVIDIA Hopper GPU has 35,000 parts. It weighs 70 pounds. These things are really complicated things we’ve built. People call it an AI supercomputer for good reason. If you ever look at the back of the Data Center, the systems, the cabling system is mind-boggling. It is the most dense, complex cabling system for networking the world has ever seen. Our InfiniBand business grew 5x year-over-year. The supply chain is really doing fantastic supporting us. And so overall, the supply is improving. 

Nvidia’s management is allocating chips fairly to all of the company’s customers

CSPs have a very clear view of our product road map and transitions. And that transparency with our CSPs gives them the confidence of which products to place and where and when. And so they know the timing to the best of our ability, and they know quantities and, of course, allocation. We allocate fairly. We allocate fairly, do the best of our — best we can to allocate fairly and to avoid allocating unnecessarily.

Nvidia’s management is seeing a lot of activity emerging from robotics companies

There’s just a giant suite of robotics companies that are emerging. There are warehouse robotics to surgical robotics to humanoid robotics, all kinds of really interesting robotics companies, agriculture robotics companies.

Nvidia’s installed base of hardware has been able to support every single innovation in AI technology because it is programmable

NVIDIA is the only architecture that has gone from the very, very beginning, literally at the very beginning when CNNs and Alex Krizhevsky and Ilya Sutskever and Geoff Hinton first revealed AlexNet, all the way through RNNs to LSTMs to RL to deep RL to transformers to every single version and every species that has come along, vision transformers, multi-modality transformers, and now time sequence stuff. Every single variation, every single species of AI that has come along, we’ve been able to support it, optimize our stack for it and deploy it into our installed base…

… We simultaneously have this ability to bring software to the installed base and keep making it better and better and better. So our customers’ installed base is enriched over time with our new software…

…Don’t be surprised if in our future generation, all of a sudden, amazing breakthroughs in large language models were made possible. And those breakthroughs, some of which will be in software because they run CUDA, will be made available to the installed base. And so we carry everybody with us on the one hand, we make giant breakthroughs on the other hand.

A big difference between accelerated computing and general purpose computing is the importance of software in the former

As you know, accelerated computing is very different than general-purpose computing. You’re not starting from a program like C++. You compile it and things run on all your CPUs. The stacks of software necessary for every domain from data processing, SQL versus SQL structured data versus all the images and text and PDF, which is unstructured, to classical machine learning to computer vision to speech to large language models, all — recommender systems. All of these things require different software stacks. That’s the reason why NVIDIA has hundreds of libraries. If you don’t have software, you can’t open new markets. If you don’t have software, you can’t open and enable new applications. Software is fundamentally necessary for accelerated computing. This is the fundamental difference between accelerated computing and general-purpose computing that most people took a long time to understand. And now people understand that software is really key.

Nvidia’s management believes that generative AI has kicked off a massive new investment cycle for AI infrastructure

Generative AI has kicked off a whole new investment cycle to build the next trillion dollars of infrastructure of AI generation factories. We believe these two trends will drive a doubling of the world data center infrastructure installed base in the next 5 years and will represent an annual market opportunity in the hundreds of billions.
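
A doubling of the installed base over five years implies a compound annual growth rate of roughly 15%, as a quick check shows.

```python
# The "doubling of the world data center infrastructure installed base in the
# next 5 years" claim implies (1 + g)^5 = 2, so g = 2^(1/5) - 1.
growth = 2 ** (1 / 5) - 1
print(f"implied CAGR: {growth:.1%}")  # ~14.9% per year
```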

PayPal (NASDAQ: PYPL)

PayPal’s management will soon launch a new PayPal app that will utilise AI to personalise the shopping experience for consumers; management hopes to drive engagement with the app

This year, we’re launching and evolving a new PayPal app to create a situation. We will also leverage our merchant relationships and the power of AI to make the entire shopping experience personalized for consumers while giving them control over their data…

…The new checkout and app experiences we are rolling out this year will also create an engagement loop that will drive higher awareness of the various products we offer and drive higher adoption of our portfolio over time.

Shopify (NASDAQ: SHOP)

Shopify’s management launched nearly a dozen AI-powered tools through the Shopify Magic product suite in 2023, including tools for AI-generated product descriptions and an AI commerce assistant; in recent weeks, management launched AI product-image creation and editing tools within Shopify Magic; management will be introducing new modalities and text-to-image capabilities later this year

In 2023, we brought nearly a dozen AI-enabled tools through our Shopify Magic product suite. We’re one of the first platforms to bring AI-generated product descriptions to market and made solid progress towards building Sidekick, a first of its kind AI-enabled commerce assistant. As part of our winter edition a few weeks ago, we introduced new features to our Shopify Magic suite of AI tools. These new generative AI tools simplify and enhance product image editing directly within the product image editor in the Shopify admin. With Shopify Magic, merchants can now leverage AI to create stunning images and professional edits with just a few clicks or keywords, saving on cost and time. And given the significant advancements in AI in 2023, we plan to seize this enormous opportunity ahead of us and are excited to introduce new modalities and text to image capabilities to Shopify in 2024.

Shopify’s marketing paybacks have improved by over 30% with the help of AI

In terms of marketing, the 2 areas, in particular, where we are leaning in this quarter are performance marketing and point-of-sale. Within performance marketing, our team has unlocked some opportunities to reach potential customers at highly attractive LTV to CAC and paybacks. In fact, tactics that we’ve implemented on some channels earlier this year including through the enhanced use of AI and automation have improved paybacks by over 30%, enabling us to invest more into these channels while still maintaining our operating discipline on the underlying unit economics. 

Taiwan Semiconductor Manufacturing Company (NYSE: TSM)

TSMC’s management has increased the company’s capital expenditure materially over the last few years to capture the growth opportunities associated with AI

At TSMC, a higher level of capital expenditures is always correlated with higher growth opportunities in the following years. In the past few years, we have sharply increased our CapEx spending in preparation to capture or harvest the growth opportunities from HPC, AI and 5G megatrends.

TSMC’s management expects 2024 to be a healthy growth-year for the company with revenue growth in the low-to-mid 20s percentage range, driven by its 3nm technologies, 5nm technologies, and AI

Entering 2024, we forecast fabless semiconductor inventory to have returned to a [ handsome ] level exiting 2023. However, macroeconomic weakness and geopolitical uncertainties persist, potentially further weighing on consumer sentiment and the market demand. Having said that, our business has bottomed out on a year-over-year basis, and we expect 2024 to be a healthy growth year for TSMC, supported by continued strong ramp of our industry-leading 3-nanometer technologies, strong demand for the 5-nanometer technologies and robust AI-related demand.

TSMC’s management sees 2023 as the year that generative AI became important for the semiconductor industry, with TSMC as a key enabler; management thinks that the surge in AI-related demand in 2023 will drive an acceleration in structural demand for energy-efficient computing, and that AI will need to be supported by more powerful semiconductors – these are TSMC’s strengths

2023 was a challenging year for the global semiconductor industry, but we also witnessed the rising emergence of generative AI-related applications with TSMC as a key enabler…

…Despite the near-term challenges, our technology leadership enable TSMC to outperform the foundry industry in 2023, while we are positioning us to capture the future AI and high-performance computing-related growth opportunities…

…The surge in AI-related demand in 2023 supports our already strong conviction that the structural demand for energy-efficient computing will accelerate in an intelligent and connected world. TSMC is a key enabler of AI applications. No matter which approach is taken, AI technology is evolving to use more complex AI models as the amount of computation required for training and inference is increasing. As a result, AI models need to be supported by more powerful semiconductor hardware, which requires use of the most advanced semiconductor process technologies. Thus, the value of TSMC technology position is increasing, and we are all well positioned to capture the major portion of the market in terms of semiconductor component in AI. To address insatiable AI-related demand for energy-efficient computing power, customers rely on TSMC to provide the most leading edge processing technology at scale with a dependable and predictable cadence of technology offering.

Almost everyone important in AI is working with TSMC on its 2nm technologies

As process technology complexity increase, the engagement lead time with customers also started much earlier. Thus, almost all the AI innovators are working with TSMC, and we are observing a much higher level of customer interest and engagement at N2 as compared with N3 at a similar stage from both HPC and the smartphone applications.

TSMC’s management believes that the world has seen only the tip of the iceberg with AI

But on the other hand, AI is only in its nascent stage. Only last November, ChatGPT, the first large language model application, was announced. We have only seen the tip of the iceberg.

TSMC’s management believes that the use of AI could accelerate scientific innovation in the field of semiconductor manufacturing

So I want to give the industry an optimistic note: even though 1 nanometer or sub-1 nanometer could be challenging, we have a new technology capability, using AI to accelerate the innovation in science.

TSMC’s management still believes that its narrowly-defined AI business will grow at 50% annually; management also sees AI application process chips making up a high-teens weightage of TSMC’s revenue by 2027, up from a low-teens weightage mentioned in the 2023 second-quarter earnings call, because of a sudden increase in demand

But for TSMC, we look at ours here: the AI CAGR, the growth rate every year, is about 50%. And we are confident that we can capture more opportunities in the future. So that’s why we said that up to 2027, we are going to have high teens of the revenue from the very narrowly defined AI application processors, not to mention the networking, not to mention all others, okay?…

…[Question] You mentioned that we have a very narrow definition, we call server AI processor contribution and that you said it can be high teens in 5 years’ time because the last time, we said low teens.

[Answer] The demand suddenly being increased since last — I think, last year, the first quarter up to March or April, when ChatGPT become popular, so customers respond quickly and asked TSMC to prepare the capacity, both in front end and the back end. And that’s why we have confidence that this AI’s revenue will increase. We only narrowed down to the AI application process, by the way. So we look at ours here, that we prepare the technology and the capacities in both our front end and also back end. And so we — it’s in the early stage so far today. We already see the increase, the momentum. And we expect — if you guys continue to track this one, the number will increase. I have confidence to say that, although I don’t know how much.

TSMC’s management is seeing AI chips being placed in edge-devices such as smartphones and PCs 

And to further extend our value, actually, all the edge device, including smartphone, including the PC, they start to put the AI’s application inside. They have some kind of a neural process, for example, so the silicon content will be greatly increased. 

Tesla (NASDAQ: TSLA)

Tesla has released version 12 of its FSD (Full Self Driving) software, which is powered end-to-end by AI (artificial intelligence); Tesla will soon release it to over 400,000 vehicles in North America; FSD v12 is the first time AI has been used for pathfinding and vehicle controls, and within it, neural nets replaced around 330,000 lines of C++ code

For full self-driving, we’ve released version 12, which is a complete architectural rewrite compared to prior versions. This is end-to-end artificial intelligence. So [ nothing but ] nets basically, photons in and controls out. And it really is quite a profound difference. This is currently just with employees and a few customers, but we will be rolling out to all who — all those customers in the U.S. who request full self-driving in the weeks to come. That’s over 400,000 vehicles in North America. So this is the first time AI has been used not just for object perception but for pathfinding and vehicle controls. We replaced 330,000 lines of C++ code with neural nets. It’s really quite remarkable.

Tesla’s management believes that Tesla is the world’s most efficient company at AI inference because the company, out of necessity, has had to wring the most performance out of 3-year-old hardware

I think Tesla is probably the most efficient company in the world for AI inference. Out of necessity, we’ve actually had to be extremely good at getting the most out of hardware because hardware 3 at this point is several years old. So I don’t — I think we’re quite far ahead of any other company in the world in terms of AI inference efficiency, which is going to be a very important metric in the future in many arenas.

Tesla’s management thinks that the AI technologies the company has developed for vehicles translates well into a humanoid robot (Optimus); Tesla’s vehicles and Optimus both have the same inference computers

And the technologies that we — the AI technologies we’ve developed for the car translate quite well to a humanoid robot because the car is just a robot on 4 wheels. Tesla is arguably already the biggest robot maker in the world. It’s just a 4-wheeled robot. So Optimus is a robot with — a humanoid robot with arms and legs, just by far the most sophisticated humanoid robot that’s being developed anywhere in the world…

…As we improve the technology in the car, we improve the technology in Optimus at the same time. It runs the same AI inference computer that’s on the car, same training technology. I mean we’re really building the future. I mean the Optimus lab looks like the set of Westworld, but admittedly, that was not a super utopian situation.

Tesla’s management is hedging their bets for the company’s FSD-related chips with Nvidia’s GPUs while also pursuing Dojo (Tesla’s own AI chip design)

[Question] As a follow-up, your release does not mention Dojo, so if you could just provide us an update on where Dojo stands and at what point do you expect Dojo to be a resource in improving FSD. Or do you think that you now have sufficient supply of NVIDIA GPUs needed for the training of the system?

[Answer] I mean the AI part of your question is — that is a deep one. So we’re obviously hedging our bets here with significant orders of NVIDIA GPUs…

…And we’re pursuing the dual path of NVIDIA and Dojo.

Tesla’s management believes that Tesla’s progress in self-driving is limited by training and that in AI, the more training is done on the model, the less resources are required for inference

A lot of our progress in self-driving is training limited. Something that’s important with training, it’s much like a human. The more effort you put into training, the less effort you need in inference. So just like a person, if you train in a subject, sort of class, 10,000 hours, the less mental effort it takes to do something. If you remember when you first started to drive how much of your mental capacity it took to drive, it was — you had to be focused completely on driving. And after you’ve been driving for many years, it only takes a little bit of your mind to drive, and you can think about other things and still drive safely. So the more training you do, the more efficient it is at the inference level. So we do need a lot of training. And we’re pursuing the dual path of NVIDIA and Dojo.

Tesla’s management thinks that Dojo is a long shot – it has potential, but may not work out

But I would think of Dojo as a long shot. It’s a long shot worth taking because the payoff is potentially very high but it’s not something that is a high probability. It’s not like a sure thing at all. It’s a high risk, high payoff program. Dojo is working, and it is doing training jobs, so — and we are scaling it up. And we have plans for Dojo 1.5, Dojo 2, Dojo 3 and whatnot. So I think it’s got potential. I can’t emphasize enough, high risk, high payoff.

Tesla’s management thinks that Tesla’s AI-inference hardware in its vehicles can enable the company to perhaps possess the largest amount of compute resources for AI tasks in the world at some point in the future

There’s also our inference hardware in the car, so we’re now on what’s called Hardware 4, but it’s actually version 2 of the Tesla-designed AI inference chip. And we’re about to complete design of — the terminology is a bit confusing. About to complete design of Hardware 5, which is actually version 3 of the Tesla-designed chip because the version 1 was Mobileye. Version 2 was NVIDIA, and then version 3 was Tesla. So — and we’re making gigantic improvements from 1 — from Hardware 3 to 4 to 5. I mean there’s a potentially interesting play where when cars are not in use in the future, that the in-car computer can do generalized AI tasks, can run a sort of GPT4 or 3 or something like that. If you’ve got tens of millions of vehicles out there, even in a robotaxi scenario, whether in heavy use, maybe they’re used 50 out of 168 hours, that still leaves well over 100 hours of time available — of compute hours. Like it’s possible with the right architectural decisions that Tesla may, in the future, have more compute than everyone else combined.
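
The idle-compute arithmetic in that quote is easy to verify: 50 of 168 hours in heavy use leaves 118 compute-hours per car per week; the fleet size below is an illustrative stand-in for the “tens of millions of vehicles” mentioned.

```python
# Idle-compute arithmetic from the quote: a robotaxi in heavy use for 50 of
# the week's 168 hours still leaves 118 compute-hours per car per week.
hours_per_week = 168
driving_hours = 50
idle_hours = hours_per_week - driving_hours          # 118 hours/car/week
fleet = 10_000_000                                   # illustrative assumption
print(f"idle compute-hours per week: {fleet * idle_hours:,}")  # 1,180,000,000
```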

The Trade Desk (NASDAQ: TTD)

Trade Desk’s management believes that in a post-cookie world, advertisers will have to depend on authentication, new approaches to identity, first-party data, and AI-driven relevance tools – Trade Desk’s tools help create the best outcome in this world

The post-cookie world is one that will combine authentication, new approaches to identity, first-party data activation and advanced AI-driven relevance tools, all to create a new identity fabric for the Internet that is so much more effective than cookies ever were. The Internet is being replumbed and our product offerings create the best outcome for all of the open Internet. 

AI optimisations are distributed across Kokai, which is Trade Desk’s new platform that recently went live; Kokai helps advertisers understand and score every ad impression, and allows advertisers to use an audience-first approach in campaigns

In particular, Kokai represents a completely new way to understand and score the relevance of every ad impression across all channels. It allows advertisers to use an audience-first approach to their campaigns, targeting their audiences wherever they are on the open Internet. Our AI optimizations, which are now distributed across the platform, help optimize every element of the ad purchase process. Kokai is now live, and similar to Next Wave and Solimar, it will scale over the next year.

Based on Trade Desk’s management’s interactions with customers, the use of AI to forecast the impacts that advertisers’ decisions will have on their ad spending is a part of Kokai that customers love

A big part of what they love, to answer your question about what are they most excited about, is we have streamlined our reporting. We’ve made it way faster. There are some reports that you just have to wait multiple minutes for it because they’re just so robust, and we found ways to accelerate that. We’ve also added AI throughout the platform, especially in forecasting. So it’s a little bit like if you were to make a hypothetical trade in a trading platform for equity and then us tell you what we think is going to happen to the price action in the next 10 minutes. So we’re showing them what the effects of their changes are going to be before they even make them so that they don’t make mistakes. Because sometimes what happens is people put out a campaign. They’ll put tight restrictions on it. They’ll hope that it spends, then they come back a day or 2 or even 3 later and then realize they made it so difficult with their combination of targeting and pricing for us to buy anything that they didn’t spend much money. Or the opposite because they spent more and it wasn’t as effective as they wanted. So helping them see all of that before they do anything helped.

Trade Desk’s management believes that the company is reinforcing itself as the adtech AI leader; Trade Desk has been using AI in its platform since 2016

We are reinforcing our position as the adtech AI leader. We’ve been embedding AI into our platform since 2016, so it’s nothing new to us. But now it’s being distributed across our platform so our clients can make even better choices among the 15 million ad impression opportunities a second and understand which of those ads are most relevant to their audience segments at any given time.

Wix (NASDAQ: WIX)

Wix’s management added new AI features in 2023 to help users create content more easily; the key AI features introduced include a chat bot, code assistant, and text and image creators

This year, we meaningfully extended an already impressive toolkit of AI capabilities to include new AI-powered features that will help Wix users create visual and written web content more easily, optimize design and content layout, write code, and manage their websites and businesses more efficiently. The key AI products introduced in the last year include an AI chat experience for businesses, responsive AI design, AI code assistant, AI Meta Tag Creators and AI text and image creators, among several other AI design tools.

Wix’s management recently released an AI site generator that can create a full-blown, tailored, ready-to-publish website based on user prompts; management believes that Wix is the first to launch such an AI site generator; the site generator has received fantastic feedback so far, and is a good starting point for creating a new website, but it is only at Version 1

We also recently released our AI site generator and have heard fantastic feedback so far. I believe this will be the first AI tool on the market that creates a full-blown, tailored and ready-to-publish website integrated with relevant business applications based on user prompts…

… So we released what I would call version 1. It’s a great way for people to start with a website, meaning that you come in and you say, I’m a spa in New York City and I specialize in some specific things. And the AI will interview you: what makes your business unique, where are you located, how many people, tell us about those people and the staff members. And as a result, we generate a website for you that has all the great content, right? And the content will be text and images. Then it takes you to an experience where you can choose what you want the design to look like. And the AI will generate different designs for you. So you can say, I like this thing, I want a variation on that; I don’t like the colors, please change the colors; or I want colors that are more professional, or I want colors that are blue and yellow. And the AI will do it for you.

On the other hand, you can also say, well, I don’t really like the design, can you generate something very different or generate a small variation of that. In many ways, it’s a bit similar to Midjourney: what Midjourney is doing with images, we are doing with a full-blown website. The result of that is something that is probably 70% of the website that you need to have on average, right, sometimes it’s 95%, but sometimes it’s less than that. So it gives you an amazing way to start your website and shortens the amount of work that you need to do by about 70% to 80%. I think it’s fantastic and very exciting.

Wix’s management is seeing that the majority of the company’s new users today have adopted at least one AI tool and this has been a positive for Wix’s business

In fact, the majority of new users today are using at least 1 AI tool on the web creation journey. This has resulted in reduced friction and enhanced the creation experience for our users as well as increased conversion and improve monetization. 

Wix’s management expects AI to be a driver of Wix’s growth in 2024 and beyond

We expect our AI technology to be a significant driver of growth in 2024 and beyond…

…Third, as Avishai mentioned, uptake of the milestone AI initiatives of 2023 has been incredible, and we expect to see ramping conversion and monetization benefits from our entire AI toolkit for both self-creators and partners this year…

…But then again, also 2025 will be much better than 2024. I think that the first reason is definitely the launching new products. At the end of the day, we are a technology, a product company, and this is how we drive our growth, mostly from new features, some new products. And this is what we did in the past, and we will continue also to do in the future. So definitely, it’s coming from the partners business with launching Studio. It was a great launch for us. We see the traction in the market. We see the demand. We see how our agencies use it. I think, as you know, we mentioned a few times about the number of new accounts with more than 50% are new. I think that it’s — for us, it’s a great proxy to the fact that we are going to see much more that it would be significantly the major growth driver for us in the next few years. The second one is everything that we’ve done with AI, we see a tremendous results out of it, which we believe that we will continue into the next year. And as you know, as always, the third one is about trying to optimize our pricing strategy. And this is what we’ve done in the past, we’ll continue to do in the future. [indiscernible] both mentioned like a fourth reason, which is the overall demand that we see on a macro basis.

Wix’s management has been driving the company to use AI for internal processes; the internal AI tools include an open internal AI development platform that everyone at Wix can contribute to, and a generative AI conversational assistant for product teams in Wix; the internal AI tools have also helped Wix to save costs and improve its gross margin

We also leverage AI to improve many of our internal processes at Wix, especially research and development velocity. This includes an open internal AI deployment platform that allows everyone at Wix to contribute to building AI-driven user features in tandem. We also have a Gen AI-based platform dedicated to conversational assistants, which allows any product team at Wix to develop their own assistant tailored to specific user needs without having to start from scratch. With these platforms, we are able to develop and release high-quality AI-based features and tools efficiently and at scale…

…We ended 2023 with a total gross margin of 68%, an improvement of nearly 500 basis points compared to 2022. Throughout the year, we benefited from improved efficiencies in housing and infrastructure costs and optimization of support cost, partially aided by integrating AI into our workflows. Creative Subscriptions gross margin expanded to 82% in 2023. And Business Solutions gross margin grew to 29% for the full year as we continue to benefit from improving margin and new [indiscernible].

Wix’s management believes that there can be double-digit growth for the company’s self-creators business in the long run, partly because of AI products

And we mentioned that for self-creators in the long run, we believe that it will be a double-digit growth just because of that because it has the most effect of the macro environment which already started to see that it’s improving. But then again, also the new product and AI is 1 of the examples how we can bring increased conversion and also increase the growth of self-creators.

Zoom Video Communications (NASDAQ: ZM)

Zoom’s management launched Zoom AI Companion, a generative AI assistant, five months ago and it has been expanded to six Zoom products, all included at no extra cost to users; Zoom AI Companion now has 510,000 accounts enabled and has created 7.2 million meeting summaries

Zoom AI Companion, our generative AI assistant, empowers customers and employees with enhanced productivity, team effectiveness and skills. Since its launch only five months ago, we expanded AI Companion to six Zoom products, all included at no additional cost to licensed users…

…Zoom AI companion have grown tremendously in just 5 months with over 510,000 accounts enabled and 7.2 million meeting summaries created as of the close of FY ’24. 

Zoom’s future roadmap for AI is guided by driving customer value

Our future roadmap for AI is 100% guided by driving customer value. We are hard at work developing new AI capabilities to help customers achieve their unique business objectives and we’ll have more to share in a month at Enterprise Connect.

Zoom’s Contact Center suite is an AI-first solution that includes AI Companion; Contact Center suite is winning in head-to-head competition against legacy incumbents

Our expanding Contact Center suite is a unified, AI-first solution that offers tremendous value to companies of all sizes seeking to strengthen customer relationships and deliver better outcomes. The base product includes AI Companion and our newly launched tiered pricing allows customers to add specialized CX capabilities such as AI Expert Assist, workforce management, quality management, virtual agent, and omnichannel support. Boosted by its expanding features, our contact center suite is beginning to win in head-to-head competition with the legacy incumbents.

Zoom Revenue Accelerator gained recognition from Forrester as an AI-powered tool for sales teams

Zoom Revenue Accelerator was recognized as a “Strong Performer” in The Forrester Wave™ in its first year of being covered – an amazing testament to its value as a powerful AI-enabled tool driving value for sales teams.

A financial services company, Convera, was attracted to Zoom’s products because of AI Companion

Finally, let me thank Convera, the World’s FX payments leader. Zoom Phone was the foundation of their Zoom engagement and from there they adopted the wider Zoom One platform in less than two years. Seeing the benefits of the tight integration of our products underpinned by AI Companion, they recently began to deeply leverage Zoom Team Chat in order to streamline their pre, during and post meeting communication all within the Zoom Platform.

Zoom is monetising AI on many fronts

We are monetizing AI on many fronts. You look at our Zoom AI Companion, right? So first of all, for our existing customers, because they all like the value we created, right, to generate meeting summary, meeting [indiscernible] and so on and so forth, because of that, we really do not — because customers, they’re also trying to reduce the cost. That’s why we do not charge the customers for those features. However, a lot of areas we can monetize. Take our AI Companion, for example. Enterprise customers, how to lever enterprise customer directionally, source data and also to build a tailored — the Zoom AI Companion for those customers, sort of like a customized Zoom AI Companion, we can monetize. And also look at all the services. Maybe I’ll just take Contact Center, for example. We are offering Zoom Virtual Agent, that’s one we can monetize. And recently, we announced 3 tiers of Zoom Contact Center product. The last one is per agent per month, we charge $149. The reason why, there are a few features. One of the feature is Zoom Expert Assist, right? All those features are empowered by AI features.

Zoom’s AI-powered Virtual Agent was deployed internally and has saved Zoom 400,000 agent hours per month, and handled more than 90% of inbound inquiries; Zoom’s management believes that Zoom’s AI features help improve companies’ agent-efficiency in contact centers 

Zoom, we — internally, we deployed our Virtual Agent. Guess what? Every month, we saved 400,000 agent hours. And more than 90% inbound inquiries can be done by our Virtual Agent driven by the AI technology…

…If you look at our Zoom Meeting product, right, customer discovered that Zoom AI Companion to help you with the meeting summary. And after they discovered that feature and they would like to adopt that, right? Contact Center, exact same thing. And like Virtual Agent, Zoom Expert Assist, right, leverage those AI features. Manager kind of knows what’s going on in real time and also — and the agent while can have the AI, to get a real-time in order base and any update about these customers. All those AI features can dramatically improve the agent efficiency, right? That’s the reason why it’s kind of — will not take a much longer time for those agents to realize the value of the AI features because it’s kind of very easy to use. And I think that in terms of adoption rate, I feel like Contact Center AI adoption rate even probably faster than the other — the core features, so — core services.

Zoom’s management is seeing that having AI features at no additional cost to customers helps the company to attract users to Zoom Team Chat

[Question] And for Eric, what’s causing customers to move over to the Zoom chat function and off your main competitor like Teams? Just further consolidation onto one platform? Or is it AI Companion playing a larger role here, especially as you guys are including it as opposed to $30, $35 a month?

[Answer] Customers, they see — using their chat solution, they want to use AI, right? I send you — James, I send you a message. I want to leverage AI, send a long message. However, if you use other solutions, sometimes, other solutions itself, even without AI, it’s not free, right? And in our case, not only do we have core functionalities, but also AI Companion built in also at no additional cost. I can use — for any users, customers, you already have a Meeting license, Zoom Team Chat already built in, right? All the core features, you can use the Zoom AI Companion in order to leverage AI — write a chat message and so on and so forth. It works so well at no additional cost. The total cost of ownership of the Zoom Team Chat is much better than any other team chat solutions.


 Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I have a vested interest in Adobe, Alphabet, Amazon, Apple, Datadog, Etsy, Fiverr, Mastercard, MercadoLibre, Meta Platforms, Microsoft, PayPal, Shopify, TSMC, Tesla, The Trade Desk, Wix, and Zoom. Holdings are subject to change at any time.

Insights From Warren Buffett’s 2023 Shareholder’s Letter

There’s much to learn from Warren Buffett’s latest letter, including his thoughts on oil & gas companies and the electric utility industry.

One document I always look forward to reading around this time of the year is Warren Buffett’s annual Berkshire Hathaway shareholder’s letter. Over the weekend, Buffett published the 2023 edition. This letter is especially poignant because Buffett’s long-time right-hand man, the great Charlie Munger, passed away last November. Besides containing a touching eulogy from Buffett to Munger, the letter also had some fascinating insights from Buffett that I wish to document and share. 

Without further ado (emphases are Buffett’s)…

The actions of a wonderful partner 

Charlie never sought to take credit for his role as creator but instead let me take the bows and receive the accolades. In a way his relationship with me was part older brother, part loving father. Even when he knew he was right, he gave me the reins, and when I blundered he never – never – reminded me of my mistake.

It’s hard to tell a good business from a bad one

Within capitalism, some businesses will flourish for a very long time while others will prove to be sinkholes. It’s harder than you would think to predict which will be the winners and losers. And those who tell you they know the answer are usually either self-delusional or snake-oil salesmen. 

Holding onto a great business – one that can deploy additional capital at high returns – for a long time is a recipe for building a great fortune

At Berkshire, we particularly favor the rare enterprise that can deploy additional capital at high returns in the future. Owning only one of these companies – and simply sitting tight – can deliver wealth almost beyond measure. Even heirs to such a holding can – ugh! – sometimes live a lifetime of leisure…

…You may be thinking that she put all of her money in Berkshire and then simply sat on it. But that’s not true. After starting a family in 1956, Bertie was active financially for 20 years: holding bonds, putting 1⁄3 of her funds in a publicly-held mutual fund and trading stocks with some frequency. Her potential remained unnoticed. 

Then, in 1980, when 46, and independent of any urgings from her brother, Bertie decided to make her move. Retaining only the mutual fund and Berkshire, she made no new trades during the next 43 years. During that period, she became very rich, even after making large philanthropic gifts (think nine figures). 

Berkshire’s size is now a heavy anchor on the company’s future growth rates

This combination of the two necessities I’ve described for acquiring businesses has for long been our goal in purchases and, for a while, we had an abundance of candidates to evaluate. If I missed one – and I missed plenty – another always came along.

Those days are long behind us; size did us in, though increased competition for purchases was also a factor.

Berkshire now has – by far – the largest GAAP net worth recorded by any American business. Record operating income and a strong stock market led to a yearend figure of $561 billion. The total GAAP net worth for the other 499 S&P companies – a who’s who of American business – was $8.9 trillion in 2022. (The 2023 number for the S&P has not yet been tallied but is unlikely to materially exceed $9.5 trillion.) 

By this measure, Berkshire now occupies nearly 6% of the universe in which it operates. Doubling our huge base is simply not possible within, say, a five-year period, particularly because we are highly averse to issuing shares (an act that immediately juices net worth)…

…All in all, we have no possibility of eye-popping performance…

…Our Japanese purchases began on July 4, 2019. Given Berkshire’s present size, building positions through open-market purchases takes a lot of patience and an extended period of “friendly” prices. The process is like turning a battleship. That is an important disadvantage which we did not face in our early days at Berkshire.  
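
As a quick arithmetic check of the “nearly 6%” figure in the quote above, here is a minimal sketch of my own (not from the letter), using the net-worth numbers Buffett cites:

```python
# Rough check of the "nearly 6%" figure using the quoted GAAP net-worth numbers.
berkshire = 561      # US$ billions, Berkshire's yearend net worth
other_499 = 8_900    # US$ billions, 2022 total for the other 499 S&P companies
share = berkshire / (berkshire + other_499)
print(f"{share:.1%}")  # ~5.9%, i.e. "nearly 6%"
```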

Is there a dearth of large, great businesses outside of the USA?

There remain only a handful of companies in this country capable of truly moving the needle at Berkshire, and they have been endlessly picked over by us and by others. Some we can value; some we can’t. And, if we can, they have to be attractively priced. Outside the U.S., there are essentially no candidates that are meaningful options for capital deployment at Berkshire.

Markets can occasionally throw up massive bargains because of external shocks

Occasionally, markets and/or the economy will cause stocks and bonds of some large and fundamentally good businesses to be strikingly mispriced. Indeed, markets can – and will – unpredictably seize up or even vanish as they did for four months in 1914 and for a few days in 2001.

Stock market participants today exhibit even more gambling-like behaviour than in the past

Though the stock market is massively larger than it was in our early years, today’s active participants are neither more emotionally stable nor better taught than when I was in school. For whatever reasons, markets now exhibit far more casino-like behavior than they did when I was young. The casino now resides in many homes and daily tempts the occupants.

Stock buybacks are only sensible if they are done at a discount to business-value

All stock repurchases should be price-dependent. What is sensible at a discount to business-value becomes stupid if done at a premium.

Does Occidental Petroleum play a strategic role in the long-term economic security of the USA?

At yearend, Berkshire owned 27.8% of Occidental Petroleum’s common shares and also owned warrants that, for more than five years, give us the option to materially increase our ownership at a fixed price. Though we very much like our ownership, as well as the option, Berkshire has no interest in purchasing or managing Occidental. We particularly like its vast oil and gas holdings in the United States, as well as its leadership in carbon-capture initiatives, though the economic feasibility of this technique has yet to be proven. Both of these activities are very much in our country’s interest.

Not so long ago, the U.S. was woefully dependent on foreign oil, and carbon capture had no meaningful constituency. Indeed, in 1975, U.S. production was eight million barrels of oil-equivalent per day (“BOEPD”), a level far short of the country’s needs. From the favorable energy position that facilitated the U.S. mobilization in World War II, the country had retreated to become heavily dependent on foreign – potentially unstable – suppliers. Further declines in oil production were predicted along with future increases in usage. 

For a long time, the pessimism appeared to be correct, with production falling to five million BOEPD by 2007. Meanwhile, the U.S. government created a Strategic Petroleum Reserve (“SPR”) in 1975 to alleviate – though not come close to eliminating – this erosion of American self-sufficiency.

And then – Hallelujah! – shale economics became feasible in 2011, and our energy dependency ended. Now, U.S. production is more than 13 million BOEPD, and OPEC no longer has the upper hand. Occidental itself has annual U.S. oil production that each year comes close to matching the entire inventory of the SPR. Our country would be very – very – nervous today if domestic production had remained at five million BOEPD, and it found itself hugely dependent on non-U.S. sources. At that level, the SPR would have been emptied within months if foreign oil became unavailable.

Under Vicki Hollub’s leadership, Occidental is doing the right things for both its country and its owners. 

Nobody knows what the price of oil will do in the short term or the long term

No one knows what oil prices will do over the next month, year, or decade.

Nobody can predict the movement of major currencies

Neither Greg nor I believe we can forecast market prices of major currencies. We also don’t believe we can hire anyone with this ability. Therefore, Berkshire has financed most of its Japanese position with the proceeds from ¥1.3 trillion of bonds.

Rail is a very cost-efficient way to move products around America, and railroads should continue to be an important asset for the USA for a long time to come

Rail is essential to America’s economic future. It is clearly the most efficient way – measured by cost, fuel usage and carbon intensity – of moving heavy materials to distant destinations. Trucking wins for short hauls, but many goods that Americans need must travel to customers many hundreds or even several thousands of miles away…

…A century from now, BNSF will continue to be a major asset of the country and of Berkshire. You can count on that.

Railroad companies gobble up capital, such that their owners have to spend far more on annual maintenance capital expenditure than depreciation – but this trait allowed Berkshire to acquire BNSF for far less than its replacement value

BNSF is the largest of six major rail systems that blanket North America. Our railroad carries its 23,759 miles of main track, 99 tunnels, 13,495 bridges, 7,521 locomotives and assorted other fixed assets at $70 billion on its balance sheet. But my guess is that it would cost at least $500 billion to replicate those assets and decades to complete the job.

BNSF must annually spend more than its depreciation charge to simply maintain its present level of business. This reality is bad for owners, whatever the industry in which they have invested, but it is particularly disadvantageous in capital-intensive industries.

At BNSF, the outlays in excess of GAAP depreciation charges since our purchase 14 years ago have totaled a staggering $22 billion or more than $1.5 billion annually. Ouch! That sort of gap means BNSF dividends paid to Berkshire, its owner, will regularly fall considerably short of BNSF’s reported earnings unless we regularly increase the railroad’s debt. And that we do not intend to do.

Consequently, Berkshire is receiving an acceptable return on its purchase price, though less than it might appear, and also a pittance on the replacement value of the property. That’s no surprise to me or Berkshire’s board of directors. It explains why we could buy BNSF in 2010 at a small fraction of its replacement value.

Railroad companies are having trouble with hiring because of tough working conditions

An evolving problem is that a growing percentage of Americans are not looking for the difficult, and often lonely, employment conditions inherent in some rail operations. Engineers must deal with the fact that among an American population of 335 million, some forlorn or mentally-disturbed Americans are going to elect suicide by lying in front of a 100-car, extraordinarily heavy train that can’t be stopped in less than a mile or more. Would you like to be the helpless engineer? This trauma happens about once a day in North America; it is far more common in Europe and will always be with us.

American railroad companies are at times at the mercy of the US government when it comes to employees’ wages, and they are also required to carry products they would rather not

Wage negotiations in the rail industry can end up in the hands of the President and Congress. Additionally, American railroads are required to carry many dangerous products every day that the industry would much rather avoid. The words “common carrier” define railroad responsibilities.

Last year BNSF’s earnings declined more than I expected, as revenues fell. Though fuel costs also fell, wage increases, promulgated in Washington, were far beyond the country’s inflation goals. This differential may recur in future negotiations.

Has the electric utility industry in the USA become uninvestable because of a change in the authorities’ stance toward electric utilities?

For more than a century, electric utilities raised huge sums to finance their growth through a state-by-state promise of a fixed return on equity (sometimes with a small bonus for superior performance). With this approach, massive investments were made for capacity that would likely be required a few years down the road. That forward-looking regulation reflected the reality that utilities build generating and transmission assets that often take many years to construct. BHE’s extensive multi-state transmission project in the West was initiated in 2006 and remains some years from completion. Eventually, it will serve 10 states comprising 30% of the acreage in the continental United States. 

With this model employed by both private and public-power systems, the lights stayed on, even if population growth or industrial demand exceeded expectations. The “margin of safety” approach seemed sensible to regulators, investors and the public. Now, the fixed-but-satisfactory-return pact has been broken in a few states, and investors are becoming apprehensive that such ruptures may spread. Climate change adds to their worries. Underground transmission may be required but who, a few decades ago, wanted to pay the staggering costs for such construction?

At Berkshire, we have made a best estimate for the amount of losses that have occurred. These costs arose from forest fires, whose frequency and intensity have increased – and will likely continue to increase – if convective storms become more frequent.

It will be many years until we know the final tally from BHE’s forest-fire losses and can intelligently make decisions about the desirability of future investments in vulnerable western states. It remains to be seen whether the regulatory environment will change elsewhere.

Other electric utilities may face survival problems resembling those of Pacific Gas and Electric and Hawaiian Electric. A confiscatory resolution of our present problems would obviously be a negative for BHE, but both that company and Berkshire itself are structured to survive negative surprises. We regularly get these in our insurance business, where our basic product is risk assumption, and they will occur elsewhere. Berkshire can sustain financial surprises but we will not knowingly throw good money after bad.

Whatever the case at Berkshire, the final result for the utility industry may be ominous: Certain utilities might no longer attract the savings of American citizens and will be forced to adopt the public-power model. Nebraska made this choice in the 1930s and there are many public-power operations throughout the country. Eventually, voters, taxpayers and users will decide which model they prefer. 

When the dust settles, America’s power needs and the consequent capital expenditure will be staggering. I did not anticipate or even consider the adverse developments in regulatory returns and, along with Berkshire’s two partners at BHE, I made a costly mistake in not doing so. 


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

Shorting Stocks Is Hard, Really Hard

It’s far easier to recognise poor underlying business fundamentals in a stock and simply avoid investing in it.

In investing parlance, to “short a stock” is to make an investment with the view that a stock’s price will decline. On the surface, shorting seems like a fairly easy thing to do for an investor who has skill in “going long”, which is to invest with the view that a stock’s price will rise – you just have to do the opposite of what’s working.

But if you peer beneath the hood, shorting can be a really difficult way to invest in the stock market. Nearly four years ago in April 2020, I wrote Why It’s So Difficult To Short Stocks, where I used the story of Luckin Coffee to illustrate just how gnarly shorting stocks can be:

In one of our gatherings in June 2019, a well-respected member and deeply accomplished investor in the club gave a presentation on Luckin Coffee (NASDAQ: LK)…

…At the time of my club mate’s presentation, Luckin’s share price was around US$20, roughly the same level from the close of its IPO in May 2019. He sold his Luckin shares in January 2020, around the time when Luckin’s share price peaked at US$50. Today, Luckin’s share price is around US$4. The coffee chain’s share price tanked by 76% from US$26 in one day on 2 April 2020 and continued falling before stock exchange operator NASDAQ ordered a trading halt for Luckin shares…

…The wheels came off the bus only on 2 April 2020. On that day, Luckin announced that the company’s board of directors is conducting an internal investigation. There are fraudulent transactions – occurring from the second quarter of 2019 to the fourth quarter of 2019 – that are believed to amount to RMB 2.2 billion (around US$300 million). For perspective, Luckin’s reported revenue for the 12 months ended 30 September 2019 was US$470 million, according to Ycharts. The exact extent of the fraudulent transactions has yet to be finalised. 

Luckin also said that investors can no longer rely on its previous financial statements for the nine months ended 30 September 2019. The company’s chief operating officer, Liu Jian, was named as the primary culprit for the misconduct. He has been suspended from his role…

…it turns out that fraudulent transactions at Luckin could have happened as early as April 2019. From 1 April 2019 to 31 January 2020, Luckin’s share price actually increased by 59%. At one point, it was even up by nearly 150%.

If you had shorted Luckin’s shares back in April 2019, you would have faced a massive loss – more than what you had put in – even if you had been right on Luckin committing fraud. This shows how tough it is to short stocks. Not only must your analysis on the fundamentals of the business be right, but your timing must also be right because you could easily lose more than you have if you’re shorting. 
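
To make the danger concrete, here is a minimal sketch of short-selling profit-and-loss; the 100-share position size is hypothetical, and the prices are the approximate Luckin prices from the story above:

```python
# A minimal sketch of short-selling profit-and-loss. The 100-share position
# is hypothetical; the prices are the approximate Luckin prices cited above.
def short_pnl(entry_price: float, current_price: float, shares: int) -> float:
    """Positive means profit, negative means loss on a short position."""
    return (entry_price - current_price) * shares

proceeds = 20 * 100               # short 100 shares at ~US$20: US$2,000 received
print(short_pnl(20, 50, 100))     # at the ~US$50 peak: -3000, more than the proceeds
print(short_pnl(20, 4, 100))      # at ~US$4: +1600, but only if the position survived
```

The asymmetry is the point: the interim loss at the peak exceeded the cash the short initially brought in, which is exactly why a correct thesis can still be ruinous.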

Recent developments at a company named Herbalife (NYSE: HLF) present another similar illustration of the onerous task of shorting. High-profile investor Bill Ackman first disclosed that he was short Herbalife in December 2012. Back then, the company was a “global network marketing company that sells weight management, nutritional supplement, energy, sports & fitness products and personal care products” in 79 countries, according to its 2011 annual report. Today, Herbalife is a “global nutrition company that provides health and wellness products to consumers in 95 markets,” based on a description given in its 2023 annual report. So the company has been in pretty much the same line of business over this span of time.

Ackman’s short-thesis centred on his view that Herbalife was a company running an illegal pyramid scheme, and so the business model was simply not sustainable. When Ackman announced that he was short Herbalife’s shares, the company was reporting consistent and strong growth in its business. From 2006 to 2011, Herbalife’s revenue compounded at an annualised rate of 13% from US$1.9 billion to US$3.5 billion while its profit grew from US$143 million to US$415 million, representing a compounded annual growth rate of 24%.
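
For anyone who wants to verify those growth rates, a compounded annual growth rate is simply (end ÷ start)^(1/years) − 1. A quick sketch using the figures above:

```python
# Back-of-envelope check of the cited growth rates (2006 to 2011 is 5 years).
def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

print(f"Revenue CAGR: {cagr(1.9, 3.5, 5):.0%}")  # ~13% (US$ billions)
print(f"Profit CAGR:  {cagr(143, 415, 5):.0%}")  # ~24% (US$ millions)
```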

Although Herbalife has to date never officially been found to be operating an illegal pyramid scheme, its business results since Ackman went public with his short have been poor. The table below shows Herbalife’s revenue, net income, and net income margins from 2011 to 2023. What’s notable is the clear downward trend in both Herbalife’s net income and net income margin in that time frame.

Source: Tikr

According to a Bloomberg article published at the end of February 2018, Ackman had effectively ended his short position on Herbalife by the time the piece came to print. I think most investors, if made to guess Ackman’s returns from his Herbalife short by looking only at the trajectory of the company’s financials from 2011 to 2017, would have noted the stark deterioration – the company’s net income declined by nearly 40% and its net income margin shrank from 12.0% to 4.8% – and concluded that Ackman had probably made a decent gain.

But the stock market had other ideas. Herbalife’s stock price closed at US$23.16 on the day just prior to Ackman’s first public declaration of his short position. It closed at US$46.05 – a double from US$23.16 – when the aforementioned Bloomberg article was published. From December 2012 to today, the highest close for Herbalife’s stock price was US$61.47, which was reached on 4 February 2019. Right now, Herbalife’s stock price is at US$8.07. This comes after Herbalife’s stock price fell by 32% to US$8.03 on 15 February 2024 after the company reported its 2023 fourth-quarter results. Following the sharp decline, Ackman proclaimed on X (previously known as Twitter) that “it is a very good day for my psychological short on Herbalife.” 

The market eventually reflected the deterioration in Herbalife’s fundamentals, but the interim journey was a wild ride. In a similar manner to Luckin Coffee (and borrowing the prose from the last paragraph of the excerpts above from Why It’s So Difficult To Short Stocks), if you had shorted Herbalife’s shares back in December 2012 and held onto the position till now, you would have faced a massive loss in the interim – more than what you had put in – even if you were right about Herbalife’s collapsing fundamentals and eventual stock price decline.

The investing sage Philip Fisher once wrote that “it is often easier to tell what will happen to the price of a stock than how much time will elapse before it happens.” This explains why shorting stocks is hard – really hard. To be successful at shorting, you need to correctly read both the stock’s underlying business fundamentals and the timing of the stock’s price movement. In contrast, it’s far easier to recognise poor underlying business fundamentals in a stock and simply avoid investing in it.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

How Innovation Happens

Innovation can appear from the most unexpected places, take unpredictable paths, or occur when supporting technologies improve over time.

There are a myriad of important political, social, economic, and healthcare issues that are plaguing our globe today. But Jeremy and I are still long-term optimistic on the stock market.

This is because we still see so much potential in humanity. There are nearly 8.1 billion individuals in the world right now, and the vast majority of people will wake up every morning wanting to improve the world and their own lot in life. This – the desire for progress – is ultimately what fuels the global economy and financial markets. Miscreants and Mother Nature will occasionally wreak havoc but we have faith that humanity can clean it up. To us, investing in stocks is ultimately the same as having faith in the long-term ingenuity of humanity. We will remain long-term optimistic on stocks so long as we continue to have this faith.

There may be times in the future when it seems that mankind’s collective ability to innovate is faltering (things are booming now with the AI rush). But here are three stories I learnt recently that would help me – and I hope you, too – keep the faith.

The first story is from Morgan Housel’s latest book Same As Ever. In it, he wrote: 

“Author Safi Bahcall notes that Polaroid film was discovered when sick dogs that were fed quinine to treat parasites showed an unusual type of crystal in their urine. Those crystals turned out to be the best polarizers ever discovered. Who predicts that? Who sees that coming? Nobody. Absolutely nobody.”

What the quinine and polarizers story shows is that the root of innovative ideas can show up completely unexpectedly. This brings me to the second story, which is also from Same As Ever. This time, it is Housel’s recounting of how the invention of planes moved in an unpredictable path that led to the invention of nuclear power plants (nuclear power is a zero-emission, clean energy source, so it could play a really important role in society’s sustainable energy efforts), and how a 1960s invention linking computers to manage Cold War secrets unpredictably led to the photo-sharing social app Instagram:

“When the airplane came into practical use in the early 1900s, one of the first tasks was trying to foresee what benefits would come from it. A few obvious ones were mail delivery and sky racing.

No one predicted nuclear power plants. But they wouldn’t have been possible without the plane. Without the plane we wouldn’t have had the aerial bomb. Without the aerial bomb we wouldn’t have had the nuclear bomb. And without the nuclear bomb we wouldn’t have discovered the peaceful use of nuclear power. Same thing today. Google Maps, TurboTax, and Instagram wouldn’t be possible without ARPANET, a 1960s Department of Defense project linking computers to manage Cold War secrets, which became the foundation for the internet. That’s how you go from the threat of nuclear war to filing your taxes from your couch—a link that was unthinkable fifty years ago, but there it is.”

This idea of one innovation leading to another brings me to my third story. There was a breakthrough in the healthcare industry in November 2023 when the UK’s health regulator approved a drug named Casgevy – developed by CRISPR Therapeutics and Vertex Pharmaceuticals – for the treatment of the blood disorders known as sickle cell disease and beta thalassaemia. Casgevy’s green light is groundbreaking because it is the first drug in the world to be approved that is based on the CRISPR (clustered regularly interspaced short palindromic repeats) gene-editing technique. A few weeks after the UK’s decision, Casgevy became the first gene-editing treatment available in the USA for sickle cell disease (the use of Casgevy for beta thalassaemia in the USA is currently still being studied). Casgevy is a huge upgrade for sickle cell patients over the current way the condition is managed. Here’s Sarah Zhang, writing at The Atlantic in November 2023:

When Victoria Gray was still a baby, she started howling so inconsolably during a bath that she was rushed to the emergency room. The diagnosis was sickle-cell disease, a genetic condition that causes bouts of excruciating pain—“worse than a broken leg, worse than childbirth,” one doctor told me. Like lightning crackling in her body is how Gray, now 38, has described the pain. For most of her life, she lived in fear that it could strike at any moment, forcing her to drop everything to rush, once again, to the hospital.

After a particularly long and debilitating hospitalization in college, Gray was so weak that she had to relearn how to stand, how to use a spoon. She dropped out of school. She gave up on her dream of becoming a nurse.

Four years ago, she joined a groundbreaking clinical trial that would change her life. She became the first sickle-cell patient to be treated with the gene-editing technology CRISPR—and one of the first humans to be treated with CRISPR, period. CRISPR at that point had been hugely hyped, but had largely been used only to tinker with cells in a lab. When Gray got her experimental infusion, scientists did not know whether it would cure her disease or go terribly awry inside her. The therapy worked—better than anyone dared to hope. With her gene-edited cells, Gray now lives virtually symptom-free. Twenty-nine of 30 eligible patients in the trial went from multiple pain crises every year to zero in 12 months following treatment.

The results are so astounding that this therapy, from Vertex Pharmaceuticals and CRISPR Therapeutics, became the first CRISPR medicine ever approved, with U.K. regulators giving the green light earlier this month; the FDA appears prepared to follow suit in the next two weeks.” 

The manufacturing technologies behind Casgevy include electroporation, where an electric field is used to increase the permeability of a cell’s membrane. This enables molecules, such as genetic material and proteins, to be introduced in a cell for the purposes of gene editing. According to an expert-call on electroporation that I reviewed, the technology has been around for over four decades, but only started gaining steam in recent years with the decline in genetic sequencing costs; without affordable genetic sequencing, it was expensive to know if a gene editing process done via electroporation was successful. The relentless work of Illumina has played a huge role in lowering genetic sequencing costs over time.

These show how one innovation (cheaper genetic sequencing) supported another in a related field (the viability of electroporation) that then enabled yet another in a related field (the creation of gene editing therapies).    

The three stories I just shared highlight the different ways that innovation can happen. It can appear from the most unexpected places (quinine and polarizers); it can take unpredictable paths (from planes to nuclear power plants); and it can occur when supporting technologies improve over time (the development of Casgevy). What they signify is that we shouldn’t lose hope in mankind’s creative prowess when it appears that nothing new of significance has been built for a while. Sometimes, what’s needed is just time.


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life.  I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

An Attempt To Expand Our Circle of Competence

We tried to expand the limits of our investing knowledge.

Jeremy and I have not invested in an oil & gas company for years. The reason can be traced to the very first stocks I bought when I started investing. Back then, in October 2010, I bought six US-listed stocks at one go, two of which were Atwood Oceanics and National Oilwell Varco (or NOV). Atwood was an owner of oil rigs while NOV supplied parts and equipment that kept oil rigs running. 

I invested in them because I wanted to be diversified according to sectors. I thought that oil & gas was a sector that was worth investing in since the demand for oil would likely remain strong for a long time. My view on the demand for oil was right, but the investments still went awry. By the time I sold Atwood and NOV in September 2016 and June 2017, respectively, their stock prices were down by 77% and 31% from my initial investments. 

It turned out that while global demand for oil did indeed grow from 2010 to 2016 – the consumption of oil increased from 86.5 million barrels per day to 94.2 million barrels per day – oil prices still fell significantly over the same period, from around US$80 per barrel to around US$50. I was not able to predict oil prices, and I had completely missed the important fact that these prices would have an outsized impact on the business fortunes of both Atwood and NOV.

In its fiscal year ended 30 September 2010 (FY2010), Atwood’s revenue and net income were US$650 million and US$257 million, respectively. By FY2016, Atwood’s revenue had increased to US$1.0 billion, but its net income barely budged, coming in at US$265 million. Importantly, its return on equity fell from 21% to 9% in that period while its balance sheet worsened dramatically. For perspective, Atwood’s net debt (total debt minus cash and equivalents) ballooned from US$49 million in FY2010 to US$1.1 billion in FY2016.

As for NOV, from 2010 to 2016, its revenue fell from US$12.2 billion to US$7.2 billion and its net income collapsed from US$1.7 billion to a loss of US$2.4 billion. This experience taught me to be wary of companies whose business results have strong links to commodity prices, since I had no ability to foretell their movements. 

Fast forward to the launch of the investment fund that Jeremy and I run in July 2020, and I was clear that I still had no ability to divine oil prices – and neither did Jeremy. Said another way, we were fully aware that companies related to the oil & gas industry were beyond our circle of competence. Then 2022 rolled around and during the month of August, we came across a US-listed oil & gas company named Unit Corporation. 

At the time, Unit had three segments that spanned the oil & gas industry’s value chain: Oil and Natural Gas, Mid-Stream, and Contract Drilling. In the Oil and Natural Gas segment, Unit owned oil and natural gas fields in the USA – most of which were in the Anadarko Basin in the Oklahoma region – and was producing these natural resources. The Mid-Stream segment consisted of Unit’s 50% ownership of Superior Pipeline Company, which gathers, processes, and treats natural gas, and owns more than 3,800 miles of gas pipelines (a private equity firm, Partners Group, controlled the other 50% stake). The last segment, Contract Drilling, was where Unit owned 21 available-for-use rigs for the drilling of oil and gas.

When we first heard of Unit in August 2022, it had a stock price of around US$60, a market capitalisation of just over US$560 million, and an enterprise value (market capitalisation minus net-cash) of around US$470 million (Unit’s net-cash was US$88 million back then). But the company’s intrinsic value could be a lot higher. 

In January 2022, Unit launched a sales process for its entire Oil and Natural Gas segment, pegging the segment’s proven, developed, and producing reserves at a value of US$765 million. This US$765 million value came from the estimated future cash flows of the segment – based on oil prices we believe were around US$80 per barrel – discounted back to the present at 10% per year. Unit ended the sales process for the Oil and Natural Gas segment in June 2022 after selling only a small portion of its assets for US$45 million. Nonetheless, when we first knew Unit, the Oil and Natural Gas segment probably still had a value that was in the neighbourhood of the company’s estimation during the sales process, since oil prices were over US$80 per barrel in August 2022. Meanwhile, we also saw some estimates in the same month that it would cost at least US$400 million for someone to build the entire fleet of rigs that were in the Contract Drilling segment. As for the Mid-Stream segment, due to Superior Pipeline’s ownership structure and the cash flows it was producing, the value that accrued to Unit was not significant*.
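
For readers unfamiliar with the mechanics, here is a minimal sketch of how discounting at 10% per year works; the cash flows used are purely illustrative, not Unit’s actual projections:

```python
# A minimal sketch of discounting future cash flows at 10% per year; the
# cash-flow figures below are purely illustrative, not Unit's projections.
def present_value(cash_flows, rate=0.10):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

print(round(present_value([120, 110, 100, 90, 80]), 1))  # hypothetical US$m per year
```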

So here’s what we saw in Unit in August 2022 after putting everything together: The value of the company’s Oil and Natural Gas and Contract-Drilling segments (around US$765 million and US$400 million, respectively) dwarfed its enterprise value of US$470 million.
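
Putting the cited figures side by side makes the gap obvious; here is a rough sum-of-the-parts sketch, with all figures in US$ millions and taken from the text above:

```python
# A rough sum-of-the-parts comparison using the figures in the text (US$ millions).
market_cap = 560
net_cash = 88
enterprise_value = market_cap - net_cash   # ~472, since Unit held net cash

oil_and_gas = 765        # Unit's own estimate from the January 2022 sales process
contract_drilling = 400  # estimated cost to rebuild the 21-rig fleet
print(oil_and_gas + contract_drilling, "vs", enterprise_value)  # 1165 vs 472
```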

But there was a catch. The estimated intrinsic values of Unit’s two important segments – Oil and Natural Gas, and Contract Drilling – were based on oil prices in the months leading up to August 2022. This led Jeremy and me to attempt to expand our circle of competence: We wanted to better understand the drivers of oil prices. There were other motivations. First, Warren Buffett was investing tens of billions of dollars in the shares of oil & gas companies such as Occidental Petroleum and Chevron in the first half of 2022. Second, we also came across articles and podcasts from oil & gas investors discussing the supply-and-demand dynamics in the oil market that could lead to sustained high prices for the energy commodity. So, we started digging into the history of oil prices and what influences them.

Here’s a brief history on major declines in the price of WTI Crude over the past four decades:

  • 1980 – 1986: From around US$30 to US$10
  • 1990 – 1994: From around US$40 to less than US$14
  • 2008 – 2009: From around US$140 to around US$40
  • 2014 – 2016: From around US$110 to less than US$33
  • 2020: From around US$60 to -US$37 

Since oil is a commodity, it would be logical to think that the balance of oil’s supply and demand would heavily affect its price movement – when demand is lower than supply, prices would crash, and vice versa. The UK-headquartered BP, one of the largest oil-producing companies in the world, has a dataset on historical oil production and consumption going back to 1965. BP’s data is plotted in Figure 1 below and it shows that from 1981 onwards, the demand for oil (consumption) was higher than the supply of oil (production) in every year. What this means is that the price of oil has, surprisingly, experienced at least five major crashes over the past four decades despite its demand being higher than supply over the entire period.

Figure 1; Source: BP
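
For those curious how such a comparison could be reproduced, here is a sketch; the file name and column names are hypothetical stand-ins for however BP’s workbook is actually laid out:

```python
# A sketch of the comparison behind Figure 1. The file name and column names
# are hypothetical stand-ins for however BP's workbook is actually laid out.
import pandas as pd

df = pd.read_csv("bp_oil_1965_2021.csv")  # assumed columns: year, production, consumption
post_1981 = df[df["year"] >= 1981]
gap = post_1981["consumption"] - post_1981["production"]
print((gap > 0).all())  # True if demand exceeded supply in every year since 1981
```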

We shared our unexpected findings with our network of investor friends, which included Vision Capital’s Eugene Ng. He was intrigued and noticed that the U.S. Energy Information Administration (EIA) maintained its own database for long-term global oil consumption and production. After obtaining similar results from EIA’s data compared to what we got from BP, Eugene asked the EIA how it was possible for oil consumption to outweigh production for decades. The EIA responded and Eugene kindly shared the answers with us. It turns out that there could be errors within EIA’s data. The possible sources of errors come from incomplete accounting of Transfers and Backflows in oil balances: 

  • Transfers include the direct and indirect conversion of coal and natural gas to petroleum.
  • Backflows refer to double-counting of oil-streams in consumption. Backflows can happen if the data collection process does not properly account for recycled streams.

The EIA also gave an example of how a backflow could happen with the fuel additive, MTBE, or methyl tert-butyl ether (quote is lightly edited for clarity):

“The fuel additive MTBE is a useful example of both, as its most common feedstocks are methanol (usually from a non-petroleum fossil source) and iso-butylene, whose feedstock likely comes from feed that has already been accounted for as butane (or iso-butane) consumption. MTBE adds a further complexity in that it is often exported as a chemical and thus not tracked in the petroleum trade balance.”

Thanks to the EIA, we realised that BP’s historical data on the demand and supply of oil might contain errors and how they could have happened. But despite knowing this, Jeremy and I still could not tell what the actual demand-and-supply dynamics of oil were during the five major price crashes that happened from the 1980s to today**. We tried expanding our circle of competence to creep into the oil & gas industry, but were stopped in our tracks. As a result, we decided to pass on investing in Unit. 

I hope that this account of how Jeremy and I attempted to enlarge our circle of competence gives you ideas on how to improve your own investing process.

*In April 2018, Unit sold a 50% stake in Superior Pipeline to entities controlled by Partners Group – that’s how Partners Group’s aforementioned 50% control came about. When we first studied Unit in August 2022, either Unit or Partners Group could initiate a process after April 2023 to liquidate Superior Pipeline or sell it to a third party. If a liquidation or sale of Superior Pipeline were to happen, Partners Group would be entitled to an annualised return of 7% on its initial investment of US$300 million before Unit could receive any proceeds; as of 30 June 2022, a sum of US$354 million was required for Partners Group to achieve its return-goal. In the first half of 2022, the cash flow generated by Superior Pipeline was US$24 million, which meant that Unit’s Mid-Stream segment was on track to generate around US$50 million in cash flow for the whole of 2022. We figured that a sale of Superior Pipeline in April 2023, with around US$50 million in 2022 cash flow, would probably fetch a total amount in the neighbourhood of the US$354 million that Partners Group was entitled to. So if Superior Pipeline were sold, there would not be much left over for Unit after Partners Group had its piece.

**If you’re reading this and happen to have insight on the actual historical levels of production and consumption of oil during the past crashes, we would deeply appreciate it if you could get in touch with us. Thanks in advance!


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life.  I currently have no vested interest in any company mentioned. Holdings are subject to change at any time.

What The USA’s Largest Bank Thinks About The State Of The Country’s Economy In Q4 2023

Insights from JPMorgan Chase’s management on the health of American consumers and businesses in the fourth quarter of 2023.

JPMorgan Chase (NYSE: JPM) is currently the largest bank in the USA by total assets. Because of this status, JPMorgan is naturally able to feel the pulse of the country’s economy. The bank’s latest earnings conference call – for the fourth quarter of 2023 – was held two weeks ago and contained useful insights on the state of American consumers and businesses. The bottom-line is this: The US economy remains resilient, but there are significant risks that are causing JPMorgan’s management team to be cautious.  

What’s shown between the two horizontal lines below are quotes from JPMorgan’s management team that I picked up from the call.


1. The US economy and consumers remain resilient, and management’s base case is that consumer credit remains strong, although loan losses (a.k.a. the net charge-off rate) for credit cards are expected to be “<3.5%” in 2024, compared to around 2.5% for 2023

The U.S. economy continues to be resilient, with consumers still spending, and markets currently expect a soft landing…

…We continue to expect the 2024 card net charge-off rate to be below 3.5%, consistent with Investor Day guidance…

…In terms of consumer resilience, I made some comments about this on the press call. The way we see it, the consumers find all of the relevant metrics are now effectively normalized. And the question really in light of the fact that cash buffers are now also normal, but that, that means that consumers have been spending more than they’re taking in is how that spending behavior adjusts as we go into the new year, in a world where cash buffers are less comfortable than they were. So one can speculate about different trajectories that, that could take, but I do think it’s important to take a step back and remind ourselves that consistent with that soft landing view, just in the central case modeling, obviously, we always worry about the tail scenarios is a very strong labor market. And a very strong labor market means, all else equal, strong consumer credit. So that’s how we see the world.

2. Management thinks that inflation and interest rates may be higher than markets expect…

It is important to note that the economy is being fueled by large amounts of government deficit spending and past stimulus. There is also an ongoing need for increased spending due to the green economy, the restructuring of global supply chains, higher military spending and rising healthcare costs. This may lead inflation to be stickier and rates to be higher than markets expect.

3. …and they’re also cautious given the multitude of risks they see on the horizon

On top of this, there are a number of downside risks to watch. Quantitative tightening is draining over $900 billion of liquidity from the system annually, and we have never seen a full cycle of tightening. And the ongoing wars in Ukraine and the Middle East have the potential to disrupt energy and food markets, migration, and military and economic relationships, in addition to their dreadful human cost. These significant and somewhat unprecedented forces cause us to remain cautious.

4. Management is seeing a deterioration in the value of commercial real estate

The net reserve build was primarily driven by loan growth in card and the deterioration in the outlook related to commercial real estate valuations in the commercial bank.

5. Auto loan growth was strong

And in auto, originations were $9.9 billion, up 32% as we gained market share, while retaining strong margins.

6. Overall capital markets activity is picking up, but mergers & acquisitions (M&A) activity remains weak…

We are starting the year with a healthy pipeline, and we are encouraged by the level of capital markets activity, but announced M&A remains a headwind and the extent as well as the timing of capital markets normalization remains uncertain…

…Gross Investment Banking and Markets revenue of $924 million was up 32% year-on-year, primarily reflecting increased capital markets and M&A activity…

…So as you know, all else equal, this more dovish rate environment is, of course, supportive for capital markets. So if you go into the details a little bit, if you start with ECM [Equity Capital Markets], the recent rally in the equity markets helps. I think there have been some modest challenges with the 2023 IPO vintage in terms of post-launch performance, so that’s a little bit of a headwind at the margin in terms of converting the pipeline, but I’m not too concerned about that in general. So I would expect to see a rebound there.

In DCM [Debt Capital Markets], again all else equal, lower rates are clearly supportive. One of the nuances there is the distinction between the absolute level of rates and the rate of change. Sometimes you see corporates seeing and expecting lower rates and, therefore, waiting to refinance in the hope of even lower rates. So that can go both ways.

And then M&A is a slightly different dynamic. I think there are a couple of nuances there. One, as you obviously know, announced volume was lower this year, so that will be a headwind in reported revenues in 2024, all else equal. And of course, we are in an environment of M&A regulatory headwinds, as has been heavily discussed. But having said that, I think we’re seeing a bit of a pickup in deal flow, and I would expect the environment to be a bit more supportive.

7. …and appetite for loans among businesses is muted

C&I [commercial and industrial] loans were down 2%, reflecting lower revolver utilization and muted demand for new loans as clients remain cautious…

…We expect strong loan growth in card to continue but not at the same pace as 2023. Still, this should help offset some of the impact of lower rates. Outside of card, loan growth will likely remain muted. 

8. Management is not seeing any changes to their macro outlook for the US economy

So the weighted-average unemployment rate in the number is still 5.5%. We didn’t have any really big revisions in the macro outlook driving the numbers, and our skew remains as it has been: a little bit skewed to the downside.
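For context, banks set loan-loss reserves by probability-weighting several macroeconomic scenarios, and the 5.5% cited above is that kind of weighted average. The sketch below shows the mechanics with entirely hypothetical scenarios and weights (these are not JPMorgan’s actual assumptions); note how the downside skew management mentions pulls the weighted average well above the base case.

```python
# A minimal sketch of how a probability-weighted unemployment rate like
# the 5.5% cited above can arise from multiple macro scenarios used in
# loan-loss reserving. Scenario values and weights are hypothetical.

scenarios = {
    # name: (unemployment rate %, probability weight)
    "upside":           (4.0, 0.10),
    "base case":        (4.5, 0.50),
    "relative adverse": (6.5, 0.25),
    "extreme adverse":  (8.5, 0.15),
}

# Sanity check: the probability weights should sum to 1.
assert abs(sum(w for _, w in scenarios.values()) - 1.0) < 1e-9

weighted_avg = sum(rate * weight for rate, weight in scenarios.values())
print(f"Weighted-average unemployment rate: {weighted_avg:.2f}%")  # ~5.55%
```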

9. Management’s outlook for 2024 includes six rate cuts by the Fed, but that outlook is taken from financial market data (the forward curve) rather than from management’s own forecast

[Question] Coming back to your outlook and forecast for net interest income for the upcoming year with the 6 Fed funds rate cuts that you guys are assuming. Can you give us a little insight into why you’re assuming 6 cuts? 

[Answer] I wish the answer were more interesting, but it’s just our practice. We just always use the forward curve for our outlook, and that’s what’s in there.
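For those curious about what “using the forward curve” means in practice, the sketch below shows how an implied policy rate can be read off a fed funds futures quote and translated into a count of 25-basis-point cuts. The futures quote here is illustrative rather than an actual market price, though markets at the time did price in roughly six cuts for 2024.

```python
# A minimal sketch of the "forward curve" practice management describes:
# reading an implied policy rate off a fed funds futures quote and
# counting the 25bp cuts it prices in. The futures price below is
# illustrative, not an actual market quote.

current_effective_rate = 5.33        # % (fed funds effective rate, early 2024)
dec_2024_futures_price = 96.17       # illustrative CME fed funds futures quote

# Fed funds futures are quoted as 100 minus the implied rate.
implied_dec_rate = 100 - dec_2024_futures_price        # = 3.83%
implied_cuts = (current_effective_rate - implied_dec_rate) / 0.25

print(f"Implied Dec-2024 rate: {implied_dec_rate:.2f}%")
print(f"Cuts priced in: {implied_cuts:.1f} x 25bp")    # ~6 cuts
```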


Disclaimer: The Good Investors is the personal investing blog of two simple guys who are passionate about educating Singaporeans about stock market investing. By using this Site, you specifically agree that none of the information provided constitutes financial, investment, or other professional advice. It is only intended to provide education. Speak with a professional before making important decisions about your money, your professional life, or even your personal life. I don’t have a vested interest in any company mentioned. Holdings are subject to change at any time.