I agree that growth will inevitably slow down, but that doesn’t automatically mean this couldn’t become a machine generating steady cash flow for decades at a huge profit margin (lower than today’s, yes, but 50%+ still seems easily achievable). The open question is when growth slows: the stock price often takes a haircut at the point where growth forecasts give way to steady cash generation, but is that next quarter, next year, or later…
Let me ask as a foolish layman: can this chip demand suddenly (if temporarily) flatten like a pancake? AI as a whole is not like a railway or a bridge, which only starts generating returns when it is finished. If construction of a railway is halted before the last piece of rail is laid, the investment is unproductive; even against headwinds, such a project is either seen through to the bitter end or it fails completely. Those developing and leveraging AI, by contrast, can surely profit from the mega-investments made so far even if investment in hardware were put on hold.
It must be remembered that the contents of server halls are consumables. Chip performance is currently improving so fast that, on electricity consumption alone, the equipment turns into so-called e-waste within roughly 4-6 years: by renewing the devices you get far more computing power from the same space, cooling, and electricity. So if the future is one where every megacap runs huge data centers grinding AI tasks, those centers will have to be renewed at regular intervals, and the demand for chips will never truly end. It will fluctuate, though. We are currently in a big investment boom, so I can well imagine a trough within 2-4 years once the currently purchased equipment has first been “used up,” followed by an urgent need to renew again. The bigger the one-time splurge on new equipment, the bumpier the subsequent renewal cycles become. But I don’t see a world where server hardware for AI use is suddenly no longer needed.
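The refresh argument above can be sketched with a back-of-envelope calculation. The numbers here are illustrative assumptions for the sake of the argument (roughly 2x performance per watt per generation, on a roughly two-year cadence), not vendor data:

```python
# Back-of-envelope sketch of the data center refresh argument.
# Assumed (illustrative) numbers: each accelerator generation delivers
# ~2x performance per watt, on a ~2-year release cadence.
def perf_per_watt_gain(fleet_age_years, gen_cadence_years=2.0, gain_per_gen=2.0):
    """Relative perf/W of brand-new hardware vs. gear bought `fleet_age_years` ago."""
    generations = fleet_age_years / gen_cadence_years
    return gain_per_gen ** generations

# Under these assumptions, the same power budget and floor space buys
# 4-8x the compute after 4-6 years, which is roughly when keeping the
# old gear running stops making economic sense.
for age in (2, 4, 6):
    print(f"{age}-year-old fleet: new gear gives {perf_per_watt_gain(age):.0f}x perf/W")
```

The exact multiplier and cadence are guesses, but the shape of the conclusion holds for any steady improvement rate: the older the fleet, the larger the compounding gap, which is what drives the recurring replacement demand.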
In this respect the situation is very different from the mining boom: demand for mining cards was killed by Ethereum’s transition to proof-of-stake, and long before that by Bitcoin mining’s shift to highly specialized ASIC chips, a market NVIDIA did not participate in. The entire demand disappeared pretty much all at once.
On the AI side, I argue that the market is so big that if chips need to start specializing, NVIDIA will simply provide versions for different purposes rather than “give away” that market to specialized-chip makers. And as long as the giants run their own data centers, they will need better hardware at regular intervals. The only way to lose these sales is to fall behind in the hamster wheel of development and watch a competitor make a better product and take the market. NVIDIA’s lead is so strong that I don’t see that happening in the very near future.
Demand might slump if these investments fail to generate returns; that is the main fear here. On the other hand, the entire development is geared towards replacing a large part of human work with various AI solutions. If the development falls into the wrong hands, society might be forced to regulate or restrict it by imposing strict oversight rules, which would certainly slow things down. Quantum technology will also start pulling funds in its direction at some point, though it is still in its infancy both in functionality and in investment volume.
I recall how one of the world’s largest network companies, a Finnish one, back then drew fantastically magnificent PowerPoints showing a data tornado and all sorts of other nice things, depicting how gigantically bit streams would grow in the future. Shareholders were supposed to receive manna from heaven in such quantities that even a bulldozer couldn’t push it all into storage.
It wasn’t many years later that the PowerPoints instead spoke of the challenges of price erosion and margins under pressure, which in turn led to staff cuts.
In other words, it doesn’t matter how much of that data center equipment needs to be renewed. What matters is how much profit is gained from that replacement. And that is then a bit harder to estimate.
As long as NVIDIA is practically the only supplier and its market share is what it is, things are humming along. But if competitors start eating into that share, margins will get shredded…
I recall a similar discussion on this forum 18-24 months ago (also on my part). The view back then was that a company like Nvidia, supplying hardware to a narrow business segment, could never become the world’s most valuable company… I remember how uncomfortable it felt even for me to work out possible market valuations from the Hopper demand outlook and net margin percentages, and to conclude that they couldn’t possibly be real. Well, we know what happened…
Fortunately, Nvidia is ultimately quite easy to analyze. Data points on customers’ investment intentions and suppliers’ capacity outlooks are available 12 months ahead, and based on them, it’s quite easy to draw conclusions about the future with relatively high confidence.
As for yesterday’s results, it’s funny to say that a beat-and-raise of over a billion already felt somewhat lukewarm. By objective metrics the stock once again became cheaper, both looking backward and forward.
A small amount of uncertainty comes mainly from the fact that, based on the conference call, tariffs and export controls appear to be in no way factored into the company’s own outlook; in that regard Nvidia, too, is drifting like a wood chip on the waves.
Nvidia’s mishaps with RTX 50-series graphics card chips continue… It’s impossible, and perhaps inadvisable, to draw larger-scale conclusions from this, but it is always concerning if Nvidia, known for its good quality, has let its quality assurance slip in both design and production. In this case, according to Nvidia, the issue concerned 0.5% of produced RTX 5090/5090D and RTX 5070 Ti cards. Claims have appeared online that RTX 5080 cards were also affected, which Nvidia did not mention.
The focus in the market is increasingly shifting to the inference side, and there’s already a lot of crowding in that market.
For example, Meta is increasingly relying on AMD because the TCO is simply better.
Also, for instance, Mistral’s LeChat runs inference on Cerebras’ specialized chips.
Groq is also growing rapidly with Saudi money:
https://www.datacenterdynamics.com/en/news/groq-secures-15bn-from-saudi-arabia-to-expand-ai-inference-infrastructure-in-the-region/
AMD just got a pretty significant inference boost in SGLang’s new version:
Even though reasoning models can take, say, up to 100x the compute of other models, these reasoning calls are a very small share of overall “prediction” demand.
It’s a bit like the world’s logistics tasks: only a tiny portion requires a rocket to deliver the goods; most are handled by trucks and other vehicles (say, for example, Gemini Flash 2.0, which has quite strong price/performance and is fast too).
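The rockets-vs-trucks point can be made concrete with a small blended-average sketch. All numbers here are assumptions for illustration (a 100x cost multiplier and a guessed 2% reasoning share), not measured workload data:

```python
# Illustrative sketch: average compute per request when a small share of
# requests are expensive reasoning-model calls. All numbers are assumptions.
def blended_compute(reasoning_share, reasoning_multiplier=100.0):
    """Average compute per request, with a basic call costing 1 unit and a
    reasoning call costing `reasoning_multiplier` units."""
    basic_share = 1.0 - reasoning_share
    return basic_share * 1.0 + reasoning_share * reasoning_multiplier

# Even at 100x per call, 2% reasoning traffic only roughly triples the
# average compute per request: 0.98 * 1 + 0.02 * 100 ≈ 2.98.
print(round(blended_compute(0.02), 2))  # 2.98
```

So the headline multiplier matters far less than the traffic mix: as long as reasoning calls stay a small fraction of requests, the bulk of inference demand is still shaped by the cheap “truck” models.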
Ultimately, this is just heavy-handed politicking. If its home country’s policies start restricting a company’s operating environment too much, then it’s not impossible even for Nvidia to change its headquarters address.
Here’s Investing visuals’ visual tweet on how money flows in Nvidia.
https://x.com/ZeevyInvesting/status/1899466339514814801


Nvidia is preparing to unveil its next-generation AI platforms, GB300 and Rubin, at the GTC 2025 event. The company aims to lower the costs of AI, which could accelerate its broader adoption.
Nvidia is still down about 10 percent since the beginning of the year, despite positive signals in the market. Demand outlooks from Foxconn and TSMC have strengthened, and production of the Blackwell platform has started rapidly.
Investments by major technology companies in data centers remain strong, and various countries are planning multi-billion dollar investments in infrastructure, e.g., the United States is increasing its investments from 100 billion to 500 billion dollars.
GTC is indeed on Tuesday already!
You can start with the Acquired podcast’s GTC preview at 5:00 PM:
And here you can watch the keynote starting at 7:00 PM:
There must be good news about Rubin. Such news would be, for example:
- Production starting already in 2025
- Shipments by Q2 2026 at the latest
A possible teaser about post-Rubin roadmaps would also be good. Rubin was already announced in 06/24, so under the yearly cadence, information about the 27/28 products should gradually start coming out.
Here is a tweet with Morgan Stanley’s preliminary comments regarding the company’s GTC event, where Jensen Huang will give the opening speech.
According to the tweet, the event will likely feature the Blackwell Ultra chip, which offers better performance, larger memory, and higher power consumption, benefiting, among others, suppliers of power systems and cooling solutions.
Nvidia, as expected, showcased a range of new AI solutions and hardware at its much-discussed conference in the United States. According to the man in the leather jacket, AI is moving into a new phase where systems will not only generate content but also be capable of more complex reasoning and autonomous operation.
The company unveiled its next-generation processor technology, which is expected to significantly improve AI computing power. Additionally, it announced new networking technology that promises to reduce energy consumption in data centers.
The importance of robotics emerged in the speech as a solution to future labor shortages. The company announced collaborations with several firms to develop intelligent robotic systems.
On the stock market, the company’s share price was on a downward trend, even though the innovations presented at the conference indicated, according to the article, its aim to maintain its position at the forefront of AI development.
An anticipated event, but there was less coverage of it in the news feed than I expected. It’s a foreign world to me, which is why I checked the news feed. I didn’t come across any mega-hyper boosts about this event on X.
https://www.investors.com/news/technology/nvidia-stock-gtc-conference-march-2025-live-coverage/
https://x.com/danielnewmanUV/status/1902050060650639635


https://x.com/danielnewmanUV/status/1902049686673912176

https://x.com/StockMKTNewz/status/1902074068582723717

Here’s Investing visuals’ fresh look at Nvidia.
Good article about NVIDIA’s research team, which is surprisingly small: 300 people!
The philosophy is to let things mature patiently, even for years, if they are believed in. And to fail often, but quickly.
My thoughts on Nvidia after GTC, in a nutshell:

- Data center modernization continues and accelerates
- It’s natural for the gross margin to temporarily dip while new production (Blackwell) is ramped up
- With this image, Jensen aims to tell “the whole world” that Nvidia strives to find the sweet spot between raw computing power and AI inference in applications. In other words, Jensen knows full well that the AI application side must keep evolving; otherwise the famous AI bubble forms, which I still don’t see myself.
- This is a basic IT slide: if systems are left un-updated, costs eventually spiral out of control relative to competitors.
In my opinion, we are currently in a normal development phase of the IT world, where Nvidia is paving the way because its competitors somehow missed the start and stayed in the starting blocks. But the need for data center updates has not disappeared; that task is now largely being handled by Nvidia.
I picked the images from the video found via the link: