- Data center revenue hit $35.6B in Q4 2025, up 93% YoY, driven by Blackwell GPU demand
- Gross margin compressed 2.1 percentage points to 73.5% as Blackwell ramp costs hit
- FY2026 guidance of $43B in Q1 revenue implies continued hyperscaler capex acceleration
- At 28x forward earnings, Nvidia remains expensive but justifiably so for a company growing 80%+ YoY
Section 1 — The Numbers That Shocked Wall Street
When Nvidia reported Q4 2025 results in February, even the most bullish analysts were caught flat-footed. Total revenue came in at $39.3 billion, beating consensus estimates of $37.1 billion by a full 5.9%. This wasn't a small beat — it was a statement. For context, Nvidia's entire annual revenue in fiscal 2023 was $26.9 billion. The company is now generating more than that every single quarter.
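The beat arithmetic is worth spelling out. A quick sketch using only the figures quoted above (variable names are mine; all inputs are the article's):

```python
# Q4 FY2025 revenue beat, using the figures quoted above (in $B).
reported = 39.3
consensus = 37.1
beat_pct = (reported - consensus) / consensus * 100
print(f"Beat: {beat_pct:.1f}%")  # roughly 5.9%

# Quarterly run-rate vs. all of fiscal 2023 ($26.9B for the full year).
fy2023_annual = 26.9
print(reported > fy2023_annual)  # True: one quarter now exceeds FY2023
```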
The headline data center number of $35.6 billion represents 90.6% of total company revenue, a concentration that would concern most investors if the demand trajectory weren't so compelling. Hyperscalers — Microsoft, Google, Amazon, and Meta — collectively spent over $220 billion on capital expenditure in calendar 2025, and a disproportionate share of that went directly to Nvidia's H100 and Blackwell GPU clusters. The Blackwell architecture, which launched commercially in mid-2025, is now shipping at scale with ASPs (average selling prices) ranging from $30,000 to $40,000 per GPU depending on configuration.
Gaming revenue, once Nvidia's core business, contributed a comparatively modest $2.5 billion, up just 11% YoY. This segment is now almost an afterthought for investors, though it provides a relatively stable baseline and benefits from the same CUDA ecosystem moat that makes Nvidia's AI business defensible.
EPS of $0.89 (GAAP) and $0.93 (non-GAAP) both beat estimates, though the GAAP figure was reduced by a $1.2 billion charge related to Blackwell manufacturing ramp costs that are now largely behind the company.

Section 2 — Blackwell vs. Hopper: The Architecture Transition
The critical question for Nvidia investors heading into 2026 is whether the Blackwell transition represents a one-time inventory digestion risk or a clean ramp. Based on Q4 commentary from CEO Jensen Huang, the answer is increasingly the latter.
Blackwell GPU shipments accelerated in Q4 to approximately 500,000 units, up from roughly 200,000 in Q3. The B200 chip delivers roughly 4x the inference performance of the H100 at similar power envelopes, which matters enormously for hyperscalers optimizing for cost-per-token in large language model deployments. Microsoft's Azure AI division confirmed in January that Blackwell clusters are already handling production workloads, with H100 clusters being reserved for training runs where legacy software dependencies make migration impractical.
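Combining the Q4 shipment estimate here with the ASP range quoted in Section 1 gives a rough sense of Blackwell's contribution to the $35.6 billion data center total. All inputs are the article's figures; the midpoint split is my illustration, not a disclosed breakdown:

```python
# Implied Q4 Blackwell revenue from units x ASP (article figures).
units = 500_000
asp_low, asp_high = 30_000, 40_000          # $/GPU, per Section 1
rev_low = units * asp_low / 1e9             # ~$15B
rev_high = units * asp_high / 1e9           # ~$20B
dc_total = 35.6                             # total data center revenue, $B
share_mid = (rev_low + rev_high) / 2 / dc_total
print(f"Blackwell: ${rev_low:.0f}B-${rev_high:.0f}B, ~{share_mid:.0%} of data center revenue")
```

On these inputs, Blackwell already accounts for roughly half of data center revenue, with Hopper-generation products making up the balance.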
The gross margin compression from 75.6% to 73.5% reflects real economics: Blackwell uses CoWoS-L (chip-on-wafer-on-substrate, large) packaging from TSMC, which is more expensive than the CoWoS-S packaging used for Hopper. TSMC has been aggressively expanding CoWoS capacity, and Nvidia's management guided for margins to recover to the 74.5-75% range by Q2 FY2026 as yields improve and TSMC passes through scale economies.
The competitive landscape deserves mention. AMD's MI325X has gained meaningful traction at Microsoft and Meta, capturing an estimated 8-12% of AI accelerator shipments in Q4 2025. Intel's Gaudi 3 remains a rounding error at roughly 2% share. Custom silicon — Google's TPU v5, Amazon's Trainium 2, and Meta's MTIA — is growing but serves primarily captive workloads. Nvidia's CUDA ecosystem, with over 4 million registered developers and 3,000+ GPU-accelerated applications, remains the dominant switching cost.
| Accelerator | Peak FP8 (TFLOPS) | HBM3e Memory | 2025 Market Share |
|---|---|---|---|
| Nvidia B200 | 9,000 | 192GB | ~78% |
| AMD MI325X | 2,610 | 288GB | ~11% |
| Google TPU v5 | N/A (internal) | N/A | ~6% (captive) |
| Intel Gaudi 3 | 1,835 | 128GB | ~2% |
Section 3 — The Demand Sustainability Question
The single largest risk to Nvidia's thesis is hyperscaler capital expenditure discipline. In 2000-2001, telecom companies cut capex by 40-60% within 18 months of peak spending. If AI ROI disappointments cause Microsoft, Google, or Amazon to pause GPU orders, Nvidia's revenue could decline 30-40% before recovering. Monitor hyperscaler earnings calls for any language shifts around "optimizing" or "rationalizing" AI spend.
The bear case on Nvidia centers on demand durability. At $220 billion in combined hyperscaler capex, the question isn't whether these companies are spending — it's whether the spending is sustainable and whether Nvidia captures the same share going forward.
Three data points suggest demand remains robust through at least mid-2026. First, Microsoft confirmed an $80 billion AI capex budget for calendar 2026, with 60% allocated to U.S. facilities. Second, Google's Sundar Pichai told analysts in January that "underspending on AI infrastructure is the greater risk than overspending." Third, the sovereign AI buildout — government-directed GPU clusters in UAE, Saudi Arabia, France, Japan, and India — represents an entirely new demand vector that was essentially zero in 2024.
The enterprise AI adoption curve is also still early. IDC estimates that enterprise AI software spending will grow from $67 billion in 2025 to $227 billion by 2030. Every dollar of enterprise AI software deployment ultimately drives hardware spending, though the correlation is lagged and not 1:1.
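The IDC forecast implies a compound growth rate that can be backed out directly. The endpoint figures are from the text; the CAGR is derived:

```python
# Implied CAGR of enterprise AI software spend, 2025 -> 2030 (IDC figures, $B).
start, end, years = 67.0, 227.0, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 27.6% per year
```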
Networking revenue, often overlooked, was $3.1 billion in Q4 — representing Nvidia's InfiniBand and Ethernet switch businesses. As GPU cluster sizes scale to 100,000+ units for frontier model training runs, networking becomes a bottleneck and a profit center. Nvidia's NVLink 4 interconnect, exclusive to its own chips, creates an additional lock-in that AMD and Intel cannot easily replicate.
Section 4 — Investment Framework
Valuing Nvidia requires accepting that traditional P/E multiples are inadequate for a company at this growth rate. At $130 per share (as of early March 2026), Nvidia trades at approximately 28x consensus FY2027 earnings of $4.65 per share. The PEG ratio — P/E divided by earnings growth rate — sits at roughly 0.35, which is actually cheap for a company growing earnings at 80%+ YoY.
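The PEG arithmetic referenced above, spelled out. Share price, EPS estimate, and growth rate are the article's; the helper names are mine:

```python
# PEG ratio: forward P/E divided by expected earnings growth rate (in %).
price = 130.0                    # share price, early March 2026
fy2027_eps = 4.65                # consensus FY2027 EPS
forward_pe = price / fy2027_eps  # ~28x
growth_pct = 80.0                # 80%+ YoY EPS growth
peg = forward_pe / growth_pct    # ~0.35
print(f"Forward P/E {forward_pe:.1f}x, PEG {peg:.2f}")
```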
The more useful framework is a discounted cash flow analysis anchored to three scenarios. In the base case, data center revenue grows 50% in FY2026 and 25% in FY2027, with margins recovering to 76%. This produces a fair value of approximately $150-160 per share. In the bull case, which assumes continued hyperscaler capex acceleration and 65% data center growth in FY2026, fair value reaches $190-200. In the bear case — hyperscaler capex pause, AMD share gains to 20%, and margin compression — fair value falls to $85-95.
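The three cases can be organized as a small lookup. The fair-value ranges are the ones stated above; the probability weights are purely illustrative placeholders, not part of the article's framework:

```python
# Scenario framework from the text: fair-value ranges per case ($/share).
# Probability weights are illustrative assumptions only.
scenarios = {
    "bear": {"fair_value": (85, 95),   "weight": 0.20},
    "base": {"fair_value": (150, 160), "weight": 0.55},
    "bull": {"fair_value": (190, 200), "weight": 0.25},
}
expected = sum(sum(s["fair_value"]) / 2 * s["weight"] for s in scenarios.values())
print(f"Probability-weighted midpoint: ${expected:.0f}")
```

Under these (assumed) weights, the probability-weighted midpoint sits modestly above the early-March share price, consistent with the "core holding, not a concentrated bet" stance below.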
For position sizing, we recommend treating Nvidia as a 3-7% core holding in a diversified tech portfolio rather than a concentrated bet. The upside case is compelling, but the stock's 30-day realized volatility of 42% means drawdowns of 20-30% are normal and should be expected.
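To put the 42% realized volatility in context, annualized volatility can be scaled to shorter horizons under the standard square-root-of-time assumption. The volatility figure is the article's; this is a rough heuristic, not a drawdown forecast:

```python
import math

# Scale 42% annualized realized volatility to monthly and daily one-sigma moves.
annual_vol = 0.42
monthly_sigma = annual_vol / math.sqrt(12)   # ~12% one-sigma monthly move
daily_sigma = annual_vol / math.sqrt(252)    # ~2.6% one-sigma daily move
print(f"monthly ~{monthly_sigma:.1%}, daily ~{daily_sigma:.1%}")
```

At a roughly 12% monthly sigma, a 20-30% drawdown is only about a two-sigma cumulative move over a couple of months, which is why the text treats such drawdowns as routine rather than thesis-breaking.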
The optionality in Nvidia's software business — CUDA-X libraries, NVIDIA AI Enterprise subscriptions at $4,500 per GPU per year, and the emerging DGX Cloud managed service — remains underappreciated by most sell-side models. If Nvidia captures even 15% of the AI software stack by revenue, it adds $15-20 billion in high-margin recurring revenue to the model by 2028.
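The subscription math can be inverted to see what install base the $15-20 billion recurring-revenue figure assumes. The per-GPU price and revenue range are the article's; the implied GPU count is derived:

```python
# GPUs under subscription implied by the recurring-revenue figure.
price_per_gpu = 4_500                    # NVIDIA AI Enterprise, $/GPU/year (per the text)
target_low, target_high = 15e9, 20e9     # $15B-$20B by 2028
gpus_low = target_low / price_per_gpu    # ~3.3M GPUs
gpus_high = target_high / price_per_gpu  # ~4.4M GPUs
print(f"Implied install base: {gpus_low/1e6:.1f}M-{gpus_high/1e6:.1f}M GPUs")
```

Note this assumes the entire figure comes from per-GPU subscriptions; to the extent DGX Cloud or CUDA-X licensing contributes, the implied attach base would be smaller.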
Verdict
Nvidia remains the most structurally advantaged company in the AI infrastructure buildout. Q4 2025 results confirmed that Blackwell demand exceeds supply, margins are temporarily compressed but structurally intact, and the competitive moat — CUDA, NVLink, and developer ecosystem — is wider than bears acknowledge. At 28x FY2027 earnings for 80%+ EPS growth, the stock is not cheap but is not egregiously expensive. The primary risk is hyperscaler capex discipline. Maintain core position; add on 15%+ pullbacks tied to macro rather than fundamental deterioration.
Data as of March 2026. Not financial advice.
— iBuidl Research Team