
AI Infrastructure in 2026: Where the Capex Cycle Still Has Pricing Power

AI infrastructure spending is still elevated in 2026, but the best investment opportunities are concentrated in bottlenecks that preserve pricing power across power, cooling, networking, and advanced packaging. The capex cycle is maturing rather than ending, which makes selectivity and financial discipline more important than broad AI exposure.

Updated April 7, 2026
Time horizon: 0-5 years | Confidence: 8/10
Tags: technology (Very Bullish) | industrials (Bullish) | utilities (Bullish) | growth (Bullish) | US equity (Bullish)
AI infrastructure remains investable in 2026, but the best opportunities are no longer defined by generic accelerator scarcity alone. The more durable pricing power is shifting toward the infrastructure bottlenecks that determine whether AI capacity can actually be energized, cooled, connected, and productively used, as hyperscaler spending stays high but investor scrutiny increases. The capex cycle is maturing, not ending.

Microsoft said in January 2025 that it was on track to invest approximately $80 billion in AI-enabled datacenters in fiscal 2025. Alphabet said on its April 2025 first-quarter earnings call that it expected about $75 billion of capital expenditures in 2025, largely for technical infrastructure. Meta, in its first-quarter 2025 results, raised planned 2025 capital expenditures to $64 billion to $72 billion. Amazon framed the picture somewhat differently: in its fourth-quarter 2024 earnings discussion, management said the quarter's $26.3 billion of property-and-equipment purchases was a reasonably representative quarterly run rate for 2025, implying annualized spending above $100 billion, with the majority aimed at AWS and AI infrastructure. These figures are not perfectly like-for-like, but together they show that the largest cloud platforms are still deploying extraordinary capital into AI capacity.

That matters because the central investment question has changed. In the first phase of the AI buildout, investors were rewarded for owning the obvious chokepoint in leading accelerators. By 2026, top-end compute still matters, but scarcity has broadened into the surrounding system. Advanced packaging determines how much leading-edge silicon can ship. Optical interconnect and switching determine whether clusters can scale efficiently. Power equipment, substation access, and utility interconnection determine whether new capacity can be turned on. Liquid cooling and thermal management determine whether high-density racks can run at target utilization.
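The capex figures above can be sanity-checked with simple arithmetic. The sketch below uses only the numbers cited in this piece; the Meta midpoint and the combined total are illustrative, not company guidance.

```python
# Back-of-envelope check on the hyperscaler capex figures cited above.
# All inputs come from the article; midpoints and the combined total
# are illustrative only.

amazon_q4_2024 = 26.3                      # $B, Q4 2024 property-and-equipment purchases
amazon_annualized = amazon_q4_2024 * 4     # management's "representative run rate", annualized

capex_2025 = {
    "Microsoft": 80.0,                     # ~$80B FY2025 target
    "Alphabet": 75.0,                      # ~$75B 2025 guidance
    "Meta": (64.0 + 72.0) / 2,             # midpoint of the $64B-$72B range
    "Amazon": amazon_annualized,           # implied annualized run rate
}

total = sum(capex_2025.values())
print(f"Amazon implied annual run rate: ${amazon_annualized:.1f}B")
print(f"Combined 2025 hyperscaler capex (illustrative): ${total:.0f}B")
```

The Amazon line confirms the article's ">$100 billion" framing (4 x $26.3B is roughly $105B), and the combined figure lands above $300 billion across the four platforms.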
Pricing power increasingly sits with the suppliers that keep those systems working, not with every company that can claim AI exposure.

Power is the clearest non-consensus bottleneck. The IEA's Energy and AI work argues that global electricity demand from data centers is set to rise sharply by 2030 as AI adoption expands. More concretely for U.S. investors, Lawrence Berkeley National Laboratory reported in January 2025 that data centers consumed about 4.4% of total U.S. electricity in 2023 and could reach 6.7% to 12.0% by 2028, with total data-center electricity use rising from 176 TWh in 2023 to 325 to 580 TWh by 2028. That means the gating variable is increasingly the ability to secure grid access, transformers, switchgear, backup power, and transmission capacity quickly enough to support AI loads. The investment implication is that selected electrical-equipment vendors, grid-enablement suppliers, and utilities with advantaged territories or interconnection pipelines may enjoy more durable bargaining power than owners of generic data-center shells.

Cooling is the second structural bottleneck. AI servers draw far more power per rack than legacy enterprise workloads, and the upgrade path from air cooling to liquid cooling is becoming an economic requirement rather than a premium feature. Microsoft itself highlighted electricity and liquid cooling among the enabling inputs required for its datacenter expansion. As rack density rises, cooling stops being a support function and becomes a determinant of usable capacity, uptime, and customer ROI. That pushes pricing power toward thermal-management specialists, liquid-cooling providers, and infrastructure vendors able to retrofit or design facilities for higher-density AI deployments.

Networking and optical interconnect remain another durable pocket of pricing power. The AI buildout is not only a compute story; it is also a bandwidth and latency story.
Large model training clusters and broader inference deployments both require fast movement of data across increasingly dense systems. Independent industry work from Cignal AI has shown optical-component revenue reaching new highs as AI data-center demand accelerated, while carrier and enterprise surveys from Heavy Reading pointed to strong demand for 800G upgrades and the roadmap beyond. The practical point for investors is that differentiated optics, switching, and interconnect suppliers can keep pricing power longer than lower-value hardware assemblers, because network performance is directly tied to cluster efficiency and utilization.

Advanced packaging is still tighter than many investors assume. TSMC has repeatedly discussed very strong AI-related demand and aggressive CoWoS expansion in its earnings materials, which reframes the bottleneck away from wafers alone. What matters is not simply whether leading-edge chips can be fabricated, but whether packaging, memory integration, and system assembly can keep pace with AI accelerator demand. If CoWoS and related advanced-packaging capacity expands faster than expected, pricing power near the silicon layer will normalize sooner. But until that happens, the packaging ecosystem remains one of the more credible places where scarcity can persist beyond the initial GPU land grab.

The transition from training to inference also reshapes the opportunity set. Training still anchors large capital commitments, but inference broadens spending across more customers, more workloads, and more deployment environments. That shifts the emphasis toward infrastructure that improves throughput, energy efficiency, memory bandwidth, and latency at scale. The next leg of AI capex is not just more of the same accelerator shortage; it is a broader optimization problem across power, thermals, interconnect, and efficient system design.
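The power constraint discussed earlier is quantifiable. Using the LBNL projections cited above (176 TWh in 2023 rising to 325-580 TWh by 2028), the implied compound annual growth rate of U.S. data-center electricity use can be computed directly:

```python
# Implied compound annual growth in U.S. data-center electricity use,
# from the LBNL figures cited in this piece (176 TWh in 2023,
# 325-580 TWh projected by 2028).

base_2023 = 176.0                 # TWh, 2023 actual
low_2028, high_2028 = 325.0, 580.0  # TWh, 2028 projection range
years = 5

cagr_low = (low_2028 / base_2023) ** (1 / years) - 1
cagr_high = (high_2028 / base_2023) ** (1 / years) - 1

print(f"Implied 2023-2028 CAGR: {cagr_low:.1%} to {cagr_high:.1%}")
```

Even the low end implies double-digit annual load growth, which is why grid access, transformers, and switchgear, rather than land or shells, are framed as the gating variables.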
The winners are likely to include second-order enablers that help customers turn AI infrastructure into commercially useful output, not only the most obvious chip vendors.

A more institutional way to express this thesis is through selectivity, not theme ownership. Investors should look for evidence that a company controls a true bottleneck rather than simply participating in a hot capex cycle. That means watching gross-margin resilience, backlog quality, cancellation rates, lead times, utilization, customer concentration, and whether revenue growth is translating into free-cash-flow leverage rather than being consumed by endless capacity additions. A supplier with multi-quarter backlog growth, stable pricing, disciplined capex, and visible utilization is more attractive than a company with headline AI exposure but deteriorating margins or orders driven by temporary shortages. In this part of the cycle, pricing power must show up in the financials, not only in industry narratives.

Where is pricing power most likely to fade first? GPU rental and other forms of generic capacity are the most vulnerable once supply catches up. Commodity data-center shells without power or interconnection advantages also look exposed. So do suppliers whose economics depend on temporary scarcity rather than embedded technical differentiation. If capacity becomes easier to source and customers become more price-sensitive, those layers should see margin compression well before the infrastructure bottlenecks tied to power, cooling, advanced packaging, or high-performance interconnect.

The monetization question still matters because capex can outrun end-demand for long periods. For the thesis to hold, investors need to see sustained inference demand, improving utilization, and evidence that enterprise AI workloads are moving from pilot programs into revenue-bearing production.
If model efficiency improves so quickly that customers need materially less hardware per workload, or if AI revenue ramps lag badly enough that hyperscalers moderate spending after the first buildout wave, the pricing-power map will narrow.

The key falsifiers are layer-specific. The power thesis weakens if utilities and regulators accelerate interconnection and power-delivery timelines enough to relieve scarcity faster than expected. The cooling thesis weakens if rack-density demands plateau or if standardized solutions compress vendor differentiation. The optics thesis weakens if 800G and higher-speed supply normalizes quickly and competitive intensity erodes pricing. The advanced-packaging thesis weakens if CoWoS expansion outpaces end demand and lead times fall sharply. The broader capex thesis weakens if hyperscalers pivot from aggressive buildout to digestion, or if inference intensity and enterprise adoption disappoint.

The bottom line is that AI infrastructure remains a credible 2026 investment theme, but only in a narrower and more disciplined form than the early-cycle trade. The strongest opportunities are where real bottlenecks still govern deployment economics: power equipment and grid access, high-density cooling, optical networking and interconnect, advanced packaging, and the efficiency layers that support inference at scale. That is where the capex cycle still has pricing power.
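The selectivity checklist described above (backlog growth, cancellation rates, margin resilience, free-cash-flow conversion) can be expressed as a minimal, illustrative screen. All field names, thresholds, and example companies below are hypothetical, chosen only to show the shape of such a filter; this is a sketch, not investment advice or a real data schema.

```python
# Illustrative "bottleneck supplier" screen following the criteria in
# the piece. All fields, thresholds, and companies are hypothetical.

from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    gross_margin_trend: float   # pp change in gross margin, trailing year
    backlog_growth: float       # year-over-year backlog growth
    cancellation_rate: float    # share of orders cancelled (backlog quality)
    fcf_conversion: float       # free cash flow / net income

def controls_bottleneck(s: Supplier) -> bool:
    """Rough proxy: pricing power should show up in the financials."""
    return (
        s.gross_margin_trend >= 0.0      # margins resilient, not eroding
        and s.backlog_growth > 0.10      # multi-quarter backlog growth
        and s.cancellation_rate < 0.05   # orders are sticking
        and s.fcf_conversion > 0.5       # growth reaches free cash flow
    )

candidates = [
    Supplier("OpticsCo", 1.2, 0.35, 0.02, 0.8),    # hypothetical
    Supplier("ShellREIT", -2.5, 0.05, 0.10, 0.3),  # hypothetical
]
passing = [s.name for s in candidates if controls_bottleneck(s)]
print(passing)
```

The point of the sketch is the discipline, not the specific cutoffs: a name with headline AI exposure but eroding margins and cancellable orders fails the screen even in a hot capex cycle.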

Key Data Points

indicator: Microsoft FY2025 AI-enabled datacenter investment
value: Approximately $80 billion
source: Microsoft, The golden opportunity for American AI, Jan. 3, 2025
implication: Microsoft's own FY2025 target confirms hyperscaler AI infrastructure spending remains exceptionally large.

indicator: Alphabet 2025 capital expenditure expectation
value: About $75 billion
source: Alphabet investor relations, Q1 2025 earnings call materials
implication: Alphabet's guidance supports the view that technical infrastructure spending remains elevated beyond the initial AI surge.

indicator: Meta 2025 capital expenditure outlook
value: $64 billion to $72 billion
source: Meta investor relations, Meta Reports First Quarter 2025 Results
implication: Meta's raised capex range reinforces ongoing demand for AI data-center and supporting infrastructure.

indicator: Amazon quarterly capex run rate entering 2025
value: $26.3 billion in Q4 2024, implying annualized spending above $100 billion
source: Amazon investor relations, Q4 2024 earnings release and management commentary
implication: Amazon's spending cadence suggests AWS and AI infrastructure remain major capital priorities in 2025.

indicator: U.S. data center share of electricity consumption
value: 4.4% in 2023, projected to reach 6.7% to 12.0% by 2028
source: Lawrence Berkeley National Laboratory, Jan. 15, 2025
implication: Power access and electrical equipment are becoming structural constraints on AI capacity growth.

indicator: U.S. data center electricity usage
value: 176 TWh in 2023, projected to rise to 325 to 580 TWh by 2028
source: Lawrence Berkeley National Laboratory, Jan. 15, 2025
implication: The magnitude of projected load growth strengthens the case for utilities, grid equipment, and cooling infrastructure as bottleneck exposures.

indicator: Optical component revenue tied to AI data-center demand
value: Nearly $25 billion in 2025
source: Cignal AI, Optical Component Revenue Reaches Nearly $25B in 2025
implication: Optics and interconnect remain a differentiated beneficiary as cluster bandwidth requirements increase.

indicator: Advanced packaging tightness
value: TSMC continued aggressive CoWoS expansion amid very strong AI-related demand
source: TSMC investor relations, 4Q24 earnings transcript
implication: Advanced packaging remains a credible near-silicon bottleneck rather than an already-normalized part of the supply chain.

