Meta has expanded its planned AI data center in El Paso, Texas, scaling the facility to 1 gigawatt of capacity and increasing total investment to more than $10 billion. The update materially changes the scope of a project originally announced at $1.5 billion, positioning the site as one of the company’s largest infrastructure builds to date and aligning it with escalating compute demands tied to AI model development and deployment.

The facility, expected to come online later this decade, is designed to support AI-optimised workloads at hyperscale. A 1GW footprint places it in the upper tier of global data center capacity, reflecting a broader shift among large technology firms toward vertically integrated infrastructure to meet rising training and inference requirements. This expansion also reinforces Texas as a strategic geography for AI infrastructure, combining grid capacity, land availability, and policy alignment.

Operationally, the increase in scale carries both capacity and cost implications. The project is expected to support more than 300 permanent roles and several thousand construction jobs at peak, but its significance lies less in employment than in securing long-term compute supply. As AI workloads become more persistent and resource-intensive, ownership of dedicated infrastructure is increasingly tied to performance predictability, cost control, and supply chain resilience.

The announcement also outlines supporting investments in energy and water systems, which remain critical constraints for large-scale AI infrastructure. The company states it is adding significant clean energy capacity to the Texas grid to match electricity demand and is implementing a closed-loop liquid cooling system designed to minimise water use for most of the year. It has also committed to water restoration efforts intended to offset more than the facility’s consumption locally.

These measures reflect growing scrutiny of the environmental footprint of AI infrastructure, particularly in regions facing water stress or grid constraints. While hyperscalers increasingly position sustainability as a design parameter, the scale of new AI data centers continues to test the balance between performance requirements and local resource impact.

The El Paso expansion underscores a wider industry pattern: capital expenditure is shifting decisively toward physical infrastructure as a prerequisite for AI capability. As model performance becomes more tightly coupled to compute availability, large-scale, purpose-built data centers are emerging not as backend assets, but as core strategic differentiators.
