Elon Musk, one of the tech industry's most prominent figures, has once again made headlines with an ambitious plan. His latest project, the xAI Supercomputer Factory, is scheduled to be operational by the fall of 2025, linking 100,000 specialized AI training chips into a single supercomputer. If realized, the project would reshape the AI field and accelerate the push toward more capable, more efficient AI systems.

Musk's Vision for the Supercomputer

According to a report by The Information, Musk laid out the broad outlines of the xAI plan to investors. The supercomputer will be built from as many as 100,000 of Nvidia's flagship H100 GPUs, roughly four times the size of the largest GPU clusters in operation today. Once built, it would be one of the world's most powerful AI training platforms.

Investment and Challenges

Although the supercomputer will require billions of dollars of investment and an ample power supply, it would help the one-year-old startup xAI catch up quickly with its better-funded competitors. Those competitors, including OpenAI and Microsoft, plan to bring online AI chip clusters of similar scale next year, with even larger clusters to follow.

The Power of Clusters

A cluster is a large number of server chips in a single data center, connected by high-speed cabling so that they can work on a complex computation simultaneously. A larger, more computationally powerful cluster enables more capable AI models, which is exactly what Musk is pursuing.
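The idea behind such clusters can be sketched in a few lines: split one workload across many workers, let each compute a partial result, then combine the results over the interconnect. Below is a minimal data-parallel toy in plain Python with four hypothetical workers; the function names are illustrative stand-ins, and real clusters use frameworks such as PyTorch's DistributedDataParallel over NCCL.

```python
# Toy sketch of data-parallel training, the pattern GPU clusters use.
# All numbers and names here are illustrative, not from the article.

def partial_gradient(data_shard):
    # Stand-in for a real backward pass: each worker computes a
    # "gradient" (here just a mean) from its own shard of the batch.
    return sum(data_shard) / len(data_shard)

def all_reduce_mean(gradients):
    # Stand-in for the collective all-reduce that the cluster's
    # interconnect performs: average every worker's gradient.
    return sum(gradients) / len(gradients)

batch = list(range(8))       # one global batch of 8 examples
num_workers = 4              # pretend we have 4 GPUs
shards = [batch[i::num_workers] for i in range(num_workers)]

grads = [partial_gradient(s) for s in shards]
global_grad = all_reduce_mean(grads)
```

The key property is that each worker only ever touches its own shard, so adding more workers shrinks the per-worker load; the cost that grows with cluster size is the communication step, which is why the cabling between servers matters so much.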

Cooperation with Oracle

According to the report, xAI may partner with Oracle to build the supercomputer. The two companies have discussed xAI spending $10 billion to rent cloud servers over the next few years. xAI already rents servers containing roughly 16,000 H100 chips from Oracle, making it Oracle's largest customer for such chips.

Musk's AI Assistant - Grok

Musk envisions Grok, xAI's AI assistant, with fewer restrictions on what it can say than the assistants from OpenAI and Google. xAI is currently training Grok 2.0 on 20,000 GPUs, and the latest version can handle documents, diagrams, and objects in the real world. Musk also plans to extend the model to audio and video.

Site Selection for the Supercomputer

Although xAI's offices are in the San Francisco Bay Area, the decisive factor in siting an AI data center is power supply. According to people familiar with the power needs of GPUs, a data center with 100,000 GPUs may require a dedicated supply of around 100 megawatts, far more than a traditional cloud computing center draws.


Musk's xAI Supercomputer Factory is both a technological leap and a bold bet on the future of AI. As 2025 approaches, there is reason to expect that this supercomputer will bring new capabilities to AI systems and open a new chapter for the field. Whether Musk can deliver on the timeline remains to be seen.