Nvidia to launch its second supercomputer in Taiwan

Nvidia is all in on Taiwan.

Jon Peddie

Nvidia CEO Jensen Huang announced plans to increase the company's investments in Taiwan and revealed that a second supercomputer will be built there. Huang praised TSMC's advanced technology, work ethic, and flexibility; acknowledged other important tech suppliers in Taiwan; and noted Nvidia's collaborations with South Korean suppliers. He also addressed concerns about AI data center power consumption, suggesting such facilities be sited away from population centers, and emphasized Nvidia's dominance in the AI computing market. The company plans to recruit at least 1,000 engineers over the next five years for its large-scale AI R&D center.

Nvidia supercomputer. (Source: Nvidia)
Nvidia is fully committed to Taiwan

While in Taiwan for Computex 2024, Nvidia CEO Jensen Huang said his company would increase its investments in the country notwithstanding geopolitical tensions, highlighting the crucial local chip supply chain and Nvidia's long-standing relationship with TSMC. Speaking in Taipei, Huang revealed plans for a second supercomputer center in Taiwan, following last year's announcement of Taipei-1. The new machine will be similar to Taipei-1, which is located in Kaohsiung and is owned and operated by Nvidia; the location of the second machine is undecided at this time. Taipei-1 is shared free of charge with Taiwan's academic and research and development communities.

“TSMC is incredible,” said Huang. “Advanced technology, incredible work ethic, super flexible.” He cited the 25-year partnership between the two companies.

In addition to TSMC, Taiwan hosts major tech suppliers like Foxconn, Inventec, Wistron, and Pegatron, described by Huang as “underappreciated, unsung heroes.” Huang also expressed interest in evaluating Intel as a potential contract chipmaking service provider. Nvidia collaborates with South Korean suppliers SK Hynix and Samsung, along with Micron from the U.S., for high-bandwidth memory (HBM) essential for AI applications.

Nvidia dominates the AI computing market with its GPUs, Ethernet, and NVLink switches. Nvidia has kept its NVLink technology locked into a proprietary system, prompting competitors such as AMD, Intel, and Cisco to form the Ultra Ethernet Consortium to develop an open standard for AI chip communication. Huang commented on their efforts, saying, “This technology is incredibly complicated.” Huang said that over the next five years, Nvidia will hire at least 1,000 engineers for the large-scale AI R&D center and is recruiting talent on an ongoing basis.

Huang also addressed concerns about the power consumption of AI data centers, suggesting that data centers be set up away from population centers and emphasizing that AI models can be trained anywhere and then deployed as needed.