UPDATED 18:25 EDT / MARCH 18 2025

Nvidia debuts new silicon photonics switches for AI data centers

Nvidia Corp. has introduced a new collection of data center switches that it describes as significantly more efficient than current-generation hardware.

The devices made their debut today at the company’s GTC event in San Jose. During the conference, Nvidia Chief Executive Jensen Huang revealed the company’s latest Blackwell Ultra graphics card. He also detailed that Nvidia’s next generation of artificial intelligence chips, the Vera Rubin series, will launch in the second half of 2026.

Nvidia’s new switches are organized into two product lines. The Spectrum-X Photonics product family uses Ethernet, while the Quantum-X Photonics series is based on InfiniBand. Ethernet and InfiniBand are two data transfer standards that are widely used to power enterprise networks.

Both of Nvidia’s new switch lineups are based on a fairly nascent chip technology called co-packaged optics, or CPO. It’s positioned as a more power-efficient alternative to the traditional way of building networking gear. Nvidia says that its new devices are 3.5 times more power-efficient than earlier products.

Faster optical networks 

Switches are responsible for moving data between servers or from servers to storage equipment and vice versa. Historically, information was transmitted over copper wires in the form of electrical signals. To boost network speeds, data center operators are increasingly switching to fiber-optic network designs that transmit data as light rather than electricity.

The servers connected to an optical network process data in the form of electrical signals. As a result, data has to be turned into light beams before it can be sent over a fiber-optic cable. After reaching its destination, the light has to be turned back into electrical signals that the receiving server can understand.

The process of turning data from light to electricity and vice versa is usually done with compact devices called pluggable transceivers. They attach to the switches that power a data center’s network. A pluggable transceiver contains a laser emitter that generates light beams, a modulator that encodes data into the physical properties of those light beams and various other optical components. A large data center can contain millions of such devices.

Nvidia says its new switches remove the need for pluggable transceivers. The company achieved that by integrating a transceiver directly into the chip that powers the switches. This arrangement will reduce the need for customers to purchase standalone transceivers, which could significantly lower hardware costs.

CPO makes it possible to combine a switch’s processor and transceiver into a single chip. In the past, those two components could only be implemented on separate chips, which is why companies currently have to plug standalone transceiver modules into their switches.

Placing the processor and transceiver on the same chip reduces the physical distance between them, which allows data to move between the two components faster. Moreover, Nvidia says that its switches require four times fewer lasers than earlier hardware to turn data into light beams. This is significant because laser emitters often account for most of the power consumed by pluggable transceiver modules.
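For a rough sense of how a lower laser count translates into lower power, here is a minimal back-of-envelope sketch in Python. The per-transceiver wattage and the laser’s share of it are illustrative assumptions, not Nvidia figures; only the four-times-fewer-lasers ratio and the 144-port count come from the announcement.

```python
# Back-of-envelope sketch: how a lower laser count cuts optics power per switch.
# The wattage and laser share below are illustrative assumptions, not Nvidia figures;
# only the "4x fewer lasers" ratio and the 144-port count come from the announcement.

PLUGGABLE_POWER_W = 30.0   # assumed draw of one 800G pluggable transceiver
LASER_SHARE = 0.6          # assumed fraction of that draw consumed by the laser
PORTS = 144                # Quantum-X Photonics port count

laser_w = PLUGGABLE_POWER_W * LASER_SHARE
other_w = PLUGGABLE_POWER_W - laser_w

pluggable_total_w = PORTS * (laser_w + other_w)
cpo_total_w = PORTS * (laser_w / 4 + other_w)  # 4x fewer lasers, rest held constant

print(f"Pluggable optics: ~{pluggable_total_w / 1000:.1f} kW per switch")
print(f"Co-packaged optics: ~{cpo_total_w / 1000:.1f} kW per switch")
print(f"Improvement: ~{pluggable_total_w / cpo_total_w:.1f}x")
```

This simplified comparison only scales down the laser power; Nvidia’s 3.5-times efficiency figure presumably also reflects removing other per-port electronics that the sketch holds constant.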

“AI factories are a new class of data centers with extreme scale, and networking infrastructure must be reinvented to keep pace,” Huang said. “By integrating silicon photonics directly into switches, Nvidia is shattering the old limitations of hyperscale and enterprise networks and opening the gate to million-GPU AI factories.”

New network devices

The chips in Nvidia’s new switches are made using Taiwan Semiconductor Manufacturing Co.’s implementation of CPO. TSMC refers to the technology as the Compact Universal Photonic Engine, or COUPE for short. The chipmaker says that it provides the ability to combine a 65-nanometer electronic processor with a photonic integrated circuit.

“TSMC’s silicon photonics solution combines our strengths in both cutting-edge chip manufacturing and TSMC-SoIC 3D chip stacking to help NVIDIA unlock an AI factory’s ability to scale to a million GPUs and beyond,” said TSMC CEO C.C. Wei.

Nvidia’s InfiniBand-based silicon photonics switches, the Quantum-X Photonics series, will ship with 144 ports that can each provide 800 gigabits per second of throughput. The company says the devices provide twice the speed of earlier hardware when powering AI data center networks. The product line is scheduled to ship later this year. 

The Ethernet-based Spectrum-X Photonics series, meanwhile, will provide throughput of up to 400 terabits per second. The switches are set to ship in multiple configurations with 128 or 512 ports. Nvidia plans to launch the product line in 2026.
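For context, the per-switch aggregates implied by those port counts work out as follows. The Python sketch below assumes the Spectrum-X ports run at the same 800 gigabits per second as the Quantum-X line, which the article does not state explicitly.

```python
# Aggregate-throughput arithmetic for the two announced switch families.
# Per-port speed for the Spectrum-X line is an assumption here (matching Quantum-X);
# the article states only the roughly 400 Tb/s aggregate figure.

GBPS_PER_PORT = 800      # stated for Quantum-X Photonics; assumed for Spectrum-X

quantum_x_ports = 144
spectrum_x_ports = 512   # the larger of the two announced configurations

quantum_x_tbps = quantum_x_ports * GBPS_PER_PORT / 1000
spectrum_x_tbps = spectrum_x_ports * GBPS_PER_PORT / 1000

print(f"Quantum-X Photonics:  {quantum_x_ports} x {GBPS_PER_PORT} Gb/s = {quantum_x_tbps:.1f} Tb/s")
print(f"Spectrum-X Photonics: {spectrum_x_ports} x {GBPS_PER_PORT} Gb/s = {spectrum_x_tbps:.1f} Tb/s")
```

At 512 ports, the assumed per-port rate lands at roughly 410 terabits per second, in line with the approximately 400 terabits per second Nvidia quotes for the line.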

Image: Nvidia
