The NVIDIA ConnectX-7 family of network adapters supports both the InfiniBand and Ethernet protocols. It enables a wide range of smart, scalable,
and feature-rich networking solutions that address everything from traditional enterprise needs to the world's most demanding AI, scientific computing, and hyperscale cloud data center workloads.
Features
PCIe Interface
– PCIe 5.0 compliant, 16 lanes
– Supports PCIe x1, x2, x4, x8, and x16 configurations
– NVIDIA Multi-Host™ supports connection of up to 8 Hosts
– PCIe Atomic
– Transaction Layer Packet (TLP) Processing Hint (TPH)
– PCIe Switch Downstream Port Control (DPC)
– Advanced Error Reporting (AER)
– Access Control Service (ACS) for peer-to-peer secure communications
– Process Address Space ID (PASID)
– Address Translation Service (ATS)
– MSI/MSI-X mechanism support
– Support for SR-IOV
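On Linux hosts, the SR-IOV capability above is typically enabled by writing the desired virtual-function count to the kernel's sysfs interface. A minimal sketch, assuming a hypothetical PCI address of 0000:03:00.0 for the adapter's physical function:

```c
/* Minimal SR-IOV sketch: enable virtual functions through sysfs.
 * Assumes a Linux host and a hypothetical PF address 0000:03:00.0;
 * adjust the path to the slot reported by lspci. If VFs are already
 * enabled, write 0 first before requesting a new count. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *path =
        "/sys/bus/pci/devices/0000:03:00.0/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("open sriov_numvfs");
        return EXIT_FAILURE;
    }
    /* Request 4 VFs; the upper bound is reported in sriov_totalvfs. */
    if (fprintf(f, "4\n") < 0) {
        perror("write sriov_numvfs");
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);
    printf("Requested 4 VFs on 0000:03:00.0\n");
    return 0;
}
```

The same write can also be issued from a shell; the number of VFs requested must not exceed the value reported in sriov_totalvfs.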
InfiniBand
– Compliant with InfiniBand Trade Association (IBTA) Specification 1.5
– Up to four ports
– Remote Direct Memory Access (RDMA), send/receive semantics (see the sketch after this list)
– Hardware-based congestion control
– Atomic operations
– 16 million input/output (IO) channels
– Maximum Transmission Unit (MTU) support from 256B to 4KB, with message length support up to 2GB
– 8 virtual lanes (VLs) + VL15
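A minimal libibverbs sketch of the RDMA send/receive semantics listed above: it opens the first RDMA device, registers a receive buffer, creates a Reliable Connection queue pair, and posts one receive work request. Queue depths, buffer size, and the single-device assumption are illustrative; full connection establishment (for example via rdma_cm) is omitted.

```c
/* Minimal RDMA verbs sketch (libibverbs): open a device, register a
 * buffer, create an RC queue pair, and post one receive work request.
 * Connection establishment is omitted; sizes and queue depths are
 * illustrative. Link with -libverbs. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    /* Register a 4 KB buffer the HCA may write incoming data into. */
    size_t len = 4096;
    void *buf = calloc(1, len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);

    /* Reliable Connection QP with small illustrative queue depths. */
    struct ibv_qp_init_attr qpa;
    memset(&qpa, 0, sizeof(qpa));
    qpa.send_cq = cq;
    qpa.recv_cq = cq;
    qpa.qp_type = IBV_QPT_RC;
    qpa.cap.max_send_wr = 16;
    qpa.cap.max_recv_wr = 16;
    qpa.cap.max_send_sge = 1;
    qpa.cap.max_recv_sge = 1;
    struct ibv_qp *qp = ibv_create_qp(pd, &qpa);

    /* Move the QP from RESET to INIT so receives can be posted. */
    struct ibv_qp_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.qp_state = IBV_QPS_INIT;
    attr.pkey_index = 0;
    attr.port_num = 1;           /* first physical port */
    attr.qp_access_flags = 0;    /* no remote access in this sketch */
    ibv_modify_qp(qp, &attr, IBV_QP_STATE | IBV_QP_PKEY_INDEX |
                             IBV_QP_PORT | IBV_QP_ACCESS_FLAGS);

    /* Post one receive; completions would be reaped with ibv_poll_cq(). */
    struct ibv_sge sge = { .addr = (uintptr_t)buf, .length = (uint32_t)len,
                           .lkey = mr->lkey };
    struct ibv_recv_wr wr = { .wr_id = 1, .sg_list = &sge, .num_sge = 1 };
    struct ibv_recv_wr *bad = NULL;
    ibv_post_recv(qp, &wr, &bad);

    printf("posted one receive on %s\n", ibv_get_device_name(devs[0]));

    ibv_destroy_qp(qp);
    ibv_dereg_mr(mr);
    free(buf);
    ibv_destroy_cq(cq);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```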
Enhanced Networking
– Hardware-based Reliable Transport
– Extended Reliable Connection (XRC) Transport
– Dynamically Connected Transport (DCT)
– GPUDirect® RDMA (see the sketch after this list)
– GPUDirect Storage
– GPUDirect RDMA Support for Dynamic Routing
– Enhanced Atomic Operations
– Advanced memory mapping support, allowing user-mode registration and memory remapping (UMR)
– On-Demand Paging (ODP), including registration-free RDMA Memory Access
– Enhanced congestion control
– Burst buffer offload
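GPUDirect RDMA allows the adapter to read and write GPU memory directly, bypassing host memory. The sketch below registers a CUDA allocation with the HCA; it assumes the nvidia-peermem (formerly nv_peer_mem) kernel module is loaded, and the device indices and sizes are illustrative. Error checking on the CUDA calls is omitted for brevity.

```c
/* GPUDirect RDMA sketch: register GPU memory with the HCA so that
 * RDMA reads/writes bypass host memory. Assumes the nvidia-peermem
 * kernel module is loaded. Build with -lcuda -libverbs. */
#include <stdint.h>
#include <stdio.h>
#include <cuda.h>
#include <infiniband/verbs.h>

int main(void)
{
    /* Allocate 1 MB of GPU memory with the CUDA driver API. */
    CUdevice dev; CUcontext cuctx; CUdeviceptr dptr;
    size_t len = 1 << 20;
    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&cuctx, 0, dev);
    cuMemAlloc(&dptr, len);

    /* Open the first RDMA device and register the GPU buffer. */
    struct ibv_device **devs = ibv_get_device_list(NULL);
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    struct ibv_mr *mr = ibv_reg_mr(pd, (void *)(uintptr_t)dptr, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr on GPU memory"); return 1; }
    printf("GPU buffer registered, rkey=0x%x\n", mr->rkey);

    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    cuMemFree(dptr);
    cuCtxDestroy(cuctx);
    return 0;
}
```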
In-Network Computing
– Collective Operation Offload
– Vector Collective Operation Offload
– MPI Tag Matching
– MPI_Alltoall Offload (see the sketch after this list)
– Rendezvous Protocol Offload
– In-Network Memory
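The MPI_Alltoall offload listed above targets the all-to-all personalized exchange pattern. A minimal MPI sketch of that pattern follows; whether the collective is actually offloaded to the adapter depends on the MPI library and its collective-offload configuration, which is an assumption not shown here.

```c
/* Minimal MPI_Alltoall example: each rank sends one int to every
 * other rank. Whether the exchange is offloaded to the adapter
 * depends on the MPI library and its collective-offload settings.
 * Build with mpicc and run with mpirun. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank * 100 + i;   /* distinct payload per peer */

    /* All-to-all personalized exchange: recvbuf[i] comes from rank i. */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

    printf("rank %d received from rank 0: %d\n", rank, recvbuf[0]);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```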
Hardware-based IO Virtualization
– Single Root IO Virtualization (SR-IOV)
Storage Offloads
– Block level encryption: XTS-AES 256/512-bit key
– NVMe over Fabrics (NVMe-oF) offload for target servers
– Wire-speed T10 Data Integrity Field (DIF) signature handoff for ingress and egress traffic
– Storage protocols: SRP, iSER, NFS RDMA, SMB Direct, NVMe-oF
Remote Boot
– Remote boot over InfiniBand
– Remote boot over iSCSI
– Unified Extensible Firmware Interface (UEFI)
– Pre-boot Execution Environment (PXE)
Security
– Secure Boot via Hardware Root of Trust
– Secure Firmware Updates
– Flash encryption
System Requirements / Distribution*
– Built-in drivers for major operating systems:
Linux: RHEL, Ubuntu
Windows
– Virtualization and Containers
VMware ESXi (SR-IOV)
Kubernetes
– OpenFabrics Enterprise Distribution (OFED)
– OpenFabrics Windows Distribution (WinOF-2)
Essentials
Product Collection | 400G Ethernet SmartNIC
Controller Processors | Mellanox ConnectX-7
Bus Type/Bus Width | PCIe Gen 4.0/5.0 (16 GT/s / 32 GT/s) through x16 edge connector
Configuration | 1 x 200G QSFP56 network interface
Data Rates Supported | 200/100/50/40/25/10GbE
Bracket | Full-height bracket installed; low-profile bracket included in package
Dimensions | 172 mm x 69 mm (PCIe board)
Compatibility
Ethernet Network Standards | Mellanox adapters comply with the following IEEE 802.3 standards: 200GbE / 100GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE
  IEEE 802.3ck 100/200 Gigabit Ethernet
  IEEE 802.3cd, IEEE 802.3bs, IEEE 802.3cm, IEEE 802.3cn, IEEE 802.3cu
  IEEE 802.3bj, IEEE 802.3bm 100 Gigabit Ethernet
  IEEE 802.3by, Ethernet Consortium 25/50 Gigabit Ethernet, supporting all FEC modes
  IEEE 802.3ba 40 Gigabit Ethernet, IEEE 802.3by 25 Gigabit Ethernet, IEEE 802.3ae 10 Gigabit Ethernet
  IEEE 802.3ap based auto-negotiation and KR startup
  Proprietary Ethernet protocols (20/40GBASE-R2, 50GBASE-R4)
  IEEE 802.3ad, 802.1AX Link Aggregation
  IEEE 802.1Q, 802.1P VLAN tags and priority
  IEEE 802.1Qau (QCN) Congestion Notification
  IEEE 802.1Qaz (ETS), IEEE 802.1Qbb (PFC), IEEE 802.1Qbg, IEEE 1588v2
  Jumbo frame support (9.6 KB)
Protocol Support | InfiniBand: IBTA v1.5; Auto-Negotiation: NDR (4 lanes x 100Gb/s per lane) port, NDR200 (2 lanes x 100Gb/s per lane) port, HDR (50Gb/s per lane) port, HDR100 (2 lanes x 50Gb/s per lane), EDR (25Gb/s per lane) port, FDR (14.0625Gb/s per lane), 1X/2X/4X SDR (2.5Gb/s per lane)
  Ethernet: 200GAUI-2 C2M, 200GAUI-4 C2M, 200GBASE-CR4, 100GAUI-2 C2M, 100GAUI-1 C2M, 100GBASE-CR4, 100GBASE-CR2, 100GBASE-CR1, 50GAUI-2 C2M, 50GAUI-1 C2M, 50GBASE-CR, 50GBASE-R2, 40GBASE-CR4, 40GBASE-R2, 25GBASE-R, 10GBASE-R, 10GBASE-CX4, 1000BASE-CX, CAUI-4 C2M, 25GAUI C2M, XLAUI C2M, XLPPI, SFI
Remote Boot | Remote boot over InfiniBand
  Remote boot over iSCSI
  UEFI and PXE support for x86 and Arm servers
Operating Systems/Distributions*
System Requirements* | Windows, RHEL/CentOS, FreeBSD, VMware, OpenFabrics Enterprise Distribution (OFED), OpenFabrics Windows Distribution (WinOF)
  An available PCI Express x16 slot
Environment & Certification
Power Consumption | Max. 17.5 W
Storage Humidity Maximum | 90% non-condensing relative humidity at 35 °C
Storage Temperature | -40 °C to 70 °C (-40 °F to 158 °F)
Operating Temperature | 0 °C to 55 °C (32 °F to 131 °F)
LED Indicators | A constant Green indicates a link with the maximum networking speed
  A constant Yellow indicates a link with less than the maximum networking speed
Certifications | CE, FCC, RoHS
SUNWEIT
We make every effort to correct technical or other errors in product specifications, but some errors may still occur due to oversight.
We reserve the right not to accept any order that contains incorrect information. Images are for reference only.
All other trademarks or trade names mentioned in this document belong to the organizations that own them or their products.
We do not claim any rights or interests in the trademarks or trade names of other organizations.
Copyright © 2023 SUNWEIT Corporation | All Rights Reserved