Mellanox ConnectX-4 network controller cards with 100Gb/s Ethernet connectivity provide a high-performance, flexible solution for Web 2.0, cloud, data analytics, database, and storage platforms. ConnectX-4 offers an unmatched combination of 100Gb/s bandwidth per port (in single- and dual-port configurations), low latency, and specialized hardware offloads, addressing the compute and storage demands of both today's and next-generation data centers.
NEW FEATURES
– 100Gb/s Ethernet per port
– 1/10/25/40/50/56/100 Gb/s speeds
– Single and dual-port options available
– T10-DIF Signature Handover
– CPU offloading of transport operations
– Application offloading
– Mellanox PeerDirect communication acceleration
– Hardware offloads for NVGRE, VXLAN and GENEVE encapsulated traffic
– End-to-end QoS and congestion control
– Hardware-based I/O virtualization
– RoHS compliant
BENEFITS
– High performance silicon for applications requiring high bandwidth, low latency and high message rate
– World-class cluster, network, and storage performance
– Smart interconnect for x86, Power, Arm, and GPU-based compute and storage platforms
– Cutting-edge performance in virtualized overlay networks NVGRE and GENEVE
– Efficient I/O consolidation, lowering data center costs and complexity
– Virtualization acceleration
– Power efficiency
– Scalability to tens-of-thousands of nodes
RDMA over Converged Ethernet (RoCE)
ConnectX-4 supports the RoCE specifications, delivering low-latency, high-performance RDMA services over Ethernet networks. Leveraging Data Center Bridging (DCB) capabilities together with the advanced hardware congestion-control mechanisms of ConnectX-4 EN,
RoCE provides efficient low-latency RDMA services over both Layer 2 and Layer 3 networks.
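One way to see why RoCE v2 is routable over Layer 3: on the wire, a RoCE v2 packet is an ordinary UDP datagram addressed to the IANA-assigned destination port 4791, with the InfiniBand transport headers and payload carried as the UDP payload. The sketch below (plain Python, no RDMA libraries; the source port and payload length are placeholder values) builds just that UDP header:

```python
import struct

ROCEV2_UDP_PORT = 4791  # IANA-assigned destination port for RoCE v2

def rocev2_udp_header(src_port: int, payload_len: int) -> bytes:
    """Build the 8-byte UDP header that fronts a RoCE v2 packet.

    The UDP checksum is set to 0 (permitted for UDP; in RoCE v2 the
    InfiniBand ICRC covers packet integrity instead).
    """
    length = 8 + payload_len  # UDP length field covers header + payload
    return struct.pack("!HHHH", src_port, ROCEV2_UDP_PORT, length, 0)

hdr = rocev2_udp_header(src_port=49152, payload_len=32)
# The destination-port field (bytes 2-3) is what marks the packet as RoCE v2.
assert struct.unpack("!H", hdr[2:4])[0] == 4791
```

Because the outer header is plain UDP/IP, standard routers forward RoCE v2 traffic without any RDMA awareness; only the endpoints (here, the ConnectX-4 hardware) interpret the InfiniBand payload.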
I/O Virtualization
ConnectX-4 SR-IOV technology provides dedicated adapter resources with guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization with ConnectX-4 gives data center administrators better server utilization while reducing cost, power,
and cabling complexity, allowing more virtual machines and more tenants on the same hardware.
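On Linux, SR-IOV virtual functions are typically enabled through the kernel's standard `sriov_numvfs` sysfs attribute rather than a vendor-specific tool. A minimal sketch, with the PCI address and VF count as example values and the sysfs root parameterized so the function can be exercised without hardware:

```python
from pathlib import Path

def set_num_vfs(pci_addr: str, num_vfs: int,
                sysfs_root: str = "/sys/bus/pci/devices") -> None:
    """Set the SR-IOV virtual-function count via the standard Linux
    sysfs attribute `sriov_numvfs` for the given PCI device.

    The kernel rejects changing a non-zero VF count directly, so any
    existing VFs are released (by writing 0) before the new count is set.
    """
    attr = Path(sysfs_root) / pci_addr / "sriov_numvfs"
    if attr.read_text().strip() != "0":
        attr.write_text("0\n")
    attr.write_text(f"{num_vfs}\n")

# Example (hypothetical PCI address of a ConnectX-4 port):
#   set_num_vfs("0000:03:00.0", 8)
```

Each VF then appears to the host and to guest VMs as its own PCI network device, which is what lets the adapter isolate per-VM traffic in hardware.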
Overlay Networks
In order to better scale their networks, data center operators often create overlay networks that carry traffic from individual virtual machines over logical tunnels in encapsulated formats such as NVGRE, VXLAN, and GENEVE. While this solves network scalability issues, it hides the TCP packet from the hardware offloading engines,
placing higher loads on the host CPU. ConnectX-4 effectively addresses this by providing NVGRE, VXLAN, and GENEVE hardware offloading engines that encapsulate and decapsulate the overlay protocol headers, enabling the traditional offloads to be performed on the encapsulated traffic. With ConnectX-4,
data center operators can achieve native performance in the new network architecture.
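For concreteness, the encapsulation these offload engines parse is small and fixed-format. VXLAN (RFC 7348), for example, prepends an outer Ethernet/IP/UDP stack plus an 8-byte VXLAN header whose only meaningful fields are a flags bit and the 24-bit VXLAN Network Identifier (VNI) that selects the tenant segment. A sketch of just that header layout in plain Python (no network access involved):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Word 0: flags byte with the I bit (0x08) set, then 24 reserved bits.
    Word 1: the 24-bit VNI in the upper bits, low 8 bits reserved.
    """
    assert 0 <= vni < (1 << 24), "VNI is a 24-bit field"
    return struct.pack("!II", 0x08 << 24, vni << 8)

def parse_vni(header: bytes) -> int:
    """Recover the VNI from an 8-byte VXLAN header."""
    _, word1 = struct.unpack("!II", header)
    return word1 >> 8

assert parse_vni(vxlan_header(5001)) == 5001
```

The NIC's job during encapsulation offload is essentially to synthesize (or strip) this header plus the outer UDP/IP headers in hardware, so checksum and segmentation offloads can still apply to the inner TCP flow.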
Software Support
All Mellanox adapter cards are supported by Windows, Linux distributions, VMware, FreeBSD, and Citrix XenServer. ConnectX-4 adapters support OpenFabrics-based RDMA protocols and software, and are compatible with configuration and management tools from OEMs and operating system vendors.
| Essentials | |
| --- | --- |
| Product Collection | 100G Ethernet SmartNIC |
| Controller Processor | Mellanox ConnectX-4 |
| Bus Type/Bus Width | PCI Express (PCIe) Gen 3.0 x16 |
| Configuration | 2 x 100G QSFP28 ports |
| Data Rates Supported | 100/56/50/40/25/10/1 GbE per port |
| Bracket | Full-height bracket installed; low-profile bracket included in package |
| Dimensions | 150 mm x 68 mm (PCIe board) |

| Compatibility | |
| --- | --- |
| Ethernet Network Standards | 100GbE / 56GbE / 50GbE / 40GbE / 25GbE / 10GbE / 1GbE |
| | IEEE 802.3bj, 802.3bm 100 Gigabit Ethernet |
| | 25G Ethernet Consortium 25/50 Gigabit Ethernet |
| | IEEE 802.3ba 40 Gigabit Ethernet |
| | IEEE 802.3ae 10 Gigabit Ethernet |
| | IEEE 802.3az Energy Efficient Ethernet |
| | IEEE 802.3ap-based auto-negotiation and KR startup |
| | IEEE 802.3ad, 802.1AX Link Aggregation |
| | IEEE 802.1Q, 802.1p VLAN tags and priority |
| | IEEE 802.1Qau (QCN) Congestion Notification |
| | IEEE 802.1Qaz (ETS), IEEE 802.1Qbb (PFC), IEEE 802.1Qbg, IEEE 1588v2 |
| | Jumbo frame support (9.6 KB) |
| Protocol Support | Open MPI, IBM PE, OSU MPI (MVAPICH/2), Intel MPI |
| | Platform MPI, UPC, OpenSHMEM |
| | TCP/UDP, MPLS, VXLAN, NVGRE, GENEVE |
| | iSER, NFS over RDMA, SMB Direct |
| | uDAPL |
| Remote Boot | Remote boot over Ethernet, remote boot over iSCSI, PXE and UEFI |
| Operating Systems/Distributions* | Windows, RHEL/CentOS, FreeBSD, VMware, OpenFabrics Enterprise Distribution (OFED), OpenFabrics Windows Distribution (WinOF) |
| System Requirements* | An available PCI Express x16 slot |

| Environment & Certification | |
| --- | --- |
| Power Consumption | 24.5 W maximum |
| Operating Temperature | 0 °C to 60 °C (32 °F to 140 °F) |
| Storage Temperature | -40 °C to 70 °C (-40 °F to 158 °F) |
| Storage Humidity (Maximum) | 90% non-condensing relative humidity at 35 °C |
| LED Indicators | Constant green: link at the maximum network speed |
| | Constant yellow: link below the maximum network speed |
| Certifications | CE, FCC, RoHS |
SUNWEIT
We make every effort to avoid technical errors or other inaccuracies in product specifications, but some errors may still occur due to our oversight.
We reserve the right not to accept any order that contains incorrect information. Images are for reference only.
All other trademarks or trade names mentioned in the text belong to the organizations that own them or their products.
We claim no rights or interests in the trademarks or trade names of other organizations.
Copyright © 2023 SUNWEIT Corporation | All Rights Reserved | Privacy Protection | Legal Agreements