400 Gbit/s flagship ports.
Nearly half a terabit of egress for the heaviest IPTV multicast, CDN origins and DDoS scrubbing nodes. Custom-quoted, available at Equinix AM5 and NewTelco Kyiv.
If you're asking, you probably do.
400G isn't a vanity port. These are the workloads that actually saturate it.
100k+ concurrent HD viewers
At 4 Mbps per stream, 100k HLS clients = 400 Gbit/s. One server, one port, no LB.
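The arithmetic behind that claim is worth seeing written out. A minimal sanity check, using only the figures quoted above (4 Mbps per stream, 100k viewers):

```python
# Headline check: 100k HLS clients at 4 Mbps each saturate one 400G port.
STREAM_MBPS = 4        # per-viewer HLS bitrate, as quoted above
VIEWERS = 100_000      # concurrent clients

total_gbps = STREAM_MBPS * VIEWERS / 1_000   # Mbit/s -> Gbit/s
print(f"{total_gbps:.0f} Gbit/s")            # -> 400 Gbit/s
```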
Tier-1 origin shields
Push to multiple CDN POPs simultaneously. A single 400G origin replaces a rack of sixteen 25G boxes.
Multi-Tbit/s scrubbing
Inline filtering at line rate. We use 400G internally for our own scrubbing fabric.
AAA patch days
Steam-style 80GB patch drops. 400G handles 50k concurrent downloads without queueing.
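What "without queueing" means per client, sketched from the numbers above (an even split is an idealization; real traffic is burstier):

```python
# Per-client share when 50k simultaneous downloads split one 400G port evenly.
PORT_GBPS = 400
CLIENTS = 50_000

per_client_mbps = PORT_GBPS * 1_000 / CLIENTS   # Gbit/s -> Mbit/s
print(f"{per_client_mbps:.0f} Mbit/s per client")  # -> 8 Mbit/s
```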
Hot DR replication
Sync a 100TB dataset across continents in under an hour. 400G + jumbo frames.
Bulk data transfer
LHC-class scientific transfers. We've moved 5PB datasets between AMS and KBP for clients.
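Rough wall-clock times for both replication figures above, assuming the port runs at full line rate (no protocol overhead, decimal units):

```python
# 400 Gbit/s = 50 GB/s of payload in the ideal case.
PORT_GBIT_S = 400
bytes_per_s = PORT_GBIT_S / 8 * 1e9   # -> 5e10 bytes/s

for label, size_bytes in [("100 TB DR sync", 100e12),
                          ("5 PB bulk move", 5e15)]:
    hours = size_bytes / bytes_per_s / 3600
    print(f"{label}: {hours:.1f} h")
# 100 TB -> ~0.6 h (under an hour, as claimed); 5 PB -> ~27.8 h
```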
What's in the rack.
Arista 7280R3 / 7800R3
- Deep buffers for elephant-flow IPTV multicast
- Full-table BGP, EVPN-VXLAN underlay
- Sub-4µs port-to-port latency
NVIDIA ConnectX-7 / Intel E810
- 2×200G or 1×400G QSFP-DD
- RDMA, RoCEv2, DPDK out of the box
- Hardware timestamping for PTP
DR4 (500m) / FR4 (2km)
- Single-mode 4×100G PAM4
- Hot-spare optics on-site
- Coherent ZR optional for DCI
Where you can get it.
400G isn't a SKU. It's a conversation.
Custom-quoted per workload. Lead time typically 7–14 days for new builds.