Like high-performance sports cars, today's data centers must perform faster and more efficiently than ever before. Advanced compute and storage I/O demand higher-speed top-of-rack (ToR) connectivity feeding fabrics built on 400G spines (with 800G on the horizon).
Because most network operations haven't kept up with API- and automation-driven practices, legacy enterprise data centers are like a sports car stuck in first gear, unable to take full advantage of its speed. Manual network provisioning and configuration can't match the velocity of modern development practices and microservice applications. Bolted-on appliances, agents, and complex traffic engineering introduce further drag, and the ability to deliver logs and telemetry to analytics tools that generate meaningful, actionable output is limited.
Five design principles for next-gen data centers
How do you move past the limitations of legacy architectures to a modern data center that runs like a Ferrari? Get your data center out of first gear and operating at full speed with these five design principles:
1. Modernize with DPU-enabled switches
Data processing units (DPUs) are processors that offload and accelerate network and security functions. Originally designed for servers, DPUs have been adopted at scale by hyperscalers, proving out the technology.
The HPE Aruba Networking CX 10000 Switch Series with AMD Pensando is the first to fully integrate dual DPUs into an L2/3 switch, moving stateful firewalling, NAT, and microsegmentation closer to workloads without impacting switch processing performance.
Embedding DPUs in data center switches instead of installing them in servers simplifies brownfield deployment and lowers total cost. Rather than buying a DPU for every server in the rack, a DPU-enabled ToR switch provides similar benefits at a fraction of the price, with no need to unrack and crack open every server to install new silicon. DPU-enabled switches let you adopt a distributed services model in existing data center environments at the rack or pod level, without painful upgrades or long deployment times.
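To make the economics concrete, here is a rough back-of-envelope sketch in Python. Every price, labor figure, and server count in it is an illustrative placeholder, not vendor pricing; the point is simply that per-server DPU costs scale with server count while the switch premium is paid once per rack.

```python
# Back-of-envelope comparison: per-server DPUs vs. one DPU-enabled ToR switch
# for a single rack. All figures are illustrative placeholders, not list prices.

def per_server_dpu_cost(servers_per_rack: int, dpu_price: float,
                        install_labor_per_server: float) -> float:
    """Cost of adding a DPU (plus install labor) to every server in the rack."""
    return servers_per_rack * (dpu_price + install_labor_per_server)

def dpu_switch_premium(switch_premium: float) -> float:
    """Incremental cost of choosing a DPU-enabled ToR switch over a standard one."""
    return switch_premium

if __name__ == "__main__":
    servers = 24  # assumed servers per rack (placeholder)
    server_side = per_server_dpu_cost(servers, dpu_price=2000.0,
                                      install_labor_per_server=150.0)
    switch_side = dpu_switch_premium(switch_premium=15000.0)
    print(f"Per-server DPUs:        ${server_side:,.0f}")
    print(f"DPU-enabled ToR switch: ${switch_side:,.0f}")
```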
2. Bring network and security services closer to workloads with a distributed services architecture
Security services in traditional data centers are typically delivered in two ways:
- Hardware appliances that hang off the data center network, requiring traffic engineering to direct flows out to the security cluster through a stack of appliances, then back into the network fabric, adding operational complexity and latency.
- Software agents that run in VMs or containers on servers, requiring installation of a host of agents and drivers that take device memory and CPU away from application processing and add a new tier of licensing and management costs.
Running firewall, NAT, and segmentation services within the network fabric brings those services closer to workloads and traffic flows while avoiding both the complex traffic engineering of appliance clusters and the cost and management burdens of server-based agents. DPU-enabled switches make it easier to adopt a distributed services architecture in brownfield data centers, modernizing infrastructure at lower cost and with less operational disruption.
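As a rough illustration of why the hairpin path matters, the sketch below totals per-hop latency for an appliance-cluster path versus in-fabric enforcement at the ToR. Every per-hop figure is a placeholder assumption, not a measurement; only the shape of the comparison is the point.

```python
# Illustrative latency comparison: hairpinning east-west traffic through an
# appliance stack vs. applying services in-line at the ToR. All per-hop
# values are placeholder assumptions in microseconds.

HAIRPIN_PATH = [
    ("server -> ToR", 5),
    ("ToR -> spine", 5),
    ("spine -> security cluster", 10),
    ("appliance stack processing", 50),
    ("security cluster -> spine", 10),
    ("spine -> ToR", 5),
    ("ToR -> server", 5),
]

IN_FABRIC_PATH = [
    ("server -> ToR (DPU services applied in-line)", 8),
    ("ToR -> server", 5),
]

def total_us(path):
    """Sum the per-hop latencies for a path."""
    return sum(latency for _, latency in path)

print(f"Hairpin through appliances: {total_us(HAIRPIN_PATH)} us")
print(f"Services in the ToR fabric: {total_us(IN_FABRIC_PATH)} us")
```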
3. Extend Zero Trust closer to applications
Zero Trust allows finer-grained control of application and service communications than typical port/protocol rules or ACLs, but it requires visibility into all of your traffic. In modern hypervisor- and microservices-based environments, most data center traffic runs east-west and passes through ToR switches. Distributing stateful firewall and microsegmentation services on ToR DPUs takes advantage of the visibility switches already have into these communications, enforcing precise rules on host-to-host communication without hairpinning traffic out to security appliances.
And because you can inspect every packet or flow that passes through your ToR layer, you dramatically increase your chances of spotting—and stopping—the kind of lateral movement that attackers use to burrow into your infrastructure.
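For illustration, the sketch below shows the kind of default-deny, workload-to-workload allow list that microsegmentation at the ToR layer enforces. The policy schema, workload labels, and helper functions are hypothetical and do not represent the CX 10000 configuration model; they simply show how host-to-host rules are stricter than broad port or subnet ACLs.

```python
# Minimal sketch of default-deny microsegmentation on east-west flows.
# The schema and workload mapping below are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str    # source endpoint IP
    dst: str    # destination endpoint IP
    dport: int  # destination port
    proto: str  # "tcp" or "udp"

# Explicit allow list of workload-to-workload communications for this rack.
POLICY = [
    {"src": "web-tier", "dst": "app-tier", "dport": 8443, "proto": "tcp"},
    {"src": "app-tier", "dst": "db-tier",  "dport": 5432, "proto": "tcp"},
]

def workload_of(endpoint: str) -> str:
    """Resolve an endpoint IP to a workload label (stubbed for this sketch)."""
    return {"10.1.1.10": "web-tier", "10.1.2.10": "app-tier",
            "10.1.3.10": "db-tier"}.get(endpoint, "unknown")

def allowed(flow: Flow) -> bool:
    """Permit the flow only if an explicit rule matches; otherwise deny."""
    return any(r["src"] == workload_of(flow.src)
               and r["dst"] == workload_of(flow.dst)
               and r["dport"] == flow.dport
               and r["proto"] == flow.proto
               for r in POLICY)

print(allowed(Flow("10.1.1.10", "10.1.2.10", 8443, "tcp")))  # True: web -> app is permitted
print(allowed(Flow("10.1.1.10", "10.1.3.10", 5432, "tcp")))  # False: web cannot reach db directly
```

Because the default is deny, a web-tier host probing the database tier is blocked at the first ToR hop, which is exactly the lateral movement described above.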
4. Blend network and security AIOps
Network and security telemetry is invaluable for security analysis, troubleshooting, performance monitoring, and more. A new generation of analytics tools uses AI and machine learning to extract actionable insights from that data and to provide predictive analytics that spot small issues before they become big problems.
Until now, network operations teams have had to rely on probes and taps to gather this data, which means either building a second network to monitor the first or limiting the data sample. DPU-enabled switches from HPE Aruba Networking collect and export standards-based IPFIX flow records and extend telemetry to include syslog from the stateful firewalls running on the DPU. The DPU can export these logs to third-party security tools, including SIEM and XDR systems, helping reduce blind spots and enabling network operators to respond to issues faster and more effectively.
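As a simple illustration of what downstream tools can do with this telemetry, the sketch below counts repeated denied east-west connection attempts per source as a crude lateral-movement signal. The pre-parsed record format is an illustrative stand-in, not the actual IPFIX or syslog export schema.

```python
# Toy analytics pass over pre-parsed flow/firewall records: flag sources with
# repeated denied east-west attempts. Record layout is illustrative only.

from collections import Counter

FLOW_RECORDS = [
    {"src": "10.1.1.10", "dst": "10.1.3.10", "dport": 5432, "action": "deny"},
    {"src": "10.1.1.10", "dst": "10.1.3.11", "dport": 5432, "action": "deny"},
    {"src": "10.1.2.10", "dst": "10.1.3.10", "dport": 5432, "action": "permit"},
]

def lateral_movement_suspects(records, threshold=2):
    """Return sources with at least `threshold` denied connection attempts."""
    denies = Counter(r["src"] for r in records if r["action"] == "deny")
    return {src: count for src, count in denies.items() if count >= threshold}

print(lateral_movement_suspects(FLOW_RECORDS))  # {'10.1.1.10': 2}
```

In practice this kind of correlation would happen in a SIEM or XDR platform; the value of the DPU-enabled switch is that it supplies the flow records and firewall logs without a parallel tap network.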
5. Incorporate edge, colocation, and IaaS
Distributing services directly on a DPU-enabled switch extends network, security, and telemetry capabilities beyond the data center to locations like colocation facilities, factories, branch offices, and public cloud edges. The HPE Aruba Networking CX 10000 can dramatically simplify a private-to-private 400G site-to-site IPsec handoff to Microsoft Azure or AWS, or across on-premises and globally adjacent colocated hybrid cloud services such as HPE GreenLake.
Designs that combine colocation and infrastructure as a service (IaaS) offer additional benefits, including low-latency, high-bandwidth connections to major cloud providers, improved transaction speed, and data sovereignty. These integrated solutions also help reduce costs by limiting upfront CapEx, letting you pay only for what you use, and avoiding public cloud egress charges.
The next generation of distributed services architecture supports applications in a variety of locations where critical data needs to be collected, processed, inspected, or passed along to the public cloud.
Accelerate your data center
High-performance sports cars are out of reach for many of us. A data center built for speed, with a distributed services architecture enabled by industry-first DPU-enabled switches, is not. Today it's possible to transform your data center to meet workload needs without rebuilding it from the ground up. A next-gen solution extends Zero Trust deep into the data center, leverages network and security AIOps, and brings critical network and security capabilities to edge locations.
All to say: you might not get that Ferrari, but with a distributed services architecture, you can have a data center that runs like one.
Discover how VectorUSA and HPE Aruba are revolutionizing data centers together. Explore our comprehensive brochure, "Five Principles for Modern Data Centers."