VyOS VPP Dataplane
VyOS VPP Dataplane is an optional userspace dataplane designed for packet-rate-intensive workloads where Linux kernel forwarding becomes the bottleneck. By integrating Vector Packet Processing (VPP), VyOS delivers significantly higher throughput, predictable latency, and efficient multi-core scaling while preserving the operational model network teams already know. VPP can be enabled per interface and adopted incrementally, allowing you to accelerate critical traffic paths without redesigning your entire network.
VyOS Supports Two Dataplanes Side by Side
- The traditional Linux kernel dataplane
- The high-performance VPP userspace dataplane


Traffic Flow Paths
Choose your interface placement based on performance vs feature needs.
Green Path (VPP → VPP)
Traffic between two VPP interfaces is processed entirely inside VPP for maximum throughput and lowest latency. Only features available in the VPP dataplane apply.
Blue Path (VPP ↔ Kernel)
Traffic between a VPP interface and a kernel interface traverses both dataplanes. This allows the combined use of VPP and kernel features.
Red Path (Kernel → Kernel)
Traffic between two kernel interfaces is handled entirely by the Linux kernel dataplane. This is traditional VyOS operation and does not benefit from VPP acceleration.
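As an illustrative sketch only (the exact VPP configuration syntax varies between VyOS releases; consult the documentation for your version), a router with two VPP interfaces and one kernel interface would exercise all three paths:

```
# Hypothetical configuration sketch; the "vpp" command path is illustrative.
# eth0 and eth1 are explicitly assigned to the VPP dataplane:
set vpp settings interface eth0
set vpp settings interface eth1

# eth2 is not assigned, so it remains on the kernel dataplane:
set interfaces ethernet eth2 address 192.0.2.1/24

# Resulting forwarding paths:
#   eth0 <-> eth1   Green path: processed entirely inside VPP
#   eth0 <-> eth2   Blue path:  traverses both dataplanes
#   eth2 <-> eth2   Red path:   kernel-only forwarding
```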
What VPP Changes
VPP is a high-performance packet processing engine optimized for multi-core CPUs. Instead of processing packets one by one, it processes traffic in vectors, improving cache efficiency and throughput consistency under load.
In VyOS, VPP is not an all-or-nothing decision. Only interfaces explicitly assigned to VPP use the VPP forwarding path, while the rest of the system continues to operate on the kernel dataplane.
How VyOS Integrates VPP
VyOS VPP integration is designed to minimize operational disruption:
- Explicit per-interface VPP enablement
- Kernel dataplane features remain available and unchanged
- Hybrid operation allows phased adoption based on traffic patterns and feature needs
- Unified VyOS CLI, API, and automation workflows across dataplanes
This approach lets teams introduce VPP where it delivers the most value, without sacrificing flexibility or control.
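Because the CLI and API are unified across dataplanes, existing automation keeps working. As a minimal sketch, the documented VyOS HTTP API accepts a POST to `/configure` with form fields `data` (a JSON `{"op": ..., "path": [...]}` object) and `key` (the API key); the specific `vpp` configuration path below is an assumption and may differ between releases:

```python
import json

def build_configure_payload(op: str, path: list[str]) -> dict:
    """Build the form fields expected by the VyOS HTTP API /configure endpoint.

    The endpoint and payload shape are part of the documented VyOS HTTP API;
    the configuration path passed in by the caller may be release-specific.
    """
    return {
        "data": json.dumps({"op": op, "path": path}),
        "key": "MY-API-KEY",  # placeholder API key
    }

# Illustrative: assign eth0 to VPP via the API (command path is an assumption).
payload = build_configure_payload("set", ["vpp", "settings", "interface", "eth0"])
print(payload["data"])

# To actually apply it against a reachable router:
#   import requests
#   requests.post("https://router.example/configure", data=payload)
```

The same payload shape drives Ansible, Terraform, and PyVyOS under the hood, which is why VPP-enabled interfaces need no separate tooling.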
Key Benefits
Higher Packet-Rate Efficiency
Vectorized packet processing delivers higher packets-per-second throughput compared to kernel-based forwarding.
Multi-Core Scalability
VPP efficiently distributes workloads across CPU cores, maintaining performance under sustained load.
Userspace Dataplane for Demanding Workloads
By bypassing the kernel networking stack, VPP reduces overhead in forwarding paths where packet rate (pps) dominates.
DPDK and Hardware Acceleration
On supported platforms, VPP can leverage DPDK-enabled NICs, SR-IOV, hardware queues, and offload features. Actual results depend on NIC, driver, tuning, and deployment design.
Operational Continuity
Use the same VyOS CLI, APIs, and automation tools (Ansible, Terraform, PyVyOS) while benefiting from VPP acceleration.
Accelerate critical traffic paths without redesigning your network
When to Use VPP
VPP is an ideal choice for environments with:
- High packets-per-second bottlenecks
- Latency-sensitive applications requiring predictable performance
- Service provider, telco, and cloud edge designs where throughput per core matters
If your design relies heavily on cross-dataplane traffic (the Blue path), evaluate interface placement carefully to balance features and performance.
Example Architecture: High-Performance IPsec on AWS
A real-world deployment pattern illustrating how VPP is used in packet-rate-intensive cloud environments.
VPP-enabled VyOS gateway handling high packets-per-second IPsec termination and routed subnet forwarding inside an AWS VPC.
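A minimal sketch of this pattern, with illustrative addresses and peer names (the VPP assignment syntax is assumed, and the IPsec configuration is abbreviated; real deployments also need IKE/ESP group and authentication settings):

```
# Illustrative only: names, addresses, and the "vpp" command path are assumptions.
# Assign the VPC-facing NIC to VPP for high-pps forwarding:
set vpp settings interface eth0

# Terminate IPsec toward the remote site over the VPP-accelerated interface:
set vpn ipsec site-to-site peer BRANCH remote-address 203.0.113.10
set vpn ipsec site-to-site peer BRANCH tunnel 0 local prefix 10.0.0.0/16
set vpn ipsec site-to-site peer BRANCH tunnel 0 remote prefix 10.1.0.0/16
```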
Resources
Here are some resources to help you learn more about VyOS, keep up with its development, and participate in it.