Cisco SD-WAN (Viptela) Overview

Traditional WAN architectures built on MPLS circuits, router-by-router CLI configuration, and static policy-based routing have struggled to keep pace with the demands of modern enterprises: cloud-first applications, direct internet breakout, multiple transport types (MPLS, broadband, 4G/5G), and the need to change routing policy across hundreds of branch sites in minutes rather than weeks. Cisco SD-WAN, built on the Viptela technology acquired by Cisco in 2017, addresses all of these requirements through a clean separation of the management, control, and data planes — each managed by a dedicated component with a well-defined role. For a high-level overview see SD-WAN Overview and Controller-Based Networking. For traditional WAN background see WAN and WAN Technologies.

At its core, Cisco SD-WAN is an overlay WAN fabric: the physical transport (MPLS, internet, LTE) forms the underlay, and an encrypted IPsec tunnel mesh — the overlay — is built on top of it. All routing intelligence lives in the centralised controller (vSmart), all configuration and policy management lives in the orchestration platform (vManage), authentication and NAT traversal are handled by vBond, and all actual data forwarding is performed by the branch and data centre routers (vEdge or cEdge). The result is a WAN where every routing and forwarding decision is policy-driven, topology-aware, and centrally visible — with zero-touch provisioning (ZTP) allowing new branch sites to come online automatically without on-site engineering.

This lab covers the SD-WAN architecture in full — the role of each component, the control-plane and data-plane protocols (OMP, DTLS/TLS, BFD, IPsec), the onboarding process for a vEdge router, VPN segmentation, and how to build and apply an application-aware routing policy that steers voice traffic to the low-latency MPLS path and bulk data to the broadband path. For the underlying IPsec concepts used in the SD-WAN data plane, see Site-to-Site IPsec VPN and IPsec Basics. For VRF segmentation concepts that map to SD-WAN VPNs, see VRF-Lite Configuration. For policy-based routing concepts that SD-WAN application-aware routing extends, see Policy-Based Routing. For DMVPN — the predecessor overlay technology — see DMVPN Phase 1, 2 & 3. For NETCONF/YANG that underpins vManage's device template push, see NETCONF & RESTCONF Overview and JSON, XML & YANG.

1. SD-WAN Architecture — The Four Planes

  ┌─────────────────────────────────────────────────────────────────┐
  │                    MANAGEMENT PLANE                             │
  │  ┌──────────────────────────────────────────────────────────┐  │
  │  │  vManage  (NMS / Orchestration)                          │  │
  │  │  • Single pane of glass GUI and REST API                 │  │
  │  │  • Pushes device templates, feature templates, policies  │  │
  │  │  • Real-time monitoring, alarms, flow analytics          │  │
  │  │  • Certificate authority and device certificate mgmt     │  │
  │  │  • Communicates to controllers and vEdges via NETCONF    │  │
  │  └──────────────────────────────────────────────────────────┘  │
  └─────────────────────────────────────────────────────────────────┘
           │ NETCONF/HTTPS                    │ NETCONF
           ▼                                  ▼
  ┌──────────────────────┐        ┌──────────────────────────────┐
  │    CONTROL PLANE     │        │   ORCHESTRATION PLANE        │
  │  ┌────────────────┐  │        │  ┌────────────────────────┐  │
  │  │    vSmart      │  │        │  │        vBond           │  │
  │  │  (Controller)  │  │        │  │  (Orchestrator)        │  │
  │  │                │  │        │  │                        │  │
  │  │  • Runs OMP    │  │        │  │  • First point of      │  │
  │  │  • Receives    │  │        │  │    contact for vEdge   │  │
  │  │    TLOC routes │  │        │  │    at ZTP boot         │  │
  │  │    from vEdges │  │        │  │  • Authenticates       │  │
  │  │  • Computes    │  │        │  │    vEdge certificate   │  │
  │  │    best path   │  │        │  │  • Provides vSmart     │  │
  │  │  • Distributes │  │        │  │    and vManage IP      │  │
  │  │    routes &    │  │        │  │    addresses to vEdge  │  │
  │  │    policies    │  │        │  │  • Facilitates NAT     │  │
  │  │    to vEdges   │  │        │  │    traversal for vEdge │  │
  │  │  • DTLS/TLS    │  │        │  │    behind NAT          │  │
  │  │    to vEdges   │  │        │  └────────────────────────┘  │
  │  └────────────────┘  │        └──────────────────────────────┘
  └──────────────────────┘
           │ OMP (DTLS, UDP port 12346)
           │ Policy distribution
           ▼
  ┌─────────────────────────────────────────────────────────────────┐
  │                      DATA PLANE                                 │
  │  ┌──────────────┐   BFD+IPsec   ┌──────────────┐              │
  │  │  vEdge/cEdge │◄─────────────►│  vEdge/cEdge │  (per site)  │
  │  │  Branch A    │               │  Branch B    │              │
  │  │              │  Data tunnels │              │              │
  │  │  • Forwards  │  (UDP 12346)  │  • Forwards  │              │
  │  │    packets   │               │    packets   │              │
  │  │  • IPsec     │               │  • IPsec     │              │
  │  │    encrypt   │               │    encrypt   │              │
  │  │  • BFD path  │               │  • BFD path  │              │
  │  │    monitoring│               │    monitoring│              │
  │  │  • App-aware │               │  • App-aware │              │
  │  │    routing   │               │    routing   │              │
  └──┴──────────────┴───────────────┴──────────────┴──────────────┘
  

The Four SD-WAN Components

vManage — Management plane
  Role:        Centralised configuration, monitoring, policy management, and orchestration GUI. The single pane of glass for the entire SD-WAN fabric. All configuration changes originate here and are pushed to vSmart and vEdge devices via NETCONF. Provides a REST API for automation. Hosts the certificate authority for device authentication.
  Protocols:   NETCONF (to vSmart and vEdge — see NETCONF & RESTCONF Overview), HTTPS (GUI and REST API), SNMP (southbound monitoring — see SNMP v2c/v3 Configuration).
  Deployment:  On-premises VM (ESXi, KVM) or Cisco cloud-hosted. Typically deployed as a cluster of three vManage nodes for HA.

vSmart — Control plane
  Role:        The SD-WAN routing controller. Runs OMP (Overlay Management Protocol) with all vEdge routers. Receives TLOC route advertisements from vEdges, computes optimal paths, and distributes routes and policies back to vEdges. Does NOT forward any data-plane traffic — it is a pure control-plane device.
  Protocols:   OMP over DTLS/TLS (port 12346 to vEdges), NETCONF (from vManage).
  Deployment:  On-premises VM or cloud. Multiple vSmart controllers provide redundancy — each vEdge connects to all vSmarts simultaneously.

vBond — Orchestration plane
  Role:        The initial contact point for a new vEdge device. When a vEdge boots for the first time, it contacts vBond using a pre-configured or DHCP-provided IP address. vBond authenticates the vEdge's certificate, then provides the IP addresses of all vSmart controllers and vManage — enabling the vEdge to establish its OMP and management connections. vBond also assists with NAT traversal (hole punching) so vEdges behind NAT can build data-plane tunnels to each other.
  Protocols:   DTLS (port 12346), STUN-like NAT traversal.
  Deployment:  Must have a public IP address reachable from all vEdge sites. Often deployed in the cloud or DMZ. A single vBond can serve the entire fabric.

vEdge / cEdge — Data plane
  Role:        The WAN edge router deployed at branch sites, data centres, or colocation facilities. Forwards all user data traffic, maintains IPsec-encrypted tunnels to all remote vEdge routers, runs BFD on every tunnel to measure latency/jitter/loss, and enforces application-aware routing and QoS policies received from vSmart. vEdge = Viptela-specific hardware/software platform; cEdge = Cisco IOS-XE router (ISR 4K, ASR 1K, CSR 1000v) running SD-WAN software.
  Protocols:   OMP (to vSmart), IPsec (data-plane tunnels — see Site-to-Site IPsec VPN), BFD (tunnel health), OSPF/BGP/EIGRP (LAN-side service-side routing — see OSPF and BGP).
  Deployment:  Physical appliance or VM at every WAN edge location.

2. Key SD-WAN Concepts

Underlay vs Overlay

Underlay
  What it is:  The physical WAN transport network — MPLS, broadband internet, 4G/5G LTE, ADSL. The underlay provides IP connectivity between vEdge routers' WAN interfaces. The SD-WAN fabric does not control or configure the underlay — it simply uses whatever IP connectivity the ISP provides.
  Managed by:  ISP / carrier (MPLS), internet provider (broadband); no management needed for LTE.
  Protocols:   BGP (MPLS PE-CE), DHCP/PPPoE (broadband), LTE radio protocols.

Overlay
  What it is:  The SD-WAN fabric built on top of the underlay — IPsec-encrypted tunnels between every pair of vEdge routers across every available transport. The overlay is fully managed by the SD-WAN fabric and is transparent to the underlay. All branch-to-branch and branch-to-DC traffic flows through the overlay tunnels.
  Managed by:  vManage (policy), vSmart (routing), vEdge (forwarding).
  Protocols:   OMP (routing), IPsec (encryption), BFD (path monitoring).

TLOC — Transport Locator

A TLOC (Transport Locator) is the SD-WAN equivalent of a next-hop address — it uniquely identifies an endpoint of a WAN tunnel on a vEdge router. A TLOC is defined by three values: the vEdge's system IP address (a unique loopback-like identifier), the colour (transport type label such as mpls, biz-internet, lte, public-internet), and the encapsulation (IPsec or GRE). Each WAN interface on a vEdge router has one TLOC. A dual-homed branch with both MPLS and broadband internet has two TLOCs — one for each transport. TLOCs are advertised to vSmart via OMP, and vSmart distributes them to all other vEdge routers so they can build data-plane tunnels directly between TLOCs.
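The three-value TLOC identity can be modelled as a simple data structure — a hypothetical Python sketch (the `Tloc` type and `tlocs_for_vedge` helper are illustrative, not Viptela code):

```python
from typing import NamedTuple

class Tloc(NamedTuple):
    """A TLOC is identified by exactly three values: system-ip, colour, encap."""
    system_ip: str   # the vEdge's fabric-wide identifier, e.g. "2.2.2.2"
    colour: str      # transport label: "mpls", "biz-internet", "lte", ...
    encap: str       # "ipsec" or "gre"

def tlocs_for_vedge(system_ip: str, wan_interfaces: dict) -> list:
    """One TLOC per WAN interface: a dual-homed branch advertises two TLOCs."""
    return [Tloc(system_ip, colour, "ipsec") for colour in wan_interfaces.values()]

# Branch vEdge with MPLS on ge0/0 and broadband on ge0/1 → two TLOCs
branch_tlocs = tlocs_for_vedge("2.2.2.2", {"ge0/0": "mpls", "ge0/1": "biz-internet"})
```

Each entry in `branch_tlocs` is what OMP advertises to vSmart so that remote sites can build a tunnel to that specific transport endpoint.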

OMP — Overlay Management Protocol

OMP is the SD-WAN control-plane routing protocol — it runs between vEdge routers and vSmart controllers over DTLS- or TLS-secured sessions (DTLS on UDP port 12346 is the default). OMP is conceptually similar to BGP: it is a path-vector protocol that carries routes (called OMP routes), TLOC advertisements, and policy information between controllers and edge routers. vEdge routers do not exchange OMP directly with each other — all OMP routing information flows through vSmart, which acts as the route reflector. When vEdge-A learns a route to a site-B prefix, it advertises that route to vSmart via OMP. vSmart evaluates all policies, selects the best TLOCs for the route, and redistributes the route with TLOC information to all other vEdge routers.
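The hub-and-spoke route distribution described above can be sketched in a few lines — a hypothetical illustration of vSmart's route-reflector role (`reflect_route` is an invented name, not part of any Cisco API):

```python
# vSmart reflects routes learned from one vEdge to every other vEdge;
# vEdges never exchange OMP with each other directly.
def reflect_route(advertising_vedge: str, route: dict, all_vedges: list) -> dict:
    """Return {vEdge system-ip: route} for every peer except the originator."""
    return {peer: route for peer in all_vedges if peer != advertising_vedge}

omp_route = {"prefix": "10.1.0.0/24",
             "tlocs": [("1.1.1.1", "mpls", "ipsec"),
                       ("1.1.1.1", "biz-internet", "ipsec")]}

# vEdge 1.1.1.1 advertises the prefix; vSmart redistributes to the others only
updates = reflect_route("1.1.1.1", omp_route, ["1.1.1.1", "2.2.2.2", "3.3.3.3"])
```

In the real protocol, vSmart also applies centralised policy before redistributing — it may filter routes or rewrite the TLOC list per receiver, which this sketch omits.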

VPN Segmentation in SD-WAN

SD-WAN uses VPN numbers (0–65530) to segment traffic — conceptually equivalent to VRFs. There are two reserved system VPNs (0 and 512); every other VPN number is available as a service VPN:

VPN 0 — Transport VPN
  Contains all WAN-facing interfaces (MPLS, internet, LTE). All control-plane connections (OMP to vSmart, DTLS to vBond, NETCONF to vManage) run in VPN 0. IPsec data-plane tunnels are also established in VPN 0. VPN 0 is the underlay-facing VPN — it has routes to reach the internet and the MPLS network.

VPN 512 — Management VPN
  Out-of-band management access to the vEdge router itself — SSH, SNMP, syslog. Separate from data traffic. In most deployments, this connects to an out-of-band management network.

VPN 1–511, 513–65530 — Service VPNs
  Carry actual user/application traffic. LAN-facing interfaces are assigned to service VPNs. VPN 1 typically carries corporate data traffic; VPN 2 might carry voice; VPN 3 might carry guest/internet traffic. Each service VPN is logically isolated — traffic cannot cross VPN boundaries without explicit policy.
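The isolation rule can be illustrated with a small sketch: the forwarding lookup is effectively keyed by (VPN number, prefix), so identical prefixes can coexist in different service VPNs (hypothetical Python, illustrative next-hop strings):

```python
# Per-VPN routing table, keyed by (VPN number, prefix) — VRF-like isolation.
routes = {
    (1, "10.20.0.0/24"): "corporate-LAN next-hop",
    (3, "10.20.0.0/24"): "guest-LAN next-hop",      # same prefix, different VPN
    (0, "0.0.0.0/0"):    "underlay default route",  # transport VPN
}

def lookup(vpn: int, prefix: str):
    """Traffic in one VPN can never match a route installed in another VPN."""
    return routes.get((vpn, prefix))  # no cross-VPN fallback without policy
```

A lookup for the same prefix returns a different answer per VPN, and a VPN with no matching route simply has no reachability — exactly the behaviour that makes explicit route-leaking policy necessary for inter-VPN traffic.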

BFD — Bidirectional Forwarding Detection in SD-WAN

Every IPsec data-plane tunnel between vEdge routers runs BFD to continuously measure the health and performance of each transport path. BFD probes are sent at configurable intervals (default 1 second) and measure round-trip latency, jitter, and packet loss for every tunnel. These per-tunnel BFD metrics are the input to application-aware routing — the policy engine uses them to decide which transport path to use for each application class. If an MPLS path's latency spikes above a threshold, application-aware routing can automatically steer voice calls to the broadband path. BFD also detects tunnel failures within seconds, enabling fast failover. See IP SLA with Tracking for the traditional IOS equivalent of path monitoring, and MPLS for the underlying MPLS transport concepts.
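A rough sketch of how the per-tunnel SLA inputs could be derived from a window of BFD probe results (the numbers and the `tunnel_metrics` helper are illustrative — a real vEdge uses configurable poll intervals and app-route multipliers):

```python
# Derive latency / jitter / loss for one tunnel from a window of probe RTTs.
# A sample of None represents a probe that received no reply (lost).
def tunnel_metrics(rtt_samples_ms: list) -> dict:
    replies = [r for r in rtt_samples_ms if r is not None]
    loss_pct = 100 * (len(rtt_samples_ms) - len(replies)) / len(rtt_samples_ms)
    latency = sum(replies) / len(replies)                  # mean RTT
    jitter = (sum(abs(a - b) for a, b in zip(replies, replies[1:]))
              / max(len(replies) - 1, 1))                  # mean inter-probe delta
    return {"latency_ms": latency, "jitter_ms": jitter, "loss_pct": loss_pct}

# Ten probes on the MPLS tunnel, one lost
mpls = tunnel_metrics([8, 9, 8, 8, None, 9, 8, 8, 9, 8])
```

These three numbers, refreshed continuously per tunnel, are exactly what the application-aware routing policy engine consumes.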

Application-Aware Routing (AAR)

Application-aware routing is the SD-WAN policy feature that steers specific application traffic to the best available WAN transport based on real-time path quality (BFD metrics) rather than static routing decisions. An AAR policy defines: an application match (using NBAR2 Deep Packet Inspection to classify traffic — voice, video, Salesforce, Office 365, etc.), a set of SLA thresholds (maximum acceptable latency, jitter, and loss for that application), and a preferred transport order (try MPLS first; fall back to broadband if MPLS does not meet SLA). The policy is configured in vManage, pushed to vSmart, and vSmart distributes it to all relevant vEdge routers. Each vEdge enforces the policy locally using real-time BFD data for its own tunnel paths. For DSCP marking concepts used in AAR traffic matching see QoS Marking and DSCP Marking & Classification. For the traditional PBR equivalent see Policy-Based Routing.
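The decision logic can be sketched as follows — a hypothetical illustration of SLA-based colour selection (thresholds and metric values are made up for the example, not Cisco defaults):

```python
# Walk the preferred-colour list and pick the first tunnel meeting the SLA.
def select_colour(app_sla: dict, preference: list, tunnels: dict):
    for colour in preference:
        m = tunnels.get(colour)
        if m and (m["latency_ms"] <= app_sla["latency_ms"]
                  and m["jitter_ms"] <= app_sla["jitter_ms"]
                  and m["loss_pct"] <= app_sla["loss_pct"]):
            return colour
    return preference[-1]  # no path meets SLA: fall back to last preference

tunnels = {"mpls":         {"latency_ms": 8,  "jitter_ms": 2, "loss_pct": 0},
           "biz-internet": {"latency_ms": 35, "jitter_ms": 9, "loss_pct": 0.5}}

voice_sla = {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1}
voice_path = select_colour(voice_sla, ["mpls", "biz-internet"], tunnels)
```

If the MPLS tunnel's measured latency later spikes past the voice SLA, the same function would return `biz-internet` on the next evaluation — the automatic steering behaviour described above.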

3. Control Plane and Data Plane Separation

Control Plane — OMP and DTLS/TLS

  OMP Route Advertisement Flow:
  ─────────────────────────────────────────────────────────────

  Branch-A vEdge                vSmart               Branch-B vEdge
      │                            │                       │
      │─── OMP Update ────────────►│                       │
      │    Advertise:              │                       │
      │    • Prefix: 10.1.0.0/24  │                       │
      │    • TLOC: [1.1.1.1,      │                       │
      │             mpls, ipsec]  │                       │
      │    • TLOC: [1.1.1.1,      │                       │
      │             biz-internet, │                       │
      │             ipsec]        │                       │
      │                           │─── OMP Update ───────►│
      │                           │    Redistribute:       │
      │                           │    • Prefix: 10.1.0.0/24
      │                           │    • TLOC list:        │
      │                           │      [1.1.1.1,mpls]   │
      │                           │      [1.1.1.1,inet]   │
      │                           │    • Policy applied    │
      │                           │                        │
      │◄── OMP Update ────────────│                        │
      │    Redistribute:          │                        │
      │    • Prefix: 10.2.0.0/24 │                        │
      │    • TLOC list:           │                        │
      │      [2.2.2.2, mpls]     │                        │
      │      [2.2.2.2, inet]     │                        │

  OMP Route Types:
  ┌─────────────────┬────────────────────────────────────────────┐
  │  OMP Route Type │  Contents                                  │
  ├─────────────────┼────────────────────────────────────────────┤
  │  vRoute         │  IP prefix + TLOC list — the core routing  │
  │                 │  entry. "Prefix X is reachable at TLOC Y"  │
  ├─────────────────┼────────────────────────────────────────────┤
  │  TLOC Route     │  TLOC endpoint advertisement — "I have a   │
  │                 │  WAN interface with this system-IP, colour,│
  │                 │  encap, and public/private IP address"     │
  ├─────────────────┼────────────────────────────────────────────┤
  │  Service Route  │  Advertisement of network services         │
  │                 │  (firewall, load balancer) reachable via   │
  │                 │  this vEdge                                │
  └─────────────────┴────────────────────────────────────────────┘

  Control connections secured by:
  • DTLS (Datagram TLS) — default, UDP-based, port 12346
  • TLS — optional, TCP-based (default port 23456)
  • All controllers and vEdge devices use X.509 certificates
    signed by vManage's internal CA for mutual authentication
  

Data Plane — IPsec Tunnels and BFD

  Data Plane Tunnel Mesh (Branch-A with 2 transports to Branch-B with 2 transports):

  Branch-A vEdge                                  Branch-B vEdge
  WAN1 (MPLS)   ──── IPsec tunnel ──────────────► WAN1 (MPLS)
  WAN1 (MPLS)   ──── IPsec tunnel ──────────────► WAN2 (Internet)
  WAN2 (Internet) ── IPsec tunnel ──────────────► WAN1 (MPLS)
  WAN2 (Internet) ── IPsec tunnel ──────────────► WAN2 (Internet)

  4 tunnels total between 2 dual-homed sites.
  Scales as: (transports-A × transports-B) tunnels per site pair.

  Data Plane Key Facts:
  ─────────────────────────────────────────────────────────────
  ● Tunnels are established DIRECTLY between vEdge routers —
    vSmart provides the TLOC addresses but does NOT forward data
  ● All data-plane traffic is encrypted with AES-256-GCM IPsec
  ● Tunnel keys are negotiated and rotated automatically —
    no manual IKE or pre-shared key configuration needed
  ● Each tunnel runs BFD to measure latency, jitter, packet loss
    every second (default 1000ms hello interval)
  ● vEdge uses BFD data + AAR policy to select which tunnel
    to use for each application flow — forwarding table updated
    in real time as path quality changes
  ● UDP port 12346 is used for both IPsec data encapsulation
    and BFD probes

  BFD Probe Packet Path:
  vEdge-A ──[BFD probe]──► vEdge-B (RTT measured)
  vEdge-A ◄─[BFD reply]── vEdge-B
  Results: latency=8ms, jitter=2ms, loss=0%
  → Store in per-tunnel SLA database
  → AAR policy evaluates: voice requires latency ≤20ms → MPLS qualifies
  → Voice flows use MPLS tunnel
  
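The scaling rule from the tunnel-mesh figure can be checked with a quick calculation (illustrative sketch):

```python
from itertools import combinations

# Tunnels per site pair = transports-A × transports-B; a full mesh builds
# that for every pair of sites in the fabric.
def fabric_tunnel_count(site_transports: dict) -> int:
    return sum(site_transports[a] * site_transports[b]
               for a, b in combinations(site_transports, 2))

# Two dual-homed sites → 4 tunnels, matching the diagram
two_sites = fabric_tunnel_count({"branch-A": 2, "branch-B": 2})

# Ten dual-homed sites → 45 site pairs × 4 tunnels = 180 tunnels
ten_sites = fabric_tunnel_count({f"site-{i}": 2 for i in range(10)})
```

This quadratic growth is why large deployments often use centralised policy to restrict the mesh (e.g. hub-and-spoke per region) rather than allowing full any-to-any tunnels.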

4. Lab Topology

  ┌─────────────────────────────────────────────────────────────────┐
  │                   SD-WAN Controllers (Cloud/DC)                 │
  │  vManage: 100.64.0.10   vSmart: 100.64.0.20   vBond: 100.64.0.30│
  └─────────────────────────────────────────────────────────────────┘
            │ NETCONF/HTTPS          │ OMP/DTLS          │ DTLS
            │                        │                   │
  ┌─────────────────────┐    ┌────────────────────────────────────┐
  │   HQ Data Centre    │    │           Branch Site              │
  │   vEdge-DC          │    │           vEdge-BR1                │
  │   System IP: 1.1.1.1│    │           System IP: 2.2.2.2       │
  │                     │    │                                    │
  │   WAN Interfaces:   │    │   WAN Interfaces:                  │
  │   Gi0/0: MPLS       │◄──►│   Gi0/0: MPLS (10.0.1.2/30)        │
  │   (10.0.1.1/30)     │    │   colour: mpls                     │
  │   colour: mpls      │    │                                    │
  │                     │◄──►│   Gi0/1: Internet (203.0.113.2/30) │
  │   Gi0/1: Internet   │    │   colour: biz-internet             │
  │   (198.51.100.1/30) │    │                                    │
  │   colour:           │    │   LAN Interface:                   │
  │     biz-internet    │    │   Gi0/2: 10.20.0.1/24 (VPN 1)      │
  │                     │    │   PC-BR1: 10.20.0.10               │
  │   LAN Interface:    │    └────────────────────────────────────┘
  │  Gi0/2: 10.10.0.1/24│
  │   (VPN 1)           │
  │   Server: 10.10.0.50│
  └─────────────────────┘

  Lab Goals:
  1. Review controller bootstrap and vEdge onboarding process
  2. Verify OMP neighbour and route table on vEdge-BR1
  3. Verify BFD tunnel status
  4. Configure an application-aware routing policy:
     Voice (DSCP EF) → prefer MPLS (latency ≤150ms, loss ≤1%)
     Data             → prefer Internet, fall back to MPLS
  

5. Step 1 — Controller Bootstrap and vBond Configuration

In a production SD-WAN deployment, the controllers (vManage, vSmart, vBond) are typically deployed as cloud-hosted instances by Cisco or as on-premises VMs. This section describes the bootstrap configuration applied to each controller and the key parameters they need to know about each other before any vEdge can onboard. In a lab environment using Cisco's SD-WAN DevNet sandbox or CML (Cisco Modeling Labs), these controllers are pre-deployed — the steps below reflect the initial configuration applied on first boot.

vBond Bootstrap Configuration

! ═══════════════════════════════════════════════════════════
! vBond is the orchestrator — it must have a PUBLIC IP and
! must be reachable by all vEdge devices at onboarding time.
! vBond configuration uses the same IOS-XE CLI as cEdge, or
! the viptela-os CLI for dedicated vBond appliances.
! ═══════════════════════════════════════════════════════════

vbond# config terminal

! ── System-level identity ─────────────────────────────────
vbond(config)# system
vbond(config-system)#  system-ip 100.64.0.30
vbond(config-system)#  site-id 100
vbond(config-system)#  organization-name "NetsTuts-Lab"
vbond(config-system)#  vbond 100.64.0.30 local   ← vBond declares itself
vbond(config-system)# exit

! ── VPN 0 — transport VPN (WAN-facing) ───────────────────
vbond(config)# vpn 0
vbond(config-vpn)#  interface eth0
vbond(config-interface)#   ip address 100.64.0.30/24
vbond(config-interface)#   tunnel-interface
vbond(config-tunnel)#      encapsulation ipsec
vbond(config-tunnel)#      allow-service all      ← permit OMP, DTLS, HTTPS
vbond(config-interface)#   no shutdown
vbond(config-interface)#  exit
vbond(config-vpn)#  ip route 0.0.0.0/0 100.64.0.1   ← default route to internet
vbond(config-vpn)# exit

! ── VPN 512 — management VPN ─────────────────────────────
vbond(config)# vpn 512
vbond(config-vpn)#  interface eth1
vbond(config-interface)#   ip address 192.168.100.30/24
vbond(config-interface)#   no shutdown
vbond(config-interface)#  exit
vbond(config-vpn)# exit

vbond(config)# commit
  
The three critical parameters in system configuration are: system-ip (the vEdge/controller's unique identifier in the fabric — like a loopback address, never routed), site-id (identifies which physical site this device belongs to — all devices at the same site share a site-id), and organization-name (must match exactly across all controllers and vEdge devices — used during certificate validation). The vbond 100.64.0.30 local command tells this device it is the vBond orchestrator. On vEdge devices, vbond [IP-address] without local points to the vBond address. VPN 512 is the out-of-band management VPN — for SSH access configuration see SSH Configuration, and for centralised logging see Syslog Configuration.
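The consistency rules in this paragraph can be expressed as a small validation sketch (the `validate_fabric` helper is hypothetical — in practice the organisation-name match is enforced during certificate validation):

```python
# Check the fabric-wide identity rules: organization-name must match exactly
# on every device, and every system-ip must be unique across the fabric.
def validate_fabric(devices: list) -> list:
    problems = []
    org_names = {d["organization_name"] for d in devices}
    if len(org_names) > 1:
        problems.append(f"organization-name mismatch: {sorted(org_names)}")
    system_ips = [d["system_ip"] for d in devices]
    if len(system_ips) != len(set(system_ips)):
        problems.append("duplicate system-ip in fabric")
    return problems

fabric = [
    {"host": "vbond",     "system_ip": "100.64.0.30", "site_id": 100, "organization_name": "NetsTuts-Lab"},
    {"host": "vsmart",    "system_ip": "100.64.0.20", "site_id": 100, "organization_name": "NetsTuts-Lab"},
    {"host": "vEdge-BR1", "system_ip": "2.2.2.2",     "site_id": 200, "organization_name": "NetsTuts-Lab"},
]
issues = validate_fabric(fabric)   # empty list — this lab's values are consistent
```

Note how the controllers share site-id 100 (the controller site) while the branch uses 200 — devices at the same physical site share a site-id, but system-ips are always unique.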

vSmart Bootstrap Configuration

vsmart# config terminal

vsmart(config)# system
vsmart(config-system)#  system-ip 100.64.0.20
vsmart(config-system)#  site-id 100
vsmart(config-system)#  organization-name "NetsTuts-Lab"
vsmart(config-system)#  vbond 100.64.0.30     ← points to vBond for initial contact
vsmart(config-system)# exit

vsmart(config)# vpn 0
vsmart(config-vpn)#  interface eth0
vsmart(config-interface)#   ip address 100.64.0.20/24
vsmart(config-interface)#   tunnel-interface
vsmart(config-tunnel)#      encapsulation ipsec
vsmart(config-tunnel)#      allow-service netconf
vsmart(config-tunnel)#      allow-service stun        ← required for vBond NAT traversal
vsmart(config-interface)#   no shutdown
vsmart(config-interface)#  exit
vsmart(config-vpn)#  ip route 0.0.0.0/0 100.64.0.1
vsmart(config-vpn)# exit

vsmart(config)# commit
  

Add Controllers to vManage

! ── These steps are performed in the vManage GUI ──────────
! ── (GUI path: Configuration → Devices → Controllers) ─────

! Step 1: Add vBond
!   Click "Add Controller" → vBond
!   IP Address: 100.64.0.30
!   Username: admin / Password: [vBond password]
!   Generate CSR → Install Certificate

! Step 2: Add vSmart
!   Click "Add Controller" → vSmart
!   IP Address: 100.64.0.20
!   Username: admin / Password: [vSmart password]
!   Generate CSR → Install Certificate

! ── After adding controllers, verify in vManage ───────────
! ── Monitor → Network → Controllers
! ── vBond:  ● Connected
! ── vSmart: ● Connected

! ── Equivalent CLI check on vSmart ───────────────────────
vsmart# show control connections

                                          PEER                          PEER
PEER    PEER PEER            SITE       DOMAIN PEER                     PRIVATE  PEER
TYPE    PROT SYSTEM IP       ID         ID     STATE    UPTIME          IP       PORT
-----------------------------------------------------------------------
vmanage dtls 100.64.0.10    100        0      up       0:03:42:17      100.64.0.10  12346
  

6. Step 2 — vEdge Router Onboarding (Zero-Touch Provisioning)

SD-WAN's Zero-Touch Provisioning (ZTP) allows a new vEdge router to come online and join the fabric automatically — without any on-site engineer logging in to configure it. The router arrives with a factory-default configuration, is connected to a WAN link, gets a DHCP address, contacts a Cisco cloud ZTP server that redirects it to the organisation's vBond, authenticates, and downloads its full device template from vManage. The entire process takes 5–15 minutes after the router powers on. Understanding the ZTP flow is essential for both exam and production deployments.

ZTP Onboarding Flow

  New vEdge-BR1 boots with factory-default config
          │
          ▼ Step 1: Get WAN IP via DHCP
  vEdge-BR1 gets 203.0.113.2/30 on Gi0/1 (Internet link)
  Default gateway: 203.0.113.1
          │
          ▼ Step 2: Contact Cisco ZTP server (ztp.viptela.com)
  vEdge sends HTTPS request with its chassis serial number
  Cisco ZTP server looks up serial → returns:
    • Organisation name: "NetsTuts-Lab"
    • vBond IP: 100.64.0.30
          │
          ▼ Step 3: Contact vBond
  vEdge connects to vBond 100.64.0.30 via DTLS (port 12346)
  vBond validates vEdge's certificate (signed by Cisco root CA)
  vBond returns:
    • vSmart IP addresses: [100.64.0.20]
    • vManage IP address:  100.64.0.10
    • NAT traversal info (public IP/port mappings)
          │
          ▼ Step 4: Establish OMP session with vSmart
  vEdge connects to vSmart 100.64.0.20 via DTLS
  OMP session established → vEdge receives routing info
          │
          ▼ Step 5: Establish NETCONF session with vManage
  vEdge connects to vManage 100.64.0.10 via NETCONF
  vManage identifies vEdge by serial number → finds assigned template
  vManage pushes full device template configuration to vEdge
          │
          ▼ Step 6: vEdge fully operational
  All WAN interfaces up → MPLS and Internet TLOCs advertised via OMP
  IPsec tunnels built to vEdge-DC
  BFD probes running on all tunnels
  LAN interfaces up → VPN 1 prefixes advertised via OMP
  Site reachable from all other SD-WAN sites
  
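The six steps above form a strict sequence — each depends on the previous one completing. A minimal sketch of that progression (hypothetical helper, not Viptela code):

```python
# ZTP onboarding states, in the order they must complete.
ZTP_STEPS = [
    "dhcp-address",         # Step 1: WAN IP via DHCP
    "ztp-redirect",         # Step 2: Cisco ZTP server returns org + vBond IP
    "vbond-authenticated",  # Step 3: certificate validated, controller list given
    "omp-up",               # Step 4: DTLS + OMP session to vSmart
    "template-pushed",      # Step 5: NETCONF session, vManage pushes template
    "operational",          # Step 6: TLOCs advertised, tunnels + BFD running
]

def next_state(current: str) -> str:
    """Advance one step; 'operational' is terminal."""
    i = ZTP_STEPS.index(current)
    return ZTP_STEPS[min(i + 1, len(ZTP_STEPS) - 1)]
```

Troubleshooting a stuck onboarding is a matter of finding which transition failed: no DHCP lease, no reach to the ZTP server, certificate rejection at vBond, blocked DTLS to vSmart, or a missing template in vManage.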

Pre-Staging a vEdge in vManage (Before Physical Deployment)

! ── These steps are performed in vManage GUI BEFORE ────────
! ── the physical router arrives at the branch ───────────────

! ── Step 1: Upload vEdge serial number to vManage ──────────
! ── Configuration → Devices → WAN Edge List
! ── Upload CSV or add manually:
! ──   Chassis Number: [from router label or Cisco portal]
! ──   Certificate Serial: [from Cisco PKI portal]

! ── Step 2: Create Feature Templates ──────────────────────
! ── Configuration → Templates → Feature Templates
! ──
! ── Create: System Template
! ──   Organization Name: NetsTuts-Lab
! ──   Site ID: 200                (Branch site = 200)
! ──   System IP: 2.2.2.2          (unique per device, use variable)
! ──   vBond: 100.64.0.30
! ──
! ── Create: VPN 0 Template (Transport VPN)
! ──   VPN: 0
! ──   Interface Gi0/0:
! ──     IP: DHCP or static (use variable for static)
! ──     Tunnel colour: mpls
! ──     Encap: ipsec
! ──   Interface Gi0/1:
! ──     IP: DHCP
! ──     Tunnel colour: biz-internet
! ──     Encap: ipsec
! ──   Default route: 0.0.0.0/0 (via each interface gateway)
! ──
! ── Create: VPN 1 Template (Service/LAN VPN)
! ──   VPN: 1
! ──   Interface Gi0/2:
! ──     IP: 10.20.0.1/24 (use variable)
! ──     No tunnel-interface (LAN — not a WAN tunnel endpoint)
! ──   OSPF or static routes for LAN subnets

! ── Step 3: Create Device Template ────────────────────────
! ── Configuration → Templates → Device Templates
! ── Attach feature templates:
! ──   System Template:   [System-Branch]
! ──   VPN 0 Template:    [VPN0-Branch-DualWAN]
! ──   VPN 1 Template:    [VPN1-Branch-LAN]

! ── Step 4: Attach Device Template to vEdge-BR1 ───────────
! ── Select template → Attach Devices → Select vEdge-BR1
! ── Enter per-device variables:
! ──   System IP: 2.2.2.2
! ──   Site ID: 200
! ──   Gi0/0 IP: 10.0.1.2/30
! ──   Gi0/2 IP: 10.20.0.1/24
! ── Click Send → template is queued for delivery when vEdge connects

! ── vManage CLI equivalent — verify template attachment ────
vmanage# show device template list

Template Name        Device Type   Device Count
---------------------------------------------------
Branch-Template      vedge-cloud   1
DC-Template          vedge-cloud   1
  
Device templates in vManage follow a two-level hierarchy: feature templates define one specific aspect of configuration (a VPN, an interface, a routing protocol, a security policy), and device templates assemble multiple feature templates into the complete configuration for a device type. Variables (marked with double curly braces in the GUI, e.g., {{system_ip}}) allow the same template to be applied to many devices with different per-device values. This is the SD-WAN equivalent of network-wide configuration management — define the policy once in the template, set device-specific values per device, and vManage generates and pushes the correct configuration to each vEdge automatically.
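The variable substitution described here can be sketched in a few lines — one template, many devices (the template text, variable names, and `render` helper are illustrative; vManage performs the real substitution internally when generating each device's configuration):

```python
import re

# A feature-template fragment with per-device {{variable}} placeholders.
TEMPLATE = """system
 system-ip {{system_ip}}
 site-id {{site_id}}
 organization-name "NetsTuts-Lab"
 vbond 100.64.0.30"""

def render(template: str, variables: dict) -> str:
    """Replace every {{name}} placeholder with the per-device value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

# The same template yields vEdge-BR1's unique configuration
br1_config = render(TEMPLATE, {"system_ip": "2.2.2.2", "site_id": 200})
```

Applying the same `render` call with a different variable dict produces the DC router's configuration — define once, parameterise per device.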

Manual Bootstrap for vEdge-BR1 (Lab/Fallback Method)

! ═══════════════════════════════════════════════════════════
! If ZTP is not available (lab environment, no internet
! access, or staged deployment), the vEdge can be manually
! configured with a minimal bootstrap to reach vBond.
! vManage then pushes the full template.
! ═══════════════════════════════════════════════════════════

vEdge-BR1# config terminal

! ── System identity ───────────────────────────────────────
vEdge-BR1(config)# system
vEdge-BR1(config-system)#  system-ip 2.2.2.2
vEdge-BR1(config-system)#  site-id 200
vEdge-BR1(config-system)#  organization-name "NetsTuts-Lab"
vEdge-BR1(config-system)#  vbond 100.64.0.30
vEdge-BR1(config-system)# exit

! ── VPN 0 — transport VPN ─────────────────────────────────
vEdge-BR1(config)# vpn 0
vEdge-BR1(config-vpn)#  interface ge0/0
vEdge-BR1(config-interface)#   ip address 10.0.1.2/30
vEdge-BR1(config-interface)#   tunnel-interface
vEdge-BR1(config-tunnel-if)#      encapsulation ipsec
vEdge-BR1(config-tunnel-if)#      color mpls
vEdge-BR1(config-tunnel-if)#      allow-service all
vEdge-BR1(config-interface)#   no shutdown
vEdge-BR1(config-interface)#  exit
vEdge-BR1(config-vpn)#
vEdge-BR1(config-vpn)#  interface ge0/1
vEdge-BR1(config-interface)#   ip address 203.0.113.2/30
vEdge-BR1(config-interface)#   tunnel-interface
vEdge-BR1(config-tunnel-if)#      encapsulation ipsec
vEdge-BR1(config-tunnel-if)#      color biz-internet
vEdge-BR1(config-tunnel-if)#      allow-service all
vEdge-BR1(config-interface)#   no shutdown
vEdge-BR1(config-interface)#  exit
vEdge-BR1(config-vpn)#
vEdge-BR1(config-vpn)#  ip route 0.0.0.0/0 203.0.113.1
vEdge-BR1(config-vpn)# exit

! ── VPN 1 — service VPN (LAN) ────────────────────────────
vEdge-BR1(config)# vpn 1
vEdge-BR1(config-vpn)#  interface ge0/2
vEdge-BR1(config-interface)#   ip address 10.20.0.1/24
vEdge-BR1(config-interface)#   no shutdown
vEdge-BR1(config-interface)#  exit
vEdge-BR1(config-vpn)#  ip route 0.0.0.0/0 10.20.0.254   ← LAN gateway
vEdge-BR1(config-vpn)# exit

! ── VPN 512 — management ─────────────────────────────────
vEdge-BR1(config)# vpn 512
vEdge-BR1(config-vpn)#  interface eth0
vEdge-BR1(config-interface)#   ip address 192.168.100.50/24
vEdge-BR1(config-interface)#   no shutdown
vEdge-BR1(config-interface)#  exit
vEdge-BR1(config-vpn)# exit

vEdge-BR1(config)# commit
  
The tunnel-interface sub-configuration is what designates an interface as a WAN tunnel endpoint. Without tunnel-interface, the interface is a service-side (LAN) interface and will not participate in IPsec tunnel establishment or BFD. The color command assigns the TLOC colour — this label is used in application-aware routing policies to specify which transport type to prefer (e.g., "prefer colour mpls for voice"). The allow-service all command permits all local services (SSH, NETCONF, NTP, DNS, DHCP, ICMP, STUN, and routing protocols) to be received on this tunnel interface; the SD-WAN control and data planes themselves (DTLS/TLS, OMP, BFD, IPsec) are always permitted implicitly and are not governed by allow-service. In production, you would restrict this to only the services needed on each transport.
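As a sketch of that restriction on the biz-internet transport (the service keywords shown are standard Viptela allow-service options; choose the set your design actually needs):

```
! ── Replace the catch-all with an explicit service list (lab sketch) ──
vEdge-BR1(config)# vpn 0
vEdge-BR1(config-vpn)# interface ge0/1
vEdge-BR1(config-interface)# tunnel-interface
vEdge-BR1(config-tunnel-if)# no allow-service all
vEdge-BR1(config-tunnel-if)# allow-service dhcp
vEdge-BR1(config-tunnel-if)# allow-service dns
vEdge-BR1(config-tunnel-if)# allow-service icmp
vEdge-BR1(config-tunnel-if)# exit
```

Control connections and BFD continue to work regardless, since they are not governed by allow-service.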

7. Step 3 — Verify OMP and Data-Plane Tunnels

Verify OMP Neighbour Sessions

! ── Check OMP sessions on vEdge-BR1 ──────────────────────
vEdge-BR1# show omp peers

                            DOMAIN    OVERLAY   SITE
PEER             TYPE       ID        ID        ID        STATE    UPTIME
---------------------------------------------------------------------------
100.64.0.20      vsmart     1         1         100       up       0:02:15:08
  
show omp peers confirms the OMP session between vEdge-BR1 and vSmart is up. The state should always be up for a healthy fabric. Each vEdge connects to all vSmart controllers — in a dual-vSmart deployment you would see two entries here. The DOMAIN ID and OVERLAY ID are used for multi-tenant or segmented fabric deployments. SITE ID 100 is vSmart's site (the controller site). If the OMP peer state is not up, the first checks are: is the certificate valid, is there a route to vSmart's IP (100.64.0.20) in VPN 0, and is UDP port 12346 (DTLS) being blocked somewhere on the underlay path?
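For monitoring scripts, this tabular output is easy to scrape. A minimal Python sketch (illustrative, not a Viptela tool; it assumes the column layout shown above) that flags any OMP peer not in the up state:

```python
# Minimal parser for "show omp peers" style output (lab sketch).
# Assumes the columns shown above: PEER TYPE DOMAIN-ID OVERLAY-ID SITE-ID STATE UPTIME.

def parse_omp_peers(output: str) -> list[dict]:
    """Return one dict per peer row, skipping headers and separator lines."""
    peers = []
    for line in output.splitlines():
        fields = line.split()
        # A data row has 7 columns and starts with an IPv4-looking peer address
        if len(fields) == 7 and fields[0].count(".") == 3:
            peers.append({
                "peer": fields[0],
                "type": fields[1],
                "site_id": fields[4],
                "state": fields[5],
                "uptime": fields[6],
            })
    return peers

def down_peers(output: str) -> list[str]:
    """Peers whose OMP session is not 'up' — candidates for the checks above."""
    return [p["peer"] for p in parse_omp_peers(output) if p["state"] != "up"]

sample = """
                            DOMAIN    OVERLAY   SITE
PEER             TYPE       ID        ID        ID        STATE    UPTIME
---------------------------------------------------------------------------
100.64.0.20      vsmart     1         1         100       up       0:02:15:08
"""

print(parse_omp_peers(sample))   # one vsmart peer, state 'up'
print(down_peers(sample))        # empty list: fabric healthy
```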

Verify OMP Routes Received

! ── Show all OMP routes in the routing table ─────────────
vEdge-BR1# show omp routes

CODE:
C   -> chosen
I   -> installed
Red -> redistributed
Rej -> rejected
L   -> looped
R   -> resolved
S   -> stale
Ext -> extranet

VPN 1 ROUTE TABLE
                                        TLOC                    ORIGIN  METRIC
IP PREFIX       FROM PEER      STATUS   IP          COLOR       ENCAP   TYPE    PREF
---------------------------------------------------------------------------------------------------
10.10.0.0/24    100.64.0.20    C,R     1.1.1.1     mpls        ipsec   OMP     100
                100.64.0.20    C,R     1.1.1.1     biz-internet ipsec  OMP     100
10.20.0.0/24    0.0.0.0        C,Red   -           -            -      Connctd 0
  
The OMP routing table shows the routes received from vSmart (redistributed from other vEdge devices) and the locally connected routes. Prefix 10.10.0.0/24 (the DC LAN) has two entries — one for each TLOC colour (mpls and biz-internet) at the DC vEdge (system IP 1.1.1.1). The C,R status means chosen and resolved — these entries are installed in the forwarding table and the next-hop TLOC is reachable. The 10.20.0.0/24 route is the local connected network, redistributed into OMP from the connected route. Compare with show ip route for traditional IP routing table concepts. For the LAN-side routing protocols that feed into SD-WAN service VPNs see OSPF Single-Area Configuration and BGP Basics & eBGP.

Verify BFD Tunnel Sessions

! ── Show all BFD sessions (data-plane tunnel health) ──────
vEdge-BR1# show bfd sessions

                             SOURCE TLOC      REMOTE TLOC
SYSTEM IP    SITE ID  STATE  COLOR           COLOR            PROTO ENCAP   UPTIME  TX/RX
---------------------------------------------------------------------------------------------
1.1.1.1      100      up     mpls            mpls             bfd   ipsec   0:00:47:22  1000/1000
1.1.1.1      100      up     mpls            biz-internet     bfd   ipsec   0:00:47:20  1000/1000
1.1.1.1      100      up     biz-internet    mpls             bfd   ipsec   0:00:47:18  1000/1000
1.1.1.1      100      up     biz-internet    biz-internet     bfd   ipsec   0:00:47:16  1000/1000
  
Four BFD sessions are running — one for each combination of source TLOC and remote TLOC (2 local transports × 2 remote transports = 4 tunnels). All show state up. The TX/RX count of 1000/1000 confirms BFD probes are being sent and received symmetrically on all tunnels — no probe loss. An asymmetric count (e.g., 1000 TX / 850 RX) indicates packet loss on the path, which would trigger AAR policy evaluation. A session in down state means the tunnel is broken — check the underlay connectivity to the remote TLOC's public IP.
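The asymmetry check can be automated. A small Python sketch (illustrative, not a Viptela tool) that converts TX/RX counters into a probe-loss percentage and flags lossy tunnels:

```python
# Lab sketch: estimate BFD probe loss from the TX/RX counters shown in
# "show bfd sessions". An asymmetric count suggests loss on that tunnel path.

def probe_loss_pct(tx: int, rx: int) -> float:
    """Percentage of transmitted BFD probes with no corresponding receive."""
    if tx == 0:
        return 0.0
    return round(100 * (tx - rx) / tx, 1)

def lossy_tunnels(sessions: list[dict], threshold_pct: float = 1.0) -> list[str]:
    """Names of tunnels whose probe loss exceeds threshold_pct."""
    return [s["tunnel"] for s in sessions
            if probe_loss_pct(s["tx"], s["rx"]) > threshold_pct]

sessions = [
    {"tunnel": "mpls->mpls",                  "tx": 1000, "rx": 1000},
    {"tunnel": "biz-internet->biz-internet",  "tx": 1000, "rx": 850},
]

print(probe_loss_pct(1000, 850))   # 15.0 — the asymmetric example from the text
print(lossy_tunnels(sessions))     # ['biz-internet->biz-internet']
```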

Verify BFD Path Quality Statistics

! ── Check latency/jitter/loss per tunnel ─────────────────
vEdge-BR1# show bfd sessions detail

System IP: 1.1.1.1  Site ID: 100  Source Color: mpls  Remote Color: mpls
  State: Up          Uptime: 0:00:47:22    TX Interval: 1000ms
  BFD Policy: Default
  Source TLOC:   Local IP: 10.0.1.2    Colour: mpls
  Remote TLOC:   Remote IP: 10.0.1.1   Colour: mpls
  Encap: ipsec    Proto: bfd
  Latency:   8ms         ← current round-trip latency
  Jitter:    2ms         ← latency variance
  Loss:      0%          ← packet loss percentage
  Rx Errors: 0

System IP: 1.1.1.1  Site ID: 100  Source Color: biz-internet  Remote Color: biz-internet
  State: Up          Uptime: 0:00:47:16    TX Interval: 1000ms
  Source TLOC:   Local IP: 203.0.113.2    Colour: biz-internet
  Remote TLOC:   Remote IP: 198.51.100.1  Colour: biz-internet
  Latency:   45ms        ← internet path higher latency
  Jitter:    12ms
  Loss:      0%
  
The BFD detail output provides the real-time SLA metrics for each tunnel path — the direct input to application-aware routing decisions. The MPLS path shows 8ms latency and 2ms jitter — excellent for real-time traffic. The internet path shows 45ms latency and 12ms jitter — acceptable for bulk data but potentially too high for voice depending on the SLA policy thresholds configured. These values are continuously measured and updated. When a metric crosses a policy threshold, the vEdge automatically updates its forwarding table to move affected flows to a qualifying path.
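The threshold comparison that drives these decisions can be sketched in a few lines of Python (an illustration of the logic, not actual vEdge code; the thresholds mirror the VOICE-SLA and DATA-SLA classes configured later in this lab):

```python
# Lab sketch of the per-tunnel SLA check: compare BFD-measured metrics
# against an SLA class's maximum thresholds.

VOICE_SLA = {"latency_ms": 150, "jitter_ms": 30,  "loss_pct": 1}
DATA_SLA  = {"latency_ms": 300, "jitter_ms": 100, "loss_pct": 5}

def meets_sla(metrics: dict, sla: dict) -> bool:
    """True if every measured metric is within the SLA class maximum."""
    return all(metrics[k] <= sla[k] for k in sla)

# Metrics from the show bfd sessions detail output above
mpls     = {"latency_ms": 8,  "jitter_ms": 2,  "loss_pct": 0}
internet = {"latency_ms": 45, "jitter_ms": 12, "loss_pct": 0}

print(meets_sla(mpls, VOICE_SLA))      # True
print(meets_sla(internet, VOICE_SLA))  # True: 45 < 150, 12 < 30, 0 < 1
print(meets_sla({"latency_ms": 200, "jitter_ms": 45, "loss_pct": 0}, VOICE_SLA))  # False
```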

8. Step 4 — Application-Aware Routing Policy

Application-aware routing (AAR) policies are created in vManage and distributed to vEdge routers by vSmart. A complete AAR policy has three components: an SLA class (defining the acceptable latency/jitter/loss thresholds for a traffic type), an application list or DSCP match (identifying which traffic this policy applies to), and a preferred path list (the ordered list of TLOC colours to try, falling back in order if the preferred path does not meet SLA).

AAR Policy — vManage GUI Workflow (Step by Step)

! ═══════════════════════════════════════════════════════════
! All AAR policy steps are performed in the vManage GUI.
! Navigation: Configuration → Policies
! ═══════════════════════════════════════════════════════════

! ── Step 1: Create SLA Classes ────────────────────────────
! Configuration → Policies → Custom Options → SLA Class
!
! SLA Class: VOICE-SLA
!   Latency:      150ms maximum
!   Jitter:       30ms maximum
!   Loss:         1% maximum
!
! SLA Class: DATA-SLA
!   Latency:      300ms maximum
!   Jitter:       100ms maximum
!   Loss:         5% maximum

! ── Step 2: Create Application Lists ─────────────────────
! Configuration → Policies → Custom Options → Application
!
! Application List: VOICE-APPS
!   Match: DSCP EF (46)          ← VoIP RTP streams
!   AND/OR: Application: cisco-webex-calling, zoom, ms-teams-calling
!
! Application List: DATA-APPS
!   Match: All traffic NOT in VOICE-APPS

! ── Step 3: Create the AAR Policy ─────────────────────────
! Configuration → Policies → Application Aware Routing
! Click "Add Policy" → Add Topology
!
! Policy Name: Branch-AAR-Policy
!
! Sequence Rule 1 (Voice):
!   Match:
!     DSCP: 46 (EF)              ← match voice traffic
!   Action:
!     SLA Class: VOICE-SLA
!     Preferred Color: mpls      ← try MPLS first
!     Fallback: biz-internet     ← fall back if MPLS fails SLA
!     If no path meets SLA: use preferred-color anyway (best effort)
!
! Sequence Rule 2 (Default Data):
!   Match:
!     (no specific match — catches all remaining traffic)
!   Action:
!     SLA Class: DATA-SLA
!     Preferred Color: biz-internet   ← prefer internet for data
!     Fallback: mpls                  ← fall back to MPLS if needed

! ── Step 4: Attach Policy to a Site List ─────────────────
! Configuration → Policies
! Click "Add Policy" → Topology = Hub and Spoke or Full Mesh
! Site List: All-Branch-Sites (site IDs 200-299)
! Attach AAR Policy: Branch-AAR-Policy
! Direction: From-Service (LAN → WAN)

! ── Step 5: Activate Policy ──────────────────────────────
! Click "Activate" → vManage pushes policy to vSmart
! vSmart distributes policy to all vEdge routers in site list
  
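vManage compiles this GUI policy into Viptela policy configuration and pushes it to vSmart, which advertises it to the vEdges over OMP. A rough sketch of the equivalent policy CLI follows (list names are taken from the steps above; exact rendering varies by software version, so treat this as illustrative):

```
policy
 sla-class VOICE-SLA
  latency 150
  jitter 30
  loss 1
 !
 sla-class DATA-SLA
  latency 300
  jitter 100
  loss 5
 !
 app-route-policy Branch-AAR-Policy
  vpn-list SERVICE-VPNS
   sequence 10
    match
     dscp 46
    !
    action
     sla-class VOICE-SLA preferred-color mpls
    !
   !
  default-action sla-class DATA-SLA
 !
!
apply-policy
 site-list All-Branch-Sites
  app-route-policy Branch-AAR-Policy
 !
!
```

On a vEdge, the compiled result of whatever vManage actually generated is visible with show policy from-vsmart.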

Verify AAR Policy Applied on vEdge

! ── Confirm policy received from vSmart ───────────────────
vEdge-BR1# show policy from-vsmart

From vsmart:
  app-route-policy:
    Branch-AAR-Policy
      sequence 1  match dscp 46
                  action:  sla-class VOICE-SLA
                           preferred-color mpls
                           backup-sla-preferred-color biz-internet
      sequence 2  match default
                  action:  sla-class DATA-SLA
                           preferred-color biz-internet
                           backup-sla-preferred-color mpls

! ── Check real-time forwarding decisions ──────────────────
vEdge-BR1# show app-route stats

Application route statistics:
Tunnel: 1.1.1.1  Source Color: mpls  Remote Color: mpls
  Mean Latency:    8ms   Mean Jitter: 2ms   Loss: 0%
  SLA Classes Met: VOICE-SLA, DATA-SLA
  Packets forwarded: 45823

Tunnel: 1.1.1.1  Source Color: biz-internet  Remote Color: biz-internet
  Mean Latency:    45ms  Mean Jitter: 12ms  Loss: 0%
  SLA Classes Met: VOICE-SLA, DATA-SLA
  Packets forwarded: 312048

! ── Both tunnels currently meet VOICE-SLA: on the internet path,
! ── latency 45ms < 150ms, jitter 12ms < 30ms, and loss 0% < 1%
! ── Voice still rides MPLS because of the preferred-color setting;
! ── the internet path is simply a qualifying fallback
  
show app-route stats shows the real-time forwarding statistics per tunnel, including which SLA classes each tunnel is currently meeting. When a tunnel's BFD metrics degrade and it no longer meets the SLA class thresholds, the vEdge automatically moves flows matching that SLA class to the next qualifying tunnel in the preference list. This entire failover happens locally on the vEdge without any communication with vSmart — the policy and SLA thresholds were already distributed; the vEdge makes forwarding decisions autonomously using real-time BFD data.
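A minimal Python sketch of that local decision, assuming an ordered colour preference with best-effort fallback when no path qualifies (an illustration of the logic, not actual vEdge code):

```python
# Lab sketch of the local AAR decision: try the preferred colours in order,
# pick the first tunnel that currently meets the SLA class; if none qualifies,
# fall back to the preferred colour best-effort (as in Sequence Rule 1 above).

def pick_colour(tunnels: dict, sla: dict, preference: list[str]) -> str:
    """tunnels maps colour -> measured metrics; sla holds maximum thresholds."""
    for colour in preference:
        metrics = tunnels.get(colour)
        if metrics and all(metrics[k] <= sla[k] for k in sla):
            return colour
    return preference[0]  # best effort: no path currently meets the SLA

VOICE_SLA = {"latency_ms": 150, "jitter_ms": 30, "loss_pct": 1}

healthy = {"mpls":         {"latency_ms": 8,   "jitter_ms": 2,  "loss_pct": 0},
           "biz-internet": {"latency_ms": 45,  "jitter_ms": 12, "loss_pct": 0}}
degraded = {"mpls":        {"latency_ms": 200, "jitter_ms": 45, "loss_pct": 0},
            "biz-internet":{"latency_ms": 45,  "jitter_ms": 12, "loss_pct": 0}}

print(pick_colour(healthy,  VOICE_SLA, ["mpls", "biz-internet"]))  # mpls
print(pick_colour(degraded, VOICE_SLA, ["mpls", "biz-internet"]))  # biz-internet
```

The degraded case reproduces the failover simulation shown in the next block: MPLS latency of 200ms exceeds the 150ms threshold, so voice moves to biz-internet with no controller involvement.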

Simulate AAR Failover — MPLS Path Degradation

! ── In a lab: simulate MPLS degradation by increasing ─────
! ── latency on the MPLS interface (or bring it down) ──────

! ── Before degradation: voice flows on MPLS ──────────────
vEdge-BR1# show app-route sla-class VOICE-SLA

TUNNEL                      LATENCY  JITTER  LOSS  SLA-MET
1.1.1.1 mpls→mpls           8ms      2ms     0%    YES  ← voice using this
1.1.1.1 biz-internet→biz   45ms     12ms     0%    YES

! ── Simulate MPLS SLA violation (latency spikes to 200ms) ─
! ── (In real production: MPLS congestion causes this) ──────

! ── After degradation ─────────────────────────────────────
vEdge-BR1# show app-route sla-class VOICE-SLA

TUNNEL                      LATENCY  JITTER  LOSS  SLA-MET
1.1.1.1 mpls→mpls           200ms    45ms    0%    NO  ← exceeds 150ms
1.1.1.1 biz-internet→biz    45ms    12ms     0%    YES ← voice auto-moved here

! ── Voice flows automatically moved to biz-internet ───────
! ── No manual intervention, no routing change, no tickets ──
  

9. SD-WAN Verification Command Reference

  • show omp peers (vEdge)
    Shows: OMP sessions to vSmart controllers, i.e. the control-plane connection status.
    Key fields: State up = control plane healthy. If down, no routing information is being distributed to this vEdge.
  • show omp routes (vEdge)
    Shows: all OMP routes received from vSmart: remote site prefixes with their TLOC next-hops and status codes.
    Key fields: C,R (chosen, resolved) = route installed in the forwarding table. Missing routes indicate the remote vEdge has not advertised them or a policy is filtering them.
  • show omp tlocs (vEdge)
    Shows: all known TLOCs (remote vEdge tunnel endpoints) received via OMP: system IP, colour, encap, and public IP/port.
    Key fields: the public IP addresses used for IPsec tunnel establishment. If a TLOC is missing, the corresponding tunnel cannot be built.
  • show bfd sessions (vEdge)
    Shows: all active BFD sessions for data-plane IPsec tunnels: source and remote TLOC, state, uptime, TX/RX counts.
    Key fields: State up = tunnel active and BFD probes flowing. Asymmetric TX/RX = path loss detected. State down = tunnel broken.
  • show bfd sessions detail (vEdge)
    Shows: per-tunnel SLA metrics: latency, jitter, packet loss, and error counts.
    Key fields: the raw BFD metric values used by AAR policy. Compare against the configured SLA class thresholds to understand path selection.
  • show app-route stats (vEdge)
    Shows: application-route forwarding statistics per tunnel: packets forwarded and SLA classes met per tunnel.
    Key fields: which tunnels are being used for which traffic classes. High packet counts on an unexpected tunnel indicate AAR has made a failover.
  • show policy from-vsmart (vEdge)
    Shows: all policies received from vSmart and currently active on the vEdge: AAR policies, data policies, and cflowd templates.
    Key fields: confirms the correct policy was received and distributed. If a policy change was made in vManage but is not shown here, the vSmart distribution failed.
  • show interface [intf] (vEdge)
    Shows: interface status including tunnel-interface parameters: colour, encapsulation, and whether the tunnel is operational.
    Key fields: confirm the colour assignment and tunnel-interface configuration match the device template that was pushed.
  • show control connections (vEdge / vSmart)
    Shows: all active control connections: vEdge to vSmart/vBond/vManage sessions and their states.
    Key fields: all three controller connections (vBond, vSmart, vManage) should show up. A specific controller connection down indicates a reachability or certificate issue.
  • show certificate validity (vEdge)
    Shows: certificate expiry dates for the vEdge's installed certificates: root CA, enterprise CA, and device certificate.
    Key fields: expired certificates cause authentication failures at vBond and prevent all control connections. Monitor expiry dates proactively.
  • show sdwan omp peers (cEdge, IOS-XE)
    Shows: OMP peers on a cEdge (IOS-XE SD-WAN router), which uses the show sdwan prefix instead of the bare commands.
    Key fields: same fields as the vEdge show omp peers. Use the show sdwan prefix for all SD-WAN commands on IOS-XE cEdge devices.

SD-WAN Troubleshooting Quick Reference

  • vEdge not appearing in vManage after ZTP
    First check: show control connections on the vEdge; is the vBond connection up?
    Likely cause and fix: vBond unreachable (no route in VPN 0 to the vBond IP, or UDP 12346 blocked), certificate not yet uploaded to vManage, or wrong organization-name. Verify the VPN 0 default route, ping the vBond IP from VPN 0, and confirm the serial number is in the vManage WAN Edge list.
  • OMP peer down (vSmart connection lost)
    First check: show omp peers (state not "up") and show control connections.
    Likely cause and fix: VPN 0 route to the vSmart IP lost (underlay failure), certificate expired, or DTLS port 12346 blocked. Check underlay connectivity, certificate validity, and firewall ACLs on the WAN path.
  • BFD session down for a specific tunnel colour
    First check: show bfd sessions; identify which colour/tunnel is down.
    Likely cause and fix: underlay connectivity failure for that transport (ISP outage, MPLS circuit down), IPsec key exchange failure, or the remote TLOC's public IP changed. Check the physical WAN interface status and verify underlay IP reachability to the remote TLOC address shown in show omp tlocs.
  • AAR policy not steering voice to MPLS
    First check: show policy from-vsmart (is the AAR policy present?) and show bfd sessions detail (does MPLS meet the SLA thresholds?).
    Likely cause and fix: policy not received (check vSmart distribution), DSCP EF marking not set on voice traffic (verify QoS marking upstream), or the MPLS path is currently failing the SLA class thresholds, causing AAR to legitimately prefer internet. Check show app-route sla-class VOICE-SLA.
  • Remote site prefixes missing from the OMP route table
    First check: show omp routes; are the remote VPN 1 prefixes present?
    Likely cause and fix: the remote vEdge has not advertised the prefix (LAN interface not in a service VPN, or service VPN not attached to the device template), or a centralized data policy on vSmart is filtering the route. Check the remote vEdge's VPN 1 configuration and any active vManage route policies.

Key Points & Exam Tips

  • Four SD-WAN components, four planes: vManage (management plane — GUI, templates, policies, REST API), vSmart (control plane — OMP routing, policy distribution), vBond (orchestration plane — ZTP, NAT traversal, initial authentication), vEdge/cEdge (data plane — IPsec tunnels, BFD, packet forwarding, AAR enforcement). Know which component belongs to which plane — this is a common exam question.
  • The three VPN system defaults: VPN 0 (transport — all WAN-facing interfaces and control connections; do NOT put LAN interfaces here), VPN 512 (management — out-of-band SSH/SNMP access to the router itself), and service VPNs 1–511/513–65530 (user traffic, segmented per application or department). The VPN number in SD-WAN maps conceptually to a VRF in traditional IOS — see VRF-Lite Configuration.
  • A TLOC (Transport Locator) is defined by three values: system IP + colour + encapsulation. Every WAN interface on a vEdge has one TLOC. TLOCs are advertised to vSmart via OMP. vSmart distributes TLOC information to all other vEdges — they use TLOC public IP addresses to establish direct IPsec data-plane tunnels without vSmart in the forwarding path. vSmart never forwards user data.
  • OMP is the SD-WAN routing protocol — runs only between vEdge and vSmart (never directly vEdge-to-vEdge). It carries three route types: vRoutes (prefixes + TLOC next-hops), TLOC routes (tunnel endpoint advertisements), and service routes. All OMP traffic is carried inside the secure control connection to vSmart: DTLS (UDP 12346 by default) or TLS (TCP 23456). Know the show omp peers, show omp routes, and show omp tlocs verification commands.
  • BFD runs on every IPsec data-plane tunnel between vEdge pairs, measuring latency, jitter, and packet loss every second (default). BFD metrics are the real-time input to application-aware routing policy decisions. BFD also provides fast tunnel failure detection. show bfd sessions shows tunnel state; show bfd sessions detail shows the SLA metrics.
  • ZTP onboarding sequence: vEdge boots → DHCP on WAN interface → contacts Cisco ZTP cloud → gets vBond address → contacts vBond → authenticates certificate → gets vSmart/vManage addresses → OMP session to vSmart → NETCONF session to vManage → receives device template → fully operational. Failure at any step is identifiable with show control connections.
  • Application-aware routing is configured in vManage (SLA class → application match → preferred colour order), pushed via vSmart to vEdge, and enforced locally on the vEdge using real-time BFD data. Failover between paths is automatic, sub-second, and requires no vSmart involvement at enforcement time. This is fundamentally different from traditional PBR, which is static — see Policy-Based Routing.
  • The underlay vs overlay distinction: the underlay is the physical WAN (MPLS, internet) managed by ISPs; the SD-WAN overlay is the IPsec tunnel mesh built on top of it. SD-WAN improves WAN operations by abstracting from the underlay — it can use any combination of underlay transports and make intelligent forwarding decisions across them based on real-time quality metrics. Traditional MPLS WAN provides only one transport with no path-quality awareness.
  • On the exam: know that vBond must have a public IP (it is the first contact for ZTP and must be reachable from all sites), vSmart does NOT forward data traffic, the data plane tunnels are direct vEdge-to-vEdge (vSmart is not in the forwarding path), and certificates/PKI are central to SD-WAN security — all devices authenticate with X.509 certificates signed by vManage's CA. Certificate expiry is a common production failure mode.
Next Steps: For the traditional WAN technologies that SD-WAN replaces or complements — MPLS label switching at MPLS Fundamentals and MPLS Overview, GRE overlay tunnels at GRE Tunnel Configuration, and IPsec encryption at Site-to-Site IPsec VPN and IPsec Basics. For DMVPN — the predecessor technology to SD-WAN that also provides dynamic spoke-to-spoke tunnels — see DMVPN Phase 1, 2 & 3. For the BGP and OSPF routing fundamentals that run on the LAN-side of SD-WAN deployments, see BGP Basics & eBGP and OSPF Single-Area Configuration. For IP SLA concepts similar to BFD path monitoring used in traditional IOS deployments, see IP SLA with Tracking. For NETCONF/YANG programmability that underpins vManage's device template push mechanism, see NETCONF with ncclient (Python) and NETCONF & RESTCONF Overview. For SD-WAN in the broader controller-based networking context see Controller-Based Networking and Northbound & Southbound APIs.

TEST WHAT YOU LEARNED

1. A vEdge router at a new branch site has just powered on and received a DHCP address on its internet-facing WAN interface. In what order does it contact the SD-WAN components, and what does each component provide?

Correct answer is B. The ZTP onboarding sequence is a precise, ordered process and a key exam topic. The correct sequence is: (1) Cisco ZTP cloud → returns org's vBond address based on chassis serial; (2) vBond → authenticates certificate, provides vSmart and vManage addresses; (3) vSmart → OMP session for routing and policy; (4) vManage → NETCONF for device template delivery. Option A is incorrect because vManage is never the first contact — a new vEdge does not know vManage's address until vBond provides it. Option C is incorrect because vSmart is not the first contact and does not provide vBond or vManage addresses — that is vBond's role. Understanding why vBond must have a public IP address follows directly from this sequence: vBond is the entry point for all new devices, and if it is behind NAT or unreachable from the internet, no new vEdge device can onboard. In deployments where internet access is not available for ZTP, manual bootstrap configuration (setting the vBond IP directly in the vEdge's system configuration) bypasses the Cisco ZTP cloud step and goes directly to the vBond contact.

2. What is a TLOC, what three values uniquely define it, and why is it the fundamental forwarding identifier in the SD-WAN data plane?

Correct answer is D. The TLOC concept is one of the most important and distinctive elements of SD-WAN architecture. Understanding why it needs three values (not just an IP address) is key. The system IP alone is insufficient because a dual-homed site has multiple physical paths to the same vEdge — you need to specify not just "send to this vEdge" but "send to this vEdge via this specific transport type." The colour alone is insufficient because the same colour might exist at many sites. The combination of system IP (which site/device), colour (which transport interface), and encapsulation (which tunnel protocol) creates a globally unique identifier for each tunnel endpoint. This three-part key is what enables AAR policies to say "prefer colour mpls" — the policy is expressed in terms of transport type (colour) rather than specific IP addresses, making it portable across all sites. When a new branch site comes online with an MPLS link and an internet link, it automatically has two TLOCs that AAR policies can reference using colours without any policy modification. Option C is the most commonly confused answer — the system IP is part of the TLOC definition but is not the TLOC itself. A dual-homed vEdge has the same system IP but two different TLOCs.
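The three-part key can be illustrated in a few lines of Python (a conceptual sketch, not SD-WAN code): the same vEdge system IP with two transports yields two distinct TLOCs.

```python
# Lab sketch: a TLOC modelled as the three-part key described above.
from typing import NamedTuple

class Tloc(NamedTuple):
    system_ip: str   # which site/device
    colour: str      # which transport interface
    encap: str       # which tunnel encapsulation

# One dual-homed vEdge (system IP 2.2.2.2) advertises two TLOCs
mpls_tloc = Tloc("2.2.2.2", "mpls", "ipsec")
inet_tloc = Tloc("2.2.2.2", "biz-internet", "ipsec")

print(mpls_tloc != inet_tloc)  # True: same device, distinct tunnel endpoints
# A set deduplicates on the full 3-tuple, not on the system IP alone
print(len({mpls_tloc, inet_tloc, Tloc("2.2.2.2", "mpls", "ipsec")}))  # 2
```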

3. Why does vSmart never appear in the data-plane forwarding path between vEdge routers, and what are the implications if vSmart becomes unavailable?

Correct answer is A. The separation of the control plane (vSmart) from the data plane (vEdge-to-vEdge tunnels) is the foundational architectural principle of SD-WAN and what distinguishes it from older SDN models where controllers were in the forwarding path. The key insight is that vSmart's role is to distribute routing intelligence — it provides the routing table and policy to each vEdge once, and the vEdge then operates autonomously. This is analogous to how a route reflector in BGP distributes routes to clients but is not in the forwarding path for the traffic those routes describe. The resilience characteristic in option A is important for production deployments: vSmart failure is a control-plane event, not a data-plane event. Traffic continues flowing on all established tunnels with the current forwarding tables. The failure only impacts the ability to react to topology changes (new routes, failed paths not being rerouted to new alternatives if the primary path goes down and BFD detects the failure). In practice, this means vSmart should be deployed redundantly (multiple vSmart instances) to ensure continuous policy and routing updates, but a temporary vSmart outage is survivable without a traffic blackout. This is a significant operational advantage over architectures where the controller is in the forwarding path.

4. VPN 0 contains an interface with tunnel-interface colour mpls. Another interface in VPN 1 has no tunnel-interface configuration. What is the functional difference between these two interfaces in the SD-WAN fabric?

Correct answer is C. The presence or absence of tunnel-interface is the single most important CLI indicator that distinguishes WAN-facing transport interfaces from LAN-facing service interfaces in SD-WAN Viptela OS configuration. This distinction is not about encryption level or VPN number alone — it is about the functional role of the interface in the SD-WAN fabric. An interface with tunnel-interface becomes a TLOC: it is registered with vSmart, its public IP is shared with all other vEdge routers via OMP TLOC advertisements, it participates in IPsec tunnel negotiation using that public IP as the IKE identity, and it runs BFD on every established tunnel. An interface without tunnel-interface is simply a standard IP interface in its VPN — it can be in VPN 0, VPN 1, or any service VPN, but it will not participate in the tunnel fabric. In practice: WAN interfaces always have tunnel-interface; LAN interfaces never have tunnel-interface. The VPN assignment is a separate concern — VPN 0 is conventionally used for transport interfaces, but the tunnel-interface command is what actually makes an interface a tunnel endpoint, not the VPN number. Option D is incorrect — the colour within tunnel-interface (mpls, biz-internet, lte, etc.) is a label for the type of transport, not a restriction on which transport provider can be used. The colour is used by AAR policies to express path preference in human-readable terms.

5. show bfd sessions shows that the MPLS tunnel between vEdge-BR1 and vEdge-DC is in state down, but the internet tunnel between the same two routers is in state up. Voice traffic configured to prefer MPLS is also experiencing quality issues. What is happening and what are the likely outcomes?

Correct answer is B. This scenario tests understanding of the relationship between BFD tunnel state, AAR policy failover, and the data-plane autonomy of vEdge routers. BFD runs directly on the IPsec data-plane tunnels — it sends probe packets through the tunnels and measures the responses. When the MPLS underlay fails, BFD probes on the MPLS tunnel stop being received; after the hold-down timer expires (configured in BFD — typically 3 missed hellos at 1-second intervals = 3 seconds), BFD marks the tunnel as down. This BFD down event triggers the AAR policy engine on the vEdge to re-evaluate path selection for all active flows. For voice flows (matching DSCP EF or the VOICE-APPS list), the policy tries the next available path — the internet tunnel. If the internet tunnel currently meets VOICE-SLA thresholds (latency ≤150ms, jitter ≤30ms, loss ≤1%), voice is moved there automatically. The transient quality degradation during the 3-second BFD detection window is expected — this is why some deployments use sub-second BFD timers for voice traffic. The key operational point is that this entire failover happens on the vEdge locally, with no involvement from vSmart. vSmart distributed the AAR policy; the vEdge enforces it autonomously using real-time BFD data. Option D is incorrect because well-designed AAR policies always include a fallback path — "if preferred colour not available or not meeting SLA, use next-best available tunnel." Dropping traffic with no fallback would be an AAR misconfiguration.

6. What is the organisation-name parameter in SD-WAN system configuration, and what happens during the onboarding process if the organisation-name on a new vEdge does not match the organisation-name on vBond?

Correct answer is D. The organisation-name mismatch is one of the most common and frustrating onboarding failures in SD-WAN deployments, precisely because it is easy to get wrong (trailing spaces, capitalisation differences, or abbreviation inconsistencies) and the error message can be cryptic. The technical mechanism: when vManage acts as the certificate authority and issues a device certificate to a vEdge, the organisation name is embedded in the certificate's Subject field (similar to how X.509 certificates include an Organization field). Every vBond, vSmart, and vManage also has certificates with the same organisation name. During DTLS mutual authentication (both sides present and verify each other's certificates), each device checks not just that the certificate is validly signed by the known root CA, but also that the organisation name field matches the locally configured value. This prevents cross-contamination between separate SD-WAN deployments that might share infrastructure. The practical implication: if you are deploying a new site and the vEdge shows control connections to vBond as constantly failing after ZTP redirected it correctly, the first thing to check — after verifying IP reachability — is whether the organisation-name in the vEdge's system configuration (visible via show system status) exactly matches the organisation name in vManage (Configuration → Settings → Organization Name). Even a single space difference causes authentication failure.

7. How does SD-WAN application-aware routing differ fundamentally from traditional policy-based routing (PBR) as configured with route maps on Cisco IOS, and why does this difference matter for production WAN operations?

Correct answer is A. Option C contains a true statement (AAR does use NBAR2 for application classification) but it misidentifies this as the fundamental difference — the fundamental difference is the dynamic, SLA-threshold-driven path selection, not the classification method. Option D is partially true — IOS PBR can be combined with IP SLA tracking to detect path failures — but this combination still only detects complete path failure (IP SLA probe timeout) rather than continuous quality measurement (BFD latency/jitter/loss metrics). IP SLA with track objects can change next-hops when a path goes completely down, but it cannot detect and respond to gradual quality degradation (rising latency, increasing jitter) that affects real-time applications before causing outright failure. BFD in SD-WAN measures quality in real time and can move voice traffic off an MPLS path when its latency rises to 140ms (approaching the 150ms threshold) even though the path is still technically "up" and IP SLA would consider it healthy. The ability to respond to quality degradation, not just binary up/down failures, is what makes AAR qualitatively different from PBR+IP SLA. Additionally, the centralised vManage policy management means an AAR policy change takes effect across all 500 branch sites simultaneously — with PBR+IP SLA, a policy change requires touching the configuration of every individual router.

8. A branch vEdge has both MPLS and internet TLOCs. show omp routes shows the DC prefix 10.10.0.0/24 with two TLOC entries (mpls and biz-internet) both in C,R (chosen, resolved) state. An AAR policy prefers MPLS for voice. How does the vEdge actually determine which tunnel to use for a specific VoIP packet destined for 10.10.0.10?

Correct answer is C. Option D is explicitly incorrect — vSmart is never in the per-packet forwarding path. This is one of the most important principles of the SD-WAN architecture (covered in question 3) and is worth re-emphasising. The vEdge's forwarding decision process for AAR is a local, real-time computation: the routing table (from OMP) provides the candidate next-hops (TLOCs), the AAR policy (received from vSmart and cached locally) specifies the SLA class and preference order, and the BFD SLA database (maintained locally from continuous BFD probes) provides the current quality metrics for each tunnel. The vEdge combines these three local data sources to make a forwarding decision in hardware/software on a per-flow basis without any controller involvement. Option A (round-robin ECMP) is incorrect — ECMP is the behaviour without any AAR policy, but with an AAR policy the preferred colour overrides the ECMP decision. The ordering of TLOC entries in the OMP table (option B) is not how the forwarding decision is made — the AAR policy's preferred-colour and SLA evaluation are the decision mechanism. Understanding this local autonomous forwarding model is essential for troubleshooting AAR behaviour: if voice is not going over MPLS as expected, the investigation is entirely on the vEdge itself — check the policy with show policy from-vsmart, check BFD metrics with show bfd sessions detail, and check actual forwarding with show app-route stats.
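The three-input local computation described above — candidate TLOCs from OMP, the cached AAR policy, and the BFD SLA database — can be sketched as a single selection function. This is a simplified conceptual model, assuming an invented data layout, not the actual vEdge forwarding logic:

```python
# Minimal sketch of the vEdge's local AAR forwarding decision,
# combining the three local data sources described in the text.
# Data model and function name are illustrative assumptions.

def select_tloc(candidates, policy, bfd_metrics):
    """Pick the TLOC colour for a flow matching this AAR policy."""
    def meets_sla(colour):
        m = bfd_metrics[colour]
        return (m["latency_ms"] <= policy["sla"]["latency_ms"]
                and m["loss_pct"] <= policy["sla"]["loss_pct"])

    # 1. Walk the policy's preferred colours in order; take the first
    #    candidate whose live BFD metrics meet the SLA class.
    for colour in policy["preferred_colours"]:
        if colour in candidates and meets_sla(colour):
            return colour
    # 2. No preferred colour complies: any SLA-compliant candidate.
    for colour in candidates:
        if meets_sla(colour):
            return colour
    # 3. Nothing complies: still forward (first candidate).
    return candidates[0]

# Both TLOCs for 10.10.0.0/24 are C,R; the voice policy prefers mpls.
candidates = ["mpls", "biz-internet"]
voice_policy = {"sla": {"latency_ms": 150, "loss_pct": 1.0},
                "preferred_colours": ["mpls"]}
bfd = {"mpls": {"latency_ms": 20, "loss_pct": 0.0},
       "biz-internet": {"latency_ms": 60, "loss_pct": 0.1}}
print(select_tloc(candidates, voice_policy, bfd))  # mpls

# MPLS degrades past the SLA: the same purely local computation moves
# voice to the internet TLOC -- no controller involvement at any point.
bfd["mpls"] = {"latency_ms": 180, "loss_pct": 0.0}
print(select_tloc(candidates, voice_policy, bfd))  # biz-internet
```

Note that every input to the function is local to the vEdge — which is why troubleshooting unexpected AAR behaviour happens entirely on the edge device, never on vSmart.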

9. What are device templates in vManage, why are they structured as a hierarchy of feature templates, and what is the operational advantage of using variables within templates?

Correct answer is D. The template hierarchy and variable system are what make SD-WAN's centralised management operationally transformative compared to traditional per-device CLI management. The two-level structure is analogous to object-oriented programming: feature templates are reusable "classes" that define a configuration aspect, and device templates are "instances" that compose those classes into a complete device configuration. The variable mechanism is the key to scale: without variables, you would need one unique device template per device (defeating the purpose of templates). With variables, you define the policy once in the template and provide per-device values in a CSV or per-device input at attachment time. In production, a typical enterprise might have: one "Branch-Standard" device template, one "VPN0-DualWAN" feature template, one "VPN1-LAN" feature template, and a CSV with 500 rows of device-specific values (system IP, site ID, LAN IP, WAN IPs). The entire 500-device deployment is orchestrated through vManage. A change to the BFD timer in the VPN0 feature template updates all 500 devices. This is what Cisco means by "intent-based networking" — you express the intent (this is a standard branch configuration) and let vManage ensure all devices conform to that intent.
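The template-plus-CSV scaling model can be illustrated with a toy renderer: one template, per-device values as CSV rows, one rendered configuration per row. The variable syntax, field names, and configuration snippet here are invented for illustration — vManage's actual template engine and variable notation differ:

```python
# Toy illustration of the device-template variable mechanism: one
# template, per-device values supplied as CSV rows. Variable syntax
# and field names are hypothetical, not vManage's.
import csv
import io

TEMPLATE = """system
 system-ip {system_ip}
 site-id   {site_id}
vpn 1
 interface ge0/2
  ip address {lan_ip}"""

CSV_DATA = """system_ip,site_id,lan_ip
10.255.0.1,100,192.168.100.1/24
10.255.0.2,200,192.168.200.1/24"""

def render_configs(template: str, csv_text: str) -> list[str]:
    # One rendered config per CSV row: the same template scales from
    # 2 devices to 500 without any per-device duplication.
    return [template.format(**row)
            for row in csv.DictReader(io.StringIO(csv_text))]

configs = render_configs(TEMPLATE, CSV_DATA)
print(configs[0])
```

A change to TEMPLATE (say, a BFD timer) regenerates every device's configuration from the single source of truth — the toy equivalent of editing one feature template and having vManage push the change to all 500 attached devices.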

10. How does SD-WAN handle a scenario where a branch vEdge is behind a NAT device (e.g., it has a private WAN IP of 192.168.10.2 but appears on the internet with public IP 203.0.113.100), and why is this capability important for real-world deployments?

Correct answer is B. NAT traversal is one of the most operationally important capabilities of SD-WAN, and vBond's role in facilitating it is a key exam topic. The mechanism described in option B is precisely how vBond handles NAT: it exploits the fact that when the NATted vEdge sends an outbound UDP packet to vBond, the NAT device creates a mapping entry (private IP:port ↔ public IP:port). vBond sees the public IP:port of the NATted vEdge and can share this information with other vEdge routers. Remote vEdges can then send IPsec packets directly to the public IP:port, and the NAT device will translate them to the correct private address using the existing mapping. This is conceptually similar to STUN (Session Traversal Utilities for NAT) used in VoIP. The importance for real-world deployments cannot be overstated: broadband internet connections (DSL, cable, LTE) almost universally provide NATted addresses rather than dedicated public IPs. If SD-WAN required public IPs on all WAN interfaces, it would be unusable for most small branch deployments that rely on standard consumer/business broadband. The ability to build the SD-WAN overlay on top of NATted internet connections is what allows enterprises to replace expensive dedicated MPLS circuits with commodity broadband at branch sites while maintaining the secure overlay tunnel mesh. Option A is the pre-SD-WAN assumption about IPsec limitations — while traditional IKEv1 had NAT traversal challenges, SD-WAN's UDP encapsulation and vBond-assisted NAT traversal address this completely.
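The STUN-like discovery can be modelled with a toy NAT table: the outbound hello creates the mapping, and vBond simply records the public source address it observes on the wire. The class and registry below are illustrative assumptions using the addresses from the question, not real vBond behaviour:

```python
# Simplified model of vBond-assisted NAT traversal. The Nat class and
# registry are toy constructs; the IP addresses match the scenario in
# the question (private 192.168.10.2 behind public 203.0.113.100).

class Nat:
    """Toy NAT device: one public mapping per inside IP:port."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.table = {}        # (private_ip, port) -> (public_ip, port)
        self.next_port = 40000

    def translate_outbound(self, private_ip: str, private_port: int):
        # An outbound packet creates (or reuses) the mapping entry --
        # the same entry later translates inbound IPsec packets back
        # to the private address.
        key = (private_ip, private_port)
        if key not in self.table:
            self.table[key] = (self.public_ip, self.next_port)
            self.next_port += 1
        return self.table[key]

# The branch vEdge sends its outbound UDP hello toward vBond:
nat = Nat("203.0.113.100")
public_addr = nat.translate_outbound("192.168.10.2", 12366)

# vBond records the public source address it saw on the wire and can
# share it with remote vEdges, which then send IPsec directly to it.
vbond_registry = {"branch-vedge": public_addr}
print(vbond_registry["branch-vedge"])  # ('203.0.113.100', 40000)
```

The crucial detail is that the mapping already exists by the time remote vEdges send traffic inbound — the NAT translates those packets using the entry created by the vEdge's own outbound hello, so no port forwarding or public IP on the branch is ever required.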