Full End-to-End Network Troubleshooting Scenario

Real-world network outages almost never involve a single clean fault at exactly one layer. A weekend maintenance window goes wrong, a new engineer pushes four configuration changes at once, or a power event causes three devices to reload with partially corrupted running-configs — and suddenly the network is broken in ways that interact and obscure each other. The hallmark of an experienced network engineer is not memorising individual commands, but knowing how to think through a multi-fault scenario systematically: starting at Layer 1, working up through each OSI layer, fixing faults in order, and re-testing at each stage so that one fix does not hide another problem.

This lab presents a realistic end-to-end scenario where five simultaneous faults have been deliberately introduced across a three-site network. Every symptom you encounter will be real — the same outputs you would see on a production network — and the faults interact: a Layer 2 problem may mask a Layer 3 problem that you cannot diagnose until Layer 2 is fixed. Working through this scenario in order and understanding why each fix enables the next diagnosis is the goal.

This is the capstone lab in the NetsTuts troubleshooting series. The individual techniques used here are covered in depth in Troubleshooting Layer 1, Troubleshooting Layer 2, Troubleshooting Layer 3, Troubleshooting DHCP, and Troubleshooting OSPF Neighbour Adjacency. If any technique in this lab is unfamiliar, return to the relevant guide before continuing.

1. The OSI Troubleshooting Methodology

Why Layer Order Matters in Multi-Fault Scenarios

When multiple faults exist simultaneously, diagnosing them out of order produces misleading results. A CRC error storm from a Layer 1 duplex mismatch generates noise that makes Layer 2 VLAN tables look unreliable. A broken trunk makes inter-VLAN routing appear to fail even when the router is perfectly configured. A missing OSPF route looks like a routing protocol failure when it is actually caused by a Layer 2 trunk being down. The OSI model provides the correct diagnosis order — fix lower layers first, then re-test the higher layers that depend on them:

  ┌──────────────────────────────────────────────────────────────────┐
  │  OSI TROUBLESHOOTING WORKFLOW — TOP TO BOTTOM DIAGNOSIS ORDER    │
  ├─────────┬────────────────────────────────────────────────────────┤
  │ Layer 1 │ Physical — cables, interfaces, duplex, speed, LEDs     │
  │         │ Commands: show interfaces, show ip interface brief      │
  │         │ Fix before Layer 2 — errors here corrupt L2 tables     │
  ├─────────┼────────────────────────────────────────────────────────┤
  │ Layer 2 │ Data Link — VLANs, trunk, STP, MAC learning            │
  │         │ Commands: show vlan, show interfaces trunk,             │
  │         │           show spanning-tree, show mac address-table    │
  │         │ Fix before Layer 3 — broken trunks hide routing issues  │
  ├─────────┼────────────────────────────────────────────────────────┤
  │ Layer 3 │ Network — IP, routing table, OSPF/static, ACLs         │
  │         │ Commands: show ip route, ping, traceroute,              │
  │         │           show ip ospf neighbor                         │
  │         │ Fix before Layer 4+ — routing must work before apps do │
  ├─────────┼────────────────────────────────────────────────────────┤
  │ Layer 4+│ Transport/Application — DHCP, DNS, ACLs, NAT           │
  │         │ Commands: show ip dhcp binding, show access-lists,      │
  │         │           debug ip dhcp server events                   │
  └─────────┴────────────────────────────────────────────────────────┘

  GOLDEN RULES:
  1. Always start at Layer 1 — never assume physical is healthy
  2. Fix and verify each layer before moving up
  3. Re-test from Layer 1 after every fix — a fix can unmask a new fault
  4. Document every change — in multi-fault scenarios, track what you
     changed and what the result was after each fix
  5. Ping with a source address that matches the traffic flow you are
     troubleshooting — not just from the router's own IP
  

The Structured Checklist — Per-Layer Questions

  Layer — Questions to answer / Pass condition
  ─────────────────────────────────────────────────────────────────────
  L1   Questions: Are all interfaces up/up? Are CRC, input error, or
       late collision counters incrementing? Are duplex and speed
       correctly negotiated or set?
       Pass: All relevant interfaces up/up. Zero or stable error
       counters. Duplex and speed match on both ends.

  L2   Questions: Are hosts in the correct VLAN? Are inter-switch
       trunks forming? Are all required VLANs allowed and active on
       each trunk? Is the native VLAN consistent?
       Pass: Correct VLAN per port. Trunk shows trunking status.
       Required VLANs appear in all four trunk output sections. No
       native VLAN mismatch.

  L3   Questions: Do routers have routes to all required destinations
       (both directions)? Are OSPF neighbours in FULL state? Are
       static routes correct (right next hop and mask)?
       Pass: Routing table has entries for all subnets. OSPF
       neighbours FULL. Bidirectional ping succeeds between all
       subnet gateways.

  L4+  Questions: Are DHCP clients getting correct addresses? Is DNS
       resolving? Are ACLs blocking legitimate traffic?
       Pass: Clients have correct IPs, masks, gateways, and DNS. No
       unintended ACL drops. Applications reachable.

2. Lab Topology — The NetsTuts Campus Network

The scenario represents a small campus network with three buildings. Building A hosts the core infrastructure. Building B hosts the engineering team. Building C is a remote branch connected via a WAN link. A weekend maintenance window introduced five faults — your task is to find and fix all of them in the correct order:

  ╔══════════════ BUILDING A (Core) ═══════════════╗
  ║                                                 ║
  ║  [SRV1 – 192.168.99.10]──Gi1/0─┐               ║
  ║  DHCP + DNS Server              │               ║
  ║                            NetsTuts_SW1         ║
  ║  [PC_MGMT – VLAN10]──Fa0/1──┤  (Access/Dist.)  ║
  ║  [PC_VOICE – VLAN20]──Fa0/2──┤  Gi0/1──────────╫──── NetsTuts_R1 (Core Router)
  ║                            Gi0/2 (trunk)        ║     Gi0/0: 10.0.12.1/30
  ║                               │                 ║     Gi0/1: 192.168.10.1/24 (VLAN10 GW)
  ╚═══════════════════════════════╪═════════════════╝     Gi0/2: 192.168.20.1/24 (VLAN20 GW)
                                  │                       Lo0:   1.1.1.1/32
  ╔══════════════ BUILDING B (Engineering) ════════╗      │ OSPF area 0
  ║                                                ║      │ 10.0.12.0/30
  ║  [ENG1 – VLAN30]──Fa0/1──┐                    ║      │
  ║  [ENG2 – VLAN30]──Fa0/2──┤  NetsTuts_SW2      ║  NetsTuts_R2 (Branch Router)
  ║  [ENG3 – VLAN40]──Fa0/3──┤  (Access)          ║  Gi0/0: 10.0.12.2/30
  ║                        Gi0/1 (trunk to SW3)    ║  Gi0/1: (trunk to SW2)
  ╚═══════════════════════════╪════════════════════╝  Gi0/2: 10.0.23.5/30
                              │                        Lo0:   2.2.2.2/32    │ OSPF area 0
                         NetsTuts_SW3                                        │ 10.0.23.0/30
                         (Distribution)                                     │
                         Gi0/1──────── NetsTuts_R2 Gi0/1                NetsTuts_R3 (Remote)
                                                                         Gi0/0: 10.0.23.6/30
  ╔══════════════ BUILDING C (Remote) ═════════════╗                    Gi0/1: 192.168.30.1/24
  ║  [REM1 – VLAN30 – 192.168.30.x]──Fa0/1──┐     ║                    Lo0:   3.3.3.3/32
  ║  [REM2 – VLAN30 – 192.168.30.x]──Fa0/2──┤ SW4 ║
  ║                                       Gi0/1─────╫──── NetsTuts_R3 Gi0/1
  ╚═════════════════════════════════════════════════╝

  Design Intent:
    VLAN 10 — 192.168.10.0/24 — Management (Building A) — DHCP from SRV1
    VLAN 20 — 192.168.20.0/24 — Voice     (Building A) — DHCP from SRV1
    VLAN 30 — 192.168.30.0/24 — Engineering (Building C) — DHCP from SRV1
              (Building B reuses VLAN ID 30 for its Engineering ports,
               routed separately via R2 — the two sites are distinct
               broadcast domains)
    VLAN 40 — 192.168.40.0/24 — IoT/Lab   (Building B) — DHCP from SRV1

    OSPF area 0: R1 ↔ R2 ↔ R3 — all router links and stub networks
    DHCP relay: ip helper-address 192.168.99.10 on all VLAN SVIs

  The Five Faults (introduced during maintenance):
    Fault A — Layer 1: SW1 Fa0/1 duplex mismatch with PC_MGMT NIC
    Fault B — Layer 2: VLAN 30 missing from SW2's VLAN database
    Fault C — Layer 2: SW2–SW3 trunk not forming (both ports dynamic auto)
    Fault D — Layer 3: OSPF area mismatch on R2–R3 link (R3 in area 1)
    Fault E — Layer 3: R1's LAN networks missing from OSPF — no return
              routes to Building A
  
  Fault A — Layer 1 — SW1 Fa0/1
    Symptom: CRC errors incrementing, PC_MGMT slow
    Impact:  Degraded VLAN 10 throughput — DHCP/DNS slow for management
             hosts

  Fault B — Layer 2 — SW2 VLAN database
    Symptom: ENG1 and ENG2 get 169.254.x.x — Fa0/1 and Fa0/2 show
             (Inactive)
    Impact:  All VLAN 30 Engineering hosts in Building B cannot get DHCP
             addresses

  Fault C — Layer 2 — SW2 Gi0/1 / SW3 Gi0/1
    Symptom: No inter-switch traffic between SW2 and SW3 — trunk not
             forming
    Impact:  Building B completely isolated from the core — no routing,
             no DHCP for VLAN 40

  Fault D — Layer 3 — R3 OSPF config
    Symptom: R2–R3 OSPF neighbour absent — 192.168.30.0/24 missing from
             all routing tables
    Impact:  Building C (Remote) completely unreachable from Buildings A
             and B

  Fault E — Layer 3 — R1 OSPF config
    Symptom: Ping from Building A to Building C times out — return path
             missing
    Impact:  Building A cannot reach Building C even once R2–R3 OSPF is
             fixed

3. Phase 1 — Initial Assessment (What Is and Isn't Working)

Before touching any configuration, spend five minutes mapping out exactly what is and is not working. This prevents fixing things that are not broken and establishes a baseline for verifying that each fix actually resolves something.

Step 1 — Symptom Inventory from Users

  User reports received after maintenance window:

  1. PC_MGMT (Building A, VLAN 10):
     "Internet is slow and file transfers keep dropping"
     → Partial connectivity — not a complete outage

  2. ENG1, ENG2 (Building B, VLAN 30):
     "No network at all — showing 169.254.x.x address"
     → DHCP failure — complete L2/L3 outage

  3. ENG3 (Building B, VLAN 40):
     "No network at all — cannot ping anything"
     → Complete outage — different VLAN from ENG1/ENG2

  4. REM1, REM2 (Building C, VLAN 30):
     "Cannot reach anything in Building A or B"
     → Complete L3 outage from remote site

  5. IT Manager:
     "Buildings A and B can talk to each other but not Building C"
     → Confirms L3 routing issue toward R3/Building C
  
The symptom inventory immediately reveals a pattern. Building A has partial degradation (not a complete outage, so Layer 1 is suspected). Building B has a complete outage affecting two different VLANs, which suggests a trunk or switch-level problem on top of any per-VLAN issue. Building C is completely isolated. Mapping symptoms to layers before opening any CLI prevents the common mistake of diving straight into routing tables when the fault is actually at Layer 1 or 2.
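The 169.254.x.x clue alone pins ENG1 and ENG2's symptom to a DHCP failure: that is the APIPA/link-local range a host self-assigns when no DHCP offer arrives. As a minimal illustration (a Python sketch using only the standard library, not part of the lab devices, and with a classification function of our own invention), the same first-pass triage can be expressed with the `ipaddress` module:

```python
import ipaddress

def classify_host_symptom(ip: str) -> str:
    """First-pass triage of a host-reported address, as in the symptom
    inventory above. 169.254.0.0/16 (APIPA) means the host self-assigned
    because no DHCP offer ever reached it."""
    addr = ipaddress.ip_address(ip)
    if addr.is_link_local:
        # No DHCP offer arrived — suspect the L2 path to the relay/server
        return "DHCP failure - suspect L2 path to relay/server"
    if addr.is_private:
        # Host is addressed — if still unreachable, suspect L3 or higher
        return "Host addressed - suspect L3 or higher if unreachable"
    return "Unexpected address - check configuration"

print(classify_host_symptom("169.254.12.7"))    # ENG1's reported address
print(classify_host_symptom("192.168.10.10"))   # PC_MGMT's address
```

The same logic is what an experienced engineer applies mentally when a user reads out a 169.254 address over the phone.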

Step 2 — Ping Map from R1 (Core Router)

! ── Test reachability from the core outward ──────────────────────
NetsTuts_R1#ping 192.168.10.1 source Lo0      ! VLAN 10 SVI / own interface
!!!!!   ← L3 healthy to own VLAN 10 interface

NetsTuts_R1#ping 192.168.10.10 source Gi0/1   ! PC_MGMT (VLAN 10 host)
!!!!!   ← PC_MGMT reachable at L3 — the L1 fault degrades, it does not block

NetsTuts_R1#ping 192.168.30.1 source Gi0/0    ! R3 VLAN 30 gateway
.....   ← Building C UNREACHABLE

NetsTuts_R1#ping 10.0.12.2 source Gi0/0       ! R2 WAN interface
!!!!!   ← R1–R2 link healthy

NetsTuts_R1#ping 10.0.23.6 source Gi0/0       ! R3 WAN interface
.....   ← R3 unreachable from R1

NetsTuts_R2#ping 10.0.23.6 source Gi0/2       ! R3 WAN from R2
!!!!!   ← R2–R3 link physically healthy (L1/L2 OK between R2 and R3)

NetsTuts_R2#show ip ospf neighbor
! ── Empty for R3 — OSPF not formed to R3 ─────────────────────────
Neighbor ID  Pri  State     Dead Time   Address      Interface
1.1.1.1       1  FULL/BDR  00:00:38    10.0.12.1    GigabitEthernet0/0
! ── R1–R2 OSPF is healthy; R2–R3 OSPF is absent ─────────────────
  
The ping map from the core router provides rapid triage. R1–R2 is healthy at L1, L2, L3, and OSPF. The R2–R3 physical link is up (ping succeeds) but OSPF is absent, a clear Layer 3 OSPF fault. R1 cannot reach R3 at all, suggesting either that the OSPF fault prevents route learning or that there is an additional routing fault on R1. We have confirmed the Layer 3 faults exist. Now we work bottom-up: fix Layer 1 first, then Layer 2, then Layer 3.

4. Phase 2 — Fix Fault A: Duplex Mismatch on SW1 Fa0/1

PC_MGMT reports slow performance. SW1 Fa0/1 connects PC_MGMT to the network. We investigate Layer 1 first — before touching any VLAN or routing configuration. See Troubleshooting Layer 1 for a complete guide to physical layer diagnostics.

Step 1 — Check Interface Error Counters

NetsTuts_SW1#show interfaces FastEthernet0/1
FastEthernet0/1 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 0c1a.2b3c.0101
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
     reliability 240/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100BaseTX
  ...
  5 minute input rate 892000 bits/sec, 743 packets/sec
  5 minute output rate 84000 bits/sec, 72 packets/sec
     1284733 packets input, 186274388 bytes, 0 no buffer
     Received 0 broadcasts (0 multicasts)
     0 runts, 0 giants, 0 throttles
     14822 input errors, 14822 CRC
     0 frame, 0 overrun, 0 ignored
     0 watchdog, 0 multicast, 0 pause input
     1047291 packets output, 84719432 bytes, 0 underruns
     0 output errors, 3291 collisions, 14 interface resets
     0 late collision, 0 deferred
  
The interface is up/up with 14,822 CRC errors and 3,291 collisions, but zero late collisions. Be careful interpreting this: late collisions are the signature on the half-duplex side of a duplex mismatch. This port reports Full-duplex, 100Mb/s, and the full-duplex side of a mismatch presents differently — as CRC errors, because frames the half-duplex peer aborts mid-transmission arrive truncated with a bad FCS. Before blaming the cable, check what the PC_MGMT NIC actually negotiated.

Step 2 — Check CDP for Peer Duplex

NetsTuts_SW1#show cdp neighbors FastEthernet0/1 detail
-------------------------
Device ID: PC_MGMT
Entry address(es):
  IP address: 192.168.10.10
Platform: Workstation,  Capabilities: Host
Interface: FastEthernet0/1,  Port ID (outgoing port): NIC
Duplex: half                ← PC_MGMT NIC is HALF duplex
  
CDP reveals the problem: SW1 Fa0/1 auto-negotiated to Full-duplex while the PC_MGMT NIC is running Half-duplex. This is the duplex mismatch. The switch believes the link is full-duplex and transmits whenever it has frames queued; the PC_MGMT NIC still runs CSMA/CD, treats the switch's transmissions during its own sends as collisions, and aborts those frames. The aborted frames arrive at the switch truncated with a bad FCS, which is the climbing CRC counter, and because the NIC's collisions all fall inside the 512-bit slot time (early collisions, not late), no late collisions are recorded. The fix: force both ends to full-duplex 100Mb/s.
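The early-versus-late distinction comes from CSMA/CD's slot time of 512 bit times (64 bytes). A small Python sketch (illustrative arithmetic only, not lab configuration) makes the numbers concrete:

```python
def collision_type(bytes_sent: int) -> str:
    """Classify an Ethernet collision by how far into the frame it
    occurred. CSMA/CD's slot time is 512 bit times = 64 bytes: a
    collision inside that window is a normal 'early' collision; beyond
    it, a late collision (the half-duplex side's mismatch symptom)."""
    SLOT_BYTES = 512 // 8   # 64 bytes
    return "early" if bytes_sent <= SLOT_BYTES else "late"

# 512 bit times at 100 Mb/s: bits / (Mbit/s) gives microseconds
slot_time_us = 512 / 100
print(slot_time_us)                       # 5.12 microseconds
print(collision_type(32))                 # early — within the slot time
print(collision_type(300))                # late — past the 64-byte window
```

At 100 Mb/s the whole collision window is just 5.12 µs, which is why late collisions on a correctly built link are so abnormal: they mean the far end started transmitting long after it should have detected the carrier.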

Step 3 — Fix Duplex Mismatch and Verify

NetsTuts_SW1(config)#interface FastEthernet0/1
NetsTuts_SW1(config-if)#duplex full
NetsTuts_SW1(config-if)#speed 100
NetsTuts_SW1(config-if)#exit

! ── Clear counters and monitor for 60 seconds ────────────────────
NetsTuts_SW1#clear counters FastEthernet0/1
Clear "show interface" counters on this interface [confirm] y

! ── Wait 60 seconds then recheck ─────────────────────────────────
NetsTuts_SW1#show interfaces FastEthernet0/1 | include error|collision
     0 input errors, 0 CRC
     0 output errors, 0 collisions, 0 interface resets
  
After forcing full-duplex 100Mb/s, the error and collision counters are clean. PC_MGMT should also be set to forced full-duplex 100Mb/s in its NIC settings — never rely on auto-negotiation when one side is forced. Mismatched auto-negotiation (one forced, one auto) is the most common cause of duplex problems. Layer 1 is now confirmed healthy for Fa0/1. Move to Layer 2.
Fault A Fixed ✓ — SW1 Fa0/1 duplex mismatch resolved; error counters clean. Layer 1 verified. Proceed to Layer 2 diagnosis. For more duplex and physical layer diagnostics, see Troubleshooting Layer 1.

5. Phase 3 — Fix Fault B: VLAN 30 Missing from SW2

ENG1 and ENG2 (SW2 Fa0/1 and Fa0/2, VLAN 30) are showing 169.254.x.x. We check Layer 2 on SW2 before suspecting DHCP — a missing VLAN makes ports inactive regardless of whether the DHCP server is healthy. See Troubleshooting Layer 2 for the full VLAN and trunk diagnosis guide.

Step 1 — Check VLAN Database on SW2

NetsTuts_SW2#show vlan brief

VLAN Name                 Status    Ports
---- --------------------- --------- --------------------------------
1    default               active    Gi0/1
40   IoT-Lab               active    Fa0/3
1002 fddi-default          act/unsup
1003 token-ring-default    act/unsup
1004 fddinet-default       act/unsup
1005 trnet-default         act/unsup
  
VLAN 30 is completely absent from SW2's VLAN database. Notice that Fa0/1 and Fa0/2 (ENG1 and ENG2) do not appear under any VLAN — they are inactive because they reference a VLAN that does not exist. VLAN 40 is present (ENG3 on Fa0/3 should be fine from a VLAN perspective), but we will investigate ENG3's outage separately when we address Fault C. The fix for ENG1 and ENG2 is to create VLAN 30 on SW2.

Step 2 — Confirm Port Status

NetsTuts_SW2#show interfaces FastEthernet0/1 switchport | include VLAN
Access Mode VLAN: 30 (Inactive)

NetsTuts_SW2#show interfaces FastEthernet0/2 switchport | include VLAN
Access Mode VLAN: 30 (Inactive)
  
Both Fa0/1 and Fa0/2 confirm Access Mode VLAN: 30 (Inactive) — the ports are correctly configured for VLAN 30 but inactive because the VLAN does not exist. No frames are forwarded through these ports until VLAN 30 is created. The port configuration itself is correct — only the VLAN database entry is missing.
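When there are many access ports to audit, the (Inactive) flag is easy to extract mechanically. A hypothetical Python sketch (the helper `inactive_access_vlan` is our own, not an IOS tool; it scans captured `show interfaces ... switchport` output):

```python
import re

# Sample output captured from SW2 in Step 2 above
SHOW_SWITCHPORT = """\
Name: Fa0/1
Switchport: Enabled
Administrative Mode: static access
Access Mode VLAN: 30 (Inactive)
"""

def inactive_access_vlan(output: str):
    """Return the access VLAN ID if the port references a VLAN that is
    missing from the VLAN database — IOS flags this as '(Inactive)'.
    Returns None if the port's VLAN is healthy."""
    m = re.search(r"Access Mode VLAN: (\d+) \(Inactive\)", output)
    return int(m.group(1)) if m else None

print(inactive_access_vlan(SHOW_SWITCHPORT))              # 30
print(inactive_access_vlan("Access Mode VLAN: 40 (IoT-Lab)"))  # None
```

On a real switch the equivalent one-liner is `show interfaces switchport | include Inactive`.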

Step 3 — Create VLAN 30 and Verify

NetsTuts_SW2(config)#vlan 30
NetsTuts_SW2(config-vlan)#name Engineering
NetsTuts_SW2(config-vlan)#exit

NetsTuts_SW2#show vlan brief

VLAN Name                 Status    Ports
---- --------------------- --------- --------------------------------
1    default               active    Gi0/1
30   Engineering           active    Fa0/1, Fa0/2
40   IoT-Lab               active    Fa0/3

NetsTuts_SW2#show interfaces FastEthernet0/1 switchport | include VLAN
Access Mode VLAN: 30 (Engineering)
  
VLAN 30 is now active on SW2 with Fa0/1 and Fa0/2 correctly listed. The (Inactive) flag is gone, replaced by the VLAN name (Engineering). ENG1 and ENG2 can now send and receive Layer 2 frames. However, they still cannot reach the DHCP server because Fault C (the broken trunk between SW2 and SW3) prevents their frames from reaching the core network. We fix the trunk next. See VLAN Creation and Management for the complete VLAN setup lab.
Fault B Fixed ✓ — VLAN 30 created on SW2. ENG1/ENG2 ports now active. Trunk must be fixed before DHCP will work for these hosts.

6. Phase 3 (Continued) — Fix Fault C: SW2–SW3 Trunk Not Forming

ENG3 (VLAN 40) also has no connectivity despite VLAN 40 existing on SW2. This points to a connectivity problem between SW2 and SW3 — not a VLAN database issue. We investigate the trunk. See Troubleshooting Layer 2 for the full trunk diagnosis methodology.

Step 1 — Check Trunk Status on SW2

NetsTuts_SW2#show interfaces trunk
! ── Empty — no trunk interfaces ──────────────────────────────────

NetsTuts_SW2#show interfaces GigabitEthernet0/1 switchport
Name: Gi0/1
Switchport: Enabled
Administrative Mode: dynamic auto
Operational Mode: static access
Negotiation of Trunking: On
Access Mode VLAN: 1 (default)
  
An empty show interfaces trunk confirms no trunks are active on SW2. The Gi0/1 uplink to SW3 shows Administrative Mode: dynamic auto but Operational Mode: static access — it has settled into access mode. This is the auto/auto DTP deadlock: both SW2 and SW3 are passively waiting for the other to initiate trunk negotiation. Neither sends active DTP solicitations, so the trunk never forms and all inter-switch traffic is blocked.

Step 2 — Confirm on SW3

NetsTuts_SW3#show interfaces GigabitEthernet0/1 switchport
Name: Gi0/1
Switchport: Enabled
Administrative Mode: dynamic auto
Operational Mode: static access
  
Confirmed — SW3's Gi0/1 is also dynamic auto. Two auto ports facing each other will never trunk. The fix is to explicitly configure both ends as trunk with switchport nonegotiate to disable DTP and prevent future rogue negotiation.
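The DTP outcomes are worth memorising. This small Python sketch (an illustration of the standard DTP negotiation matrix, not a Cisco tool) encodes the combinations that matter here:

```python
def dtp_result(a: str, b: str) -> str:
    """Operational link mode produced by the two ends' administrative
    DTP modes. 'dynamic auto' only answers negotiation; 'dynamic
    desirable' and 'trunk' actively initiate it."""
    initiators = {"trunk", "dynamic desirable"}
    if a == "access" or b == "access":
        # access wins; note trunk+access is actually a broken mismatch
        return "access"
    if a in initiators or b in initiators:
        return "trunk"
    # auto + auto: both sides wait passively — nobody initiates,
    # so the link settles into access mode (the Fault C deadlock)
    return "access"

print(dtp_result("dynamic auto", "dynamic auto"))       # access — deadlock
print(dtp_result("dynamic auto", "dynamic desirable"))  # trunk
print(dtp_result("trunk", "dynamic auto"))              # trunk
```

The matrix explains why `switchport mode trunk` on either end would have been enough to bring the trunk up, and why explicitly configuring both ends (plus `nonegotiate`) is the recommended, deterministic fix.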

Step 3 — Fix Both Ends and Verify

! ── Fix on SW2 ───────────────────────────────────────────────────
NetsTuts_SW2(config)#interface GigabitEthernet0/1
NetsTuts_SW2(config-if)#switchport mode trunk
NetsTuts_SW2(config-if)#switchport nonegotiate
NetsTuts_SW2(config-if)#exit

! ── Fix on SW3 ───────────────────────────────────────────────────
NetsTuts_SW3(config)#interface GigabitEthernet0/1
NetsTuts_SW3(config-if)#switchport mode trunk
NetsTuts_SW3(config-if)#switchport nonegotiate
NetsTuts_SW3(config-if)#exit

! ── Verify trunk formed ───────────────────────────────────────────
NetsTuts_SW2#show interfaces trunk

Port      Mode   Encap   Status    Native vlan
Gi0/1     on     802.1q  trunking  1

Port      Vlans allowed on trunk
Gi0/1     1-4094

Port      Vlans allowed and active in the management domain
Gi0/1     1,30,40

Port      Vlans in spanning tree forwarding state and not pruned
Gi0/1     1,30,40
  
The trunk is now up, with VLANs 1, 30, and 40 all active and forwarding across the SW2–SW3 link. Note that after fixing Fault B (creating VLAN 30 on SW2), VLAN 30 immediately appears in the "Vlans allowed and active" section of the trunk output, which confirms that the order of fixes mattered: had we fixed the trunk first without creating VLAN 30, that section would have shown only VLANs 1 and 40. Now that both Layer 2 faults are fixed, ENG1, ENG2, and ENG3 should be able to reach the DHCP server, provided the Layer 3 routing is healthy. See Trunk Port Configuration for the full trunk setup lab.

Step 4 — Re-Test L2 Connectivity End to End

! ── ENG3 on VLAN 40 — its default gateway is R2's VLAN 40 ────────
! ── subinterface (router-on-a-stick over the trunk to SW3) ───────
NetsTuts_R2#ping 192.168.40.1
!!!!!
! ── R2's own VLAN 40 gateway address responds — subinterface up ──

! ── Check that DHCP relay is configured on R2's VLAN subinterfaces
NetsTuts_R2#show running-config interface GigabitEthernet0/1.40
interface GigabitEthernet0/1.40
 encapsulation dot1Q 40
 ip address 192.168.40.1 255.255.255.0
 ip helper-address 192.168.99.10
  
R2's VLAN 40 subinterface has the correct ip helper-address. With the trunk now operational, DHCP Discovers from ENG1, ENG2, and ENG3 will reach R2 via SW2→SW3→R2, and R2 will relay them as unicast to SRV1 at 192.168.99.10. However, for the relay to work end to end, the routing between R2 and SRV1's subnet must also be functional — we still need to confirm OSPF is working (or will be, after we fix Fault D).
Fault C Fixed ✓ — SW2–SW3 trunk operational. VLANs 30 and 40 now traverse the inter-switch link. Layer 2 is confirmed healthy throughout. Move to Layer 3.

7. Phase 4 — Fix Fault D: OSPF Area Mismatch on R2–R3

With Layer 1 and Layer 2 confirmed healthy, we address the Layer 3 OSPF fault. R2 has no OSPF neighbour for R3 — 192.168.30.0/24 (Building C) is absent from all routing tables. See Troubleshooting OSPF Neighbour Adjacency for the full OSPF diagnosis guide.

Step 1 — Confirm OSPF State on R2

NetsTuts_R2#show ip ospf neighbor

Neighbor ID  Pri  State     Dead Time   Address      Interface
1.1.1.1       1  FULL/BDR  00:00:36    10.0.12.1    GigabitEthernet0/0
! ── Only R1 — R3 is absent ───────────────────────────────────────

NetsTuts_R2#ping 10.0.23.6
!!!!!
! ── Physical reachability to R3 confirmed — fault is OSPF-level ──
  

Step 2 — Compare OSPF Interface Config on R2 and R3

NetsTuts_R2#show ip ospf interface GigabitEthernet0/2
GigabitEthernet0/2 is up, line protocol is up
  Internet Address 10.0.23.5/30, Area 0, Attached via Network Statement
  Process ID 1, Router ID 2.2.2.2, Network Type BROADCAST, Cost: 1
  Timer intervals configured, Hello 10, Dead 40

NetsTuts_R3#show ip ospf interface GigabitEthernet0/0
GigabitEthernet0/0 is up, line protocol is up
  Internet Address 10.0.23.6/30, Area 1, Attached via Network Statement
  Process ID 1, Router ID 3.3.3.3, Network Type BROADCAST, Cost: 1
  Timer intervals configured, Hello 10, Dead 40
  
The area mismatch is confirmed — R2's Gi0/2 is in Area 0 and R3's Gi0/0 is in Area 1. All other parameters match (timers, network type). This is the sole cause of the OSPF failure on this link. The fix is to correct R3's network statement to use area 0.
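The side-by-side comparison in Step 2 reduces to a parameter diff: area, hello timer, dead timer, network type, and subnet must all match for the adjacency to form. A Python sketch of that check (the field names are our own shorthand for the `show ip ospf interface` values):

```python
def ospf_mismatches(a: dict, b: dict) -> list:
    """Compare the per-interface parameters that must agree for an OSPF
    adjacency; return the names of any that differ."""
    must_match = ("area", "hello", "dead", "network_type", "subnet")
    return [k for k in must_match if a[k] != b[k]]

# Values read from the two 'show ip ospf interface' outputs above
r2_gi02 = {"area": 0, "hello": 10, "dead": 40,
           "network_type": "BROADCAST", "subnet": "10.0.23.4/30"}
r3_gi00 = {"area": 1, "hello": 10, "dead": 40,
           "network_type": "BROADCAST", "subnet": "10.0.23.4/30"}

print(ospf_mismatches(r2_gi02, r3_gi00))   # ['area'] — the sole fault
```

Everything except the area agrees, which is exactly what the CLI comparison showed: fix the area and the adjacency should come up without touching anything else.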

Step 3 — Fix and Verify OSPF Adjacency

NetsTuts_R3(config)#router ospf 1
NetsTuts_R3(config-router)#no network 10.0.23.0 0.0.0.3 area 1
NetsTuts_R3(config-router)#no network 192.168.30.0 0.0.0.255 area 1
NetsTuts_R3(config-router)#no network 3.3.3.3 0.0.0.0 area 1
NetsTuts_R3(config-router)#network 10.0.23.0 0.0.0.3 area 0
NetsTuts_R3(config-router)#network 192.168.30.0 0.0.0.255 area 0
NetsTuts_R3(config-router)#network 3.3.3.3 0.0.0.0 area 0
NetsTuts_R3(config-router)#exit

! ── Verify adjacency forms ────────────────────────────────────────
NetsTuts_R2#show ip ospf neighbor

Neighbor ID  Pri  State     Dead Time   Address      Interface
1.1.1.1       1  FULL/BDR  00:00:38    10.0.12.1    GigabitEthernet0/0
3.3.3.3       1  FULL/BDR  00:00:36    10.0.23.6    GigabitEthernet0/2

! ── Check OSPF routes now appear ─────────────────────────────────
NetsTuts_R2#show ip route ospf
O     192.168.30.0/24 [110/2] via 10.0.23.6, 00:00:08, Gi0/2
O     3.3.3.3/32 [110/2] via 10.0.23.6, 00:00:08, Gi0/2
  
R2 now has full adjacency with both R1 (FULL/BDR) and R3 (FULL/BDR). The OSPF route for 192.168.30.0/24 appears in R2's routing table. However, for end-to-end connectivity from Building A (R1) to Building C (R3), R1 also needs a route to 192.168.30.0/24. Does OSPF propagate it? Let's check — this directly leads to Fault E.
Fault D Fixed ✓ — R2–R3 OSPF adjacency established. 192.168.30.0/24 now in R2's routing table. Investigating R1's routing table next.

8. Phase 4 (Continued) — Fix Fault E: Missing Return Route on R1

With OSPF now healthy between R2 and R3, we check whether R1 has a route to Building C. The IT Manager reported Buildings A and B can talk to each other but not Building C — even after fixing OSPF between R2 and R3. See Troubleshooting Layer 3 for the full routing diagnosis guide.

Step 1 — Check R1's Routing Table

NetsTuts_R1#show ip route
Codes: L - local, C - connected, S - static, O - OSPF ...

Gateway of last resort is not set

      1.0.0.0/32 is subnetted, 1 subnets
C        1.1.1.1/32 is directly connected, Loopback0
      10.0.0.0/30 is subnetted, 1 subnet
C        10.0.12.0/30 is directly connected, GigabitEthernet0/0
L        10.0.12.1/32 is directly connected, GigabitEthernet0/0
      192.168.10.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.10.0/24 is directly connected, GigabitEthernet0/1
L        192.168.10.1/32 is directly connected, GigabitEthernet0/1
      192.168.20.0/24 is variably subnetted, 2 subnets, 2 masks
C        192.168.20.0/24 is directly connected, GigabitEthernet0/2
L        192.168.20.1/32 is directly connected, GigabitEthernet0/2
O        2.2.2.2/32 [110/2] via 10.0.12.2, 00:05:02, GigabitEthernet0/0
S        192.168.30.0/24 [1/0] via 10.0.12.2
! ── A static forward route to Building C exists via R2 ───────────
  
R1 does have a forward route to 192.168.30.0/24 (Building C): a static route via R2 (10.0.12.2) that was added during maintenance. Yet pings from Building A to Building C still time out. When the forward path exists but traffic fails, the next suspect is the return path: do R3's replies toward 192.168.10.x and 192.168.20.x sources have a route back? Before checking R3, let us look at what R1 itself advertises into OSPF.

Step 2 — Check OSPF Configuration on R1

NetsTuts_R1#show running-config | section router ospf
router ospf 1
 network 10.0.12.0 0.0.0.3 area 0
 network 1.1.1.1 0.0.0.0 area 0
! ── The WAN link and loopback are in OSPF — but 192.168.10.0/24 ──
! ── and 192.168.20.0/24 (Building A's LANs) are NOT advertised ───

NetsTuts_R1#show ip ospf neighbor
Neighbor ID  Pri  State     Dead Time   Address      Interface
2.2.2.2       1  FULL/DR   00:00:37    10.0.12.2    GigabitEthernet0/0
! ── R1–R2 adjacency is FULL, so R1 learns R2's and R3's routes ───
! ── but R2 and R3 learn nothing about Building A's subnets ───────
  

Step 3 — Trace the Actual Problem

! ── Forward path: R1's static route hands traffic for ─────────────
! ── 192.168.30.0/24 to R2, and R2 routes it on to R3 via OSPF ────
NetsTuts_R1#show running-config | include ip route
ip route 192.168.30.0 255.255.255.0 10.0.12.2

! ── Return path: does R3 have routes back to Building A? ─────────
NetsTuts_R3#show ip route 192.168.10.0
% Network not in table

NetsTuts_R3#show ip route 192.168.20.0
% Network not in table
! ── R3 has NO routes back to Building A's subnets ────────────────
! ── Pings from Building A reach Building C, but R3 drops the ─────
! ── replies — there is no route for the 192.168.10.x sources ─────
! ── This is Fault E: R1's LAN networks were removed from OSPF ────
  
Fault E is more nuanced than a simple missing static route — R1's OSPF configuration was partially broken during maintenance. The stub networks (192.168.10.0/24 and 192.168.20.0/24) were removed from R1's OSPF network statements, so R2 and R3 cannot learn routes back to Building A via OSPF. R3 has no return route to 192.168.10.0/24 or 192.168.20.0/24. Ping from Building A to Building C times out because replies from R3 have no path back. The static route on R1 (ip route 192.168.30.0 255.255.255.0 10.0.12.2) handles the forward path from R1 to R3, but the return path is broken.

Step 4 — Fix: Restore R1 OSPF Network Statements

! ── Add missing stub network statements back to R1's OSPF ─────────
NetsTuts_R1(config)#router ospf 1
NetsTuts_R1(config-router)#network 192.168.10.0 0.0.0.255 area 0
NetsTuts_R1(config-router)#network 192.168.20.0 0.0.0.255 area 0
NetsTuts_R1(config-router)#passive-interface GigabitEthernet0/1
NetsTuts_R1(config-router)#passive-interface GigabitEthernet0/2
NetsTuts_R1(config-router)#exit

! ── Remove the static route (OSPF will handle it now) ─────────────
NetsTuts_R1(config)#no ip route 192.168.30.0 255.255.255.0 10.0.12.2
NetsTuts_R1(config)#exit

! ── Wait for OSPF LSA propagation (up to 30 seconds) ─────────────
! ── Verify R1's routing table ────────────────────────────────────
NetsTuts_R1#show ip route ospf
O     192.168.30.0/24 [110/3] via 10.0.12.2, 00:00:14, Gi0/0
O     10.0.23.0/30 [110/2] via 10.0.12.2, 00:00:14, Gi0/0
O     3.3.3.3/32 [110/3] via 10.0.12.2, 00:00:14, Gi0/0

! ── Verify R3 now has return routes to Building A ─────────────────
NetsTuts_R3#show ip route ospf
O     192.168.10.0/24 [110/3] via 10.0.23.5, 00:00:14, Gi0/0
O     192.168.20.0/24 [110/3] via 10.0.23.5, 00:00:14, Gi0/0
O     10.0.12.0/30 [110/2] via 10.0.23.5, 00:00:14, Gi0/0
O     1.1.1.1/32 [110/3] via 10.0.23.5, 00:00:14, Gi0/0
  
Both R1 and R3 now have complete routing tables via OSPF. R1 learns 192.168.30.0/24 from R3 via R2. R3 learns the Building A subnets (192.168.10.0/24 and 192.168.20.0/24) from R1 via R2. The static route on R1 was removed — relying on OSPF is cleaner and self-healing. All five faults are now fixed.
Fault E Fixed ✓ — R1 OSPF network statements restored. Bidirectional OSPF routes propagated across all three routers. Layer 3 confirmed healthy.
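For scripted verification, the `[110/3]` fields (administrative distance / metric) in routing-table lines like those above can be pulled apart with a small parser — a sketch that assumes the exact column layout shown in this lab's outputs:

```python
import re

# Matches lines of the form: O  <prefix> [<AD>/<metric>] via <next-hop>, ...
ROUTE_RE = re.compile(
    r"O\s+(?P<prefix>\S+)\s+\[(?P<ad>\d+)/(?P<metric>\d+)\]"
    r"\s+via\s+(?P<nexthop>\S+),"
)

def parse_ospf_route(line: str) -> dict:
    """Parse one `show ip route ospf` line into its fields (sketch)."""
    m = ROUTE_RE.search(line)
    if not m:
        raise ValueError(f"not an OSPF route line: {line!r}")
    d = m.groupdict()
    d["ad"], d["metric"] = int(d["ad"]), int(d["metric"])
    return d

r = parse_ospf_route("O     192.168.30.0/24 [110/3] via 10.0.12.2, 00:00:14, Gi0/0")
print(r)  # {'prefix': '192.168.30.0/24', 'ad': 110, 'metric': 3, 'nexthop': '10.0.12.2'}
```

The AD of 110 identifies the route as OSPF; the metric of 3 is the cumulative cost along R1 → R2 → R3.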

9. Phase 5 — Full End-to-End Verification

All five faults have been fixed individually. Now we run a comprehensive end-to-end verification to confirm that the entire network is operational — not just the individual fault points. This is the final step that confirms all fixes are correct and no new faults were introduced.

Layer 1 — All Interfaces Clean

NetsTuts_SW1#show interfaces FastEthernet0/1 | include errors|collision
     0 input errors, 0 CRC
     0 output errors, 0 collisions, 0 interface resets
! ── Fa0/1: clean ─────────────────────────────────────────────────

NetsTuts_SW1#show ip interface brief | include down
! ── No down interfaces ────────────────────────────────────────────
  

Layer 2 — VLANs and Trunks Healthy

NetsTuts_SW2#show vlan brief
VLAN Name            Status    Ports
1    default          active    Gi0/1
30   Engineering      active    Fa0/1, Fa0/2
40   IoT-Lab          active    Fa0/3

NetsTuts_SW2#show interfaces trunk
Port  Mode  Encap   Status    Native vlan
Gi0/1 on    802.1q  trunking  1

Port  Vlans allowed and active in the management domain
Gi0/1 1,30,40

Port  Vlans in spanning tree forwarding state and not pruned
Gi0/1 1,30,40
! ── All VLANs active and forwarding ──────────────────────────────
  

Layer 3 — All OSPF Neighbours FULL, Complete Routing Tables

NetsTuts_R1#show ip ospf neighbor
Neighbor ID  Pri  State     Dead Time   Address      Interface
2.2.2.2       1  FULL/DR   00:00:36    10.0.12.2    GigabitEthernet0/0

NetsTuts_R2#show ip ospf neighbor
Neighbor ID  Pri  State     Dead Time   Address      Interface
1.1.1.1       1  FULL/BDR  00:00:38    10.0.12.1    GigabitEthernet0/0
3.3.3.3       1  FULL/BDR  00:00:35    10.0.23.6    GigabitEthernet0/2

NetsTuts_R3#show ip ospf neighbor
Neighbor ID  Pri  State     Dead Time   Address      Interface
2.2.2.2       1  FULL/DR   00:00:37    10.0.23.5    GigabitEthernet0/0
  

End-to-End Ping Matrix — All Paths

! ── Building A (R1) → Building C (R3 stub) ───────────────────────
NetsTuts_R1#ping 192.168.30.1 source GigabitEthernet0/1
!!!!!   ← Building A → Building C: SUCCESS

! ── Building C (R3) → Building A ────────────────────────────────
NetsTuts_R3#ping 192.168.10.1 source GigabitEthernet0/1
!!!!!   ← Building C → Building A: SUCCESS

! ── Building A → Building B (R2 stub) ────────────────────────────
NetsTuts_R1#ping 192.168.20.1 source GigabitEthernet0/1
!!!!!   ← Building A → Building B: SUCCESS (was already working)

! ── Cross-building via traceroute ────────────────────────────────
NetsTuts_R1#traceroute 192.168.30.1 source GigabitEthernet0/1
  1  10.0.12.2  1 msec
  2  10.0.23.6  2 msec
  3  192.168.30.1  3 msec
! ── Three clean hops: R1 → R2 → R3 → destination ────────────────
  

Layer 4 — DHCP Clients Getting Addresses

! ── Verify DHCP bindings for all VLANs ───────────────────────────
NetsTuts_R1#show ip dhcp binding
! ── Empty — R1 is a relay only; leases for VLANs 10/20 live on SRV1 ──

NetsTuts_R2#show ip dhcp binding
! ── R2 is relay only — check SRV1 directly ───────────────────────
NetsTuts_SRV1#show ip dhcp binding
IP address       Hardware address    Lease expiration    Type
192.168.10.50   0100.aabb.ccdd.0001  Mar 08 2026 08:00  Automatic
192.168.20.30   0100.aabb.ccdd.0002  Mar 08 2026 09:00  Automatic
192.168.30.15   0100.aabb.ccdd.0003  Mar 08 2026 10:00  Automatic
192.168.30.16   0100.aabb.ccdd.0004  Mar 08 2026 10:05  Automatic
192.168.40.20   0100.aabb.ccdd.0005  Mar 08 2026 10:10  Automatic
! ── Clients across all four VLANs have DHCP addresses ───────────
  
All five layers of verification pass. DHCP clients in all VLANs (10, 20, 30, 40) have received addresses from SRV1. All OSPF neighbours are FULL. All routing table entries are present. All end-to-end pings succeed. The network is fully restored.

Fault Summary — Before and After

Fault A — Layer 1
  Root cause:   SW1 Fa0/1 full-duplex vs PC_MGMT half-duplex NIC
  Fix applied:  duplex full + speed 100 on SW1 Fa0/1
  Verification: CRC and collision counters at zero after clear counters. See Troubleshooting Layer 1.

Fault B — Layer 2
  Root cause:   VLAN 30 missing from SW2 VLAN database
  Fix applied:  vlan 30 / name Engineering on SW2
  Verification: Fa0/1 and Fa0/2 show Active for VLAN 30 in show vlan brief

Fault C — Layer 2
  Root cause:   SW2 Gi0/1 and SW3 Gi0/1 both dynamic auto — trunk never forms
  Fix applied:  switchport mode trunk + switchport nonegotiate on both ends
  Verification: show interfaces trunk shows trunking with VLANs 30 and 40 active. See Troubleshooting Layer 2.

Fault D — Layer 3
  Root cause:   R3 OSPF network statements in area 1 instead of area 0
  Fix applied:  no network ... area 1 then network ... area 0 on R3
  Verification: R2 shows R3 as FULL/BDR in show ip ospf neighbor. See Troubleshooting OSPF.

Fault E — Layer 3
  Root cause:   R1 OSPF missing the 192.168.10.0/24 and 192.168.20.0/24 network statements — no return routes to Building A
  Fix applied:  Restored network 192.168.10.0 0.0.0.255 area 0 and network 192.168.20.0 0.0.0.255 area 0 on R1
  Verification: R3 shows 192.168.10.0/24 and 192.168.20.0/24 as OSPF routes. End-to-end ping succeeds. See Troubleshooting Layer 3.

Key Points & Exam Tips

  • Always work bottom-up. Fix Layer 1 before diagnosing Layer 2. Fix Layer 2 before diagnosing Layer 3. A broken trunk looks like a routing failure. A duplex mismatch looks like a VLAN problem. Lower-layer faults generate misleading symptoms at higher layers. See Troubleshooting Layer 1.
  • Verify each fix before moving on. After fixing a fault, run the relevant verification command (show interfaces, show vlan brief, show ip ospf neighbor) before moving to the next fault. One fix can unmask a new fault that was previously hidden.
  • Map symptoms to layers first. Before opening any CLI session, collect user reports and map them to the OSI model. "No IP address" → Layer 2 (VLAN/trunk) or Layer 4 (DHCP). "Slow performance with retransmits" → Layer 1 (duplex/errors). "Can reach some sites but not others" → Layer 3 (routing).
  • Ping with source addresses. Always use ping [dest] source [LAN-interface] on routers to simulate real traffic flows. A ping from the router's own management address may succeed when end-host traffic fails, giving false confidence.
  • Check both directions for routing. A ping from A to B requires routes in both directions. The most common oversight in multi-fault scenarios is fixing the forward route but forgetting the return route — resulting in timeouts rather than unreachables. See Troubleshooting Layer 3.
  • Use traceroute to locate the exact failure hop. When ping fails, traceroute identifies which hop is the last responsive one — immediately pointing to which router's routing table to investigate next.
  • On OSPF: the state where the neighbour stalls identifies the fault class. Empty = area/auth/passive/wrong-network-statement. EXSTART = MTU. FULL but no routes = network type mismatch. Flapping = timer mismatch. See Troubleshooting OSPF Neighbour Adjacency.
  • On VLAN: a port showing (Inactive) always means the VLAN does not exist in the database. An empty show interfaces trunk on an inter-switch link always means the trunk has not formed — check DTP modes. See Troubleshooting Layer 2.
  • Document every change in multi-fault scenarios. Write down what you changed, when, and what the result was. This prevents circular troubleshooting (undoing a correct fix because you forgot you made it) and provides an audit trail.
  • On the CCNA/CCNP exam: multi-fault scenarios are increasingly common. Know which symptom points to which layer, which single command most quickly confirms each layer is healthy (show ip interface brief, show vlan brief, show ip route), and which fixes are reversible (so you can undo a wrong change without making things worse).
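The "map symptoms to layers first" tip above can be captured as a simple lookup table — purely illustrative; the symptom strings and command lists are this sketch's own, not an exhaustive taxonomy:

```python
# Illustrative symptom-to-layer triage table (the mapping and wording
# are this sketch's own, drawn from the bullet list above).
TRIAGE = {
    "no ip address":          ("L2 VLAN/trunk or DHCP", ["show vlan brief", "show ip dhcp binding"]),
    "slow with retransmits":  ("L1 duplex/errors",      ["show interfaces"]),
    "some sites unreachable": ("L3 routing",            ["show ip route", "traceroute"]),
}

def triage(symptom: str) -> str:
    layer, commands = TRIAGE[symptom.lower()]
    return f"Suspect {layer}; start with: {', '.join(commands)}"

print(triage("No IP address"))
# → Suspect L2 VLAN/trunk or DHCP; start with: show vlan brief, show ip dhcp binding
```

The point is not the table itself but the discipline: decide which layer a symptom implicates before opening a CLI session.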
Troubleshooting Series. This scenario drew on techniques from all five preceding guides. For deeper coverage of any individual fault type, return to the dedicated guide: Troubleshooting Layer 1, Troubleshooting Layer 2, Troubleshooting Layer 3, Troubleshooting DHCP, and Troubleshooting OSPF Neighbour Adjacency. For OSPF configuration from scratch see OSPF Single-Area Configuration and OSPF Single-Area Lab. For ACL-related issues that appear after Layers 1–3 are confirmed healthy see Extended ACL Configuration. For DHCP server and DHCP relay configuration see those dedicated labs. For static routing and trunk port configuration see the respective labs.

TEST WHAT YOU LEARNED

1. In a multi-fault scenario, an engineer notices OSPF routes are missing from R1's table. He immediately runs debug ip ospf events on R1 and R2. After 20 minutes of analysis, he finds the OSPF configuration is correct — but the routes are still missing. What critical step did he skip, and what should he have done first?

Correct answer is B. This scenario illustrates one of the most common and time-wasting mistakes in network troubleshooting — jumping to a higher-layer protocol before confirming the lower layer is healthy. OSPF Hello packets are Layer 3 IP packets delivered over Layer 2 Ethernet. If the Layer 2 path between two OSPF routers is broken (a trunk not forming, a VLAN missing, STP blocking the path), OSPF Hellos never reach their destination — the OSPF configuration on both routers may be perfect, but the neighbour never forms because the packets cannot traverse Layer 2. The correct methodology is always Layer 1 → Layer 2 → Layer 3, verifying each layer before moving up. In this specific case: show interfaces trunk to confirm inter-switch trunks are active, show vlan brief to confirm the router's VLAN exists on the switch, and a simple ping between the two router interface IPs to confirm basic IP reachability before ever touching OSPF debug. See Troubleshooting Layer 2 and Troubleshooting OSPF.

2. After fixing a trunk between SW2 and SW3 in a multi-fault scenario, VLAN 30 hosts still cannot reach their default gateway. show interfaces trunk shows VLAN 30 in Section 2 (allowed) and Section 3 (active) but absent from Section 4 (forwarding). What is the most likely cause?

Correct answer is D. The four sections of show interfaces trunk work as progressive filters. A VLAN in Section 3 (allowed and active in the management domain) means it exists in the local VLAN database and is permitted on the trunk. A VLAN present in Section 3 but absent from Section 4 (forwarding state and not pruned) has only two possible causes: STP is blocking it on this specific port, or VTP pruning has removed it. Since we just fixed a trunk that was previously down, STP is the most likely cause — when a previously inactive trunk comes up, STP recalculates the topology for each VLAN. If the new trunk creates a loop, STP will block one of the ports to eliminate it. show spanning-tree vlan 30 immediately shows the port role and state — if the port shows as ALT (alternate) or shows BLK in the State column, STP is intentionally blocking it. See Troubleshooting Layer 2 for more STP diagnosis.
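The "progressive filters" model of the four show interfaces trunk sections can be expressed as set intersections — a sketch with hypothetical inputs matching the scenario in the question:

```python
def trunk_sections(allowed: set, vlan_db: set, stp_forwarding: set) -> dict:
    """Model the sections of `show interfaces trunk` as progressive set
    filters (a sketch; section numbering follows the question above)."""
    section2 = allowed                      # allowed on the trunk
    section3 = section2 & vlan_db           # ...and active in the VLAN database
    section4 = section3 & stp_forwarding    # ...and STP forwarding, not pruned
    return {"allowed": section2, "active": section3, "forwarding": section4}

# VLAN 30 is allowed and active, but STP is blocking it on this port:
s = trunk_sections(allowed={1, 30, 40}, vlan_db={1, 30, 40}, stp_forwarding={1, 40})
print(sorted(s["active"] - s["forwarding"]))   # → [30]  (the VLAN STP is blocking)
```

A VLAN that survives every intersection is actually carrying traffic; the first filter it falls out of tells you which mechanism to investigate.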

3. In this lab scenario, Fault E was a missing OSPF network statement on R1 — causing R3 to have no return routes to Building A. Instead of fixing the OSPF network statement, an engineer proposes adding static routes on R3 for 192.168.10.0/24 and 192.168.20.0/24. Will this work, and is it the right approach?

Correct answer is A. This question tests the difference between a fix that works and a fix that is correct. Adding static routes on R3 would technically restore the return path and end-to-end ping would succeed — so it "works." However, it is treating the symptom rather than the root cause. The root cause is that R1 stopped advertising its stub networks via OSPF, breaking the dynamic routing design intent. Adding static routes on R3 creates a hybrid routing environment that is harder to maintain: any future subnet added to Building A requires a manual static route on R3 (and potentially on other routers too). The correct engineering approach is to fix the root cause (restore R1's OSPF network statements) so the network uses its designed routing protocol throughout. See Troubleshooting Layer 3 for more on routing fault diagnosis.

4. During the end-to-end verification phase, a ping from R1 (sourced from Gi0/1, 192.168.10.1) to 192.168.30.10 (REM1 in Building C) returns all dots. A ping from R3 to 192.168.10.1 also returns all dots. Traceroute from R1 shows two hops then stops. What does this pattern indicate and how is it distinguished from a routing loop?

Correct answer is C. Interpreting ping and traceroute output correctly is fundamental to OSI-layer troubleshooting. In this scenario, dots (timeouts, not U unreachables) from both directions suggest that packets may be reaching the destination but replies are not returning. Traceroute stopping after two clean hops (R1→R2→R3) without the destination (192.168.30.10) responding confirms: the forward path R1→R2→R3 works (R3's interface responds at hop 2), but the host at 192.168.30.10 either cannot be reached by R3, or its replies have no path back. A routing loop looks completely different in traceroute — instead of a clean two-hop stop, you see the same sequence of router IPs repeating for all 30 probe attempts. An ACL (Option B) would typically produce U in ping output — the denying router returns ICMP administratively-prohibited, which IOS ping displays as an unreachable — not dots. See Troubleshooting Layer 3 for more ping and traceroute interpretation.
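The ping result characters discussed here can be summarised mechanically — a sketch; the character meanings follow common Cisco IOS documentation, and note that traceroute (unlike ping) flags an ACL deny separately as !A:

```python
# Common Cisco IOS ping result characters (exact behaviour can vary
# slightly by IOS version; this mapping is a teaching sketch).
PING_CHARS = {
    "!": "reply received",
    ".": "timeout (often a missing return route or silent drop)",
    "U": "destination unreachable (no route, or ICMP admin-prohibited from an ACL)",
}

def interpret_ping(result: str) -> str:
    """Summarise a Cisco-style ping result string such as '..U..'."""
    seen = sorted(set(result))
    return "; ".join(f"{c} = {PING_CHARS.get(c, 'unknown')}" for c in seen)

print(interpret_ping("....."))
# → . = timeout (often a missing return route or silent drop)
```

Dots versus U is the key distinction in this question: dots mean the probe vanished somewhere, U means a router actively reported it could not forward.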

5. After all five faults are fixed and end-to-end connectivity is verified, an engineer notices that ENG1 (Building B, VLAN 30) still has a 169.254.x.x address. All other hosts have DHCP addresses. The DHCP server (SRV1) has active bindings for other VLAN 30 hosts. What should the engineer check next, in order?

Correct answer is D. When a single host fails to get a DHCP address while all others in the same subnet succeed, the problem is almost certainly host-specific rather than infrastructure-wide. The systematic approach: (1) Verify the host's NIC configuration — a user may have manually set a static IP or the DHCP client service may be disabled. (2) Check the DHCP conflict table — if ENG1 was at 169.254.x.x during the fault window, it may have triggered the DHCP server's proactive ping conflict detection. show ip dhcp conflict lists all affected addresses. (3) A release/renew forces a fresh DORA exchange — if the DHCP client state machine got stuck during the fault, a fresh Discover resolves it. (4) Debug on the relay router confirms whether ENG1's Discover is being forwarded at all. See Troubleshooting DHCP for the full DHCP client diagnosis guide.
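Step (1) — spotting a self-assigned APIPA address — is easy to automate; Python's standard ipaddress module already knows the 169.254.0.0/16 link-local range:

```python
import ipaddress

def is_apipa(addr: str) -> bool:
    """True if addr is a 169.254.0.0/16 link-local (APIPA) address,
    i.e. the host's DHCP client gave up and self-assigned."""
    return ipaddress.IPv4Address(addr).is_link_local

print(is_apipa("169.254.17.5"))    # True  — DHCP failed on this host
print(is_apipa("192.168.30.15"))   # False — a real DHCP lease
```

A check like this in an inventory script flags every host still stuck on APIPA after a fault window, so stragglers like ENG1 surface immediately.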

6. A junior engineer proposes fixing Fault C (SW2–SW3 trunk) by setting only SW2 to trunk mode while leaving SW3 at dynamic auto. He argues that "one side being trunk is enough to form the trunk." Is he correct, and what is the actual DTP behavior?

Correct answer is B. The DTP mode combination matrix is important for the CCNA exam. A port in trunk mode actively sends DTP advertisements. A port in dynamic auto responds to DTP — it will form a trunk if the peer is actively soliciting. So the combination of trunk + dynamic auto DOES form a trunk. The junior engineer's proposal would work technically. However, it is not the recommended configuration for three reasons: (1) DTP is a security concern; (2) the auto side is not deterministically configured; (3) operational consistency is lost. The professional standard is to explicitly set both sides to trunk + nonegotiate. Note that auto/auto NEVER forms a trunk because neither side sends active solicitations. See Trunk Port Configuration and Troubleshooting Layer 2 for more on DTP modes.
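The DTP combination matrix can be encoded as an order-independent lookup — a sketch covering the four common modes; the outcome labels are ours:

```python
# DTP outcome matrix as described above. frozenset keys make the
# lookup order-independent (trunk/auto == auto/trunk).
DTP_OUTCOME = {
    frozenset({"trunk"}):                             "trunk",
    frozenset({"trunk", "dynamic desirable"}):        "trunk",
    frozenset({"trunk", "dynamic auto"}):             "trunk",   # the junior's proposal works
    frozenset({"dynamic desirable"}):                 "trunk",
    frozenset({"dynamic desirable", "dynamic auto"}): "trunk",
    frozenset({"dynamic auto"}):                      "access",  # auto/auto NEVER trunks
    frozenset({"access"}):                            "access",
    frozenset({"access", "dynamic desirable"}):       "access",
    frozenset({"access", "dynamic auto"}):            "access",
    frozenset({"access", "trunk"}):                   "misconfiguration (avoid)",
}

def dtp_result(side_a: str, side_b: str) -> str:
    return DTP_OUTCOME[frozenset({side_a, side_b})]

print(dtp_result("trunk", "dynamic auto"))         # → trunk
print(dtp_result("dynamic auto", "dynamic auto"))  # → access
```

The matrix confirms the engineer's claim technically works, while the professional standard remains explicit trunk + nonegotiate on both ends.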

7. During troubleshooting, an engineer runs traceroute 192.168.30.1 source GigabitEthernet0/1 from R1 and sees: hop 1 = 10.0.12.2 (R2), hop 2 = 10.0.12.1 (R1), hop 3 = 10.0.12.2 (R2) ... repeating for 30 hops. Which fault does this indicate and what is the fix?

Correct answer is C. The alternating 10.0.12.2 / 10.0.12.1 / 10.0.12.2 pattern in traceroute output is the unambiguous routing loop signature. Traceroute sends successive probes with the TTL incremented by 1 — the probe with TTL=1 expires at R2, TTL=2 at R1, TTL=3 at R2 again, cycling until the maximum TTL of 30 is exhausted. This specific loop (between R1 and R2 for the 192.168.30.0/24 destination) would occur if R2 had a misconfigured static route pointing 192.168.30.0/24 back toward R1. Since static routes have AD=1, they override OSPF routes (AD=110). The diagnostic: show ip route 192.168.30.0 on R2 reveals whether a static or OSPF route is installed. OSPF area mismatch (Option A) would cause no route for 192.168.30.0 at all — not a loop. Layer 2 failure (Option D) would cause no hops at all, not repeating hops. See Troubleshooting Layer 3.
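The loop signature can be reproduced by following each router's next hop for the destination — a toy model where the NEXT_HOP table encodes the hypothetical broken state described above (R2's bad static route pointing back at R1):

```python
# Hypothetical per-router next hop for 192.168.30.0/24 in the broken state.
NEXT_HOP = {
    "R1": "R2",   # R1's correct route via 10.0.12.2
    "R2": "R1",   # R2's bad static route pointing back at R1
}

def trace(start: str, max_ttl: int = 30) -> list:
    """Return the hop sequence a traceroute would reveal."""
    hops, node = [], start
    for _ in range(max_ttl):
        node = NEXT_HOP.get(node)
        if node is None:          # no route: a real traceroute stops here
            break
        hops.append(node)
    return hops

path = trace("R1")
print(path[:4])    # → ['R2', 'R1', 'R2', 'R1']  — the alternating loop signature
print(len(path))   # → 30 — probes continue until max TTL is exhausted
```

Fixing R2's route (setting its next hop toward R3 instead of R1) turns the same trace into a short, non-repeating hop list.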

8. What is the correct interpretation of reliability 240/255 in show interfaces output, and at what value does it indicate a significant Layer 1 problem?

Correct answer is A. The reliability field in show interfaces output is a five-minute exponential weighted moving average (EWMA) of interface reliability, scaled from 0 to 255 where 255/255 represents a perfectly reliable link with no errors in the measurement window. The value decays over five minutes — a brief error storm will lower the reliability and it gradually recovers as error-free time accumulates. In the duplex mismatch scenario (Fault A), the reliability of 240/255 indicates the link is experiencing errors but is not catastrophically broken — consistent with occasional collisions and CRC errors from the duplex mismatch rather than a complete physical failure. For reference: txload and rxload are the transmit and receive load measurements on the same 0/255 scale — these measure bandwidth utilisation. They are separate fields from reliability in the show interfaces output. See Troubleshooting Layer 1 for more on interface counters.
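The decay-and-recover behaviour of an exponentially weighted average can be sketched as follows. Note the smoothing constant alpha is illustrative — this is not Cisco's exact five-minute algorithm, only the general EWMA shape:

```python
def ewma_reliability(samples, alpha=0.3):
    """Sketch of an exponentially weighted moving average on a 0-255
    scale, like the `reliability` counter. alpha is illustrative;
    Cisco's exact 5-minute decay constant is not reproduced here.
    Each sample is the fraction of error-free frames in one interval."""
    value = 255.0
    for frac_ok in samples:
        value = alpha * (frac_ok * 255) + (1 - alpha) * value
    return round(value)

# A brief burst of 6% errored frames drags reliability down,
# then error-free intervals let it recover back toward 255:
burst = ewma_reliability([0.94] * 5)
recovered = ewma_reliability([0.94] * 5 + [1.0] * 10)
print(burst, recovered)   # → 242 255
```

This is why a value like 240/255 points to an ongoing low-rate error source (such as a duplex mismatch) rather than a hard physical failure — a dead link would drive the average far lower, and a one-off glitch would already have decayed back to 255.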

9. After fixing all five faults, an engineer makes one final change: switchport trunk allowed vlan 1,30,40 on SW2's Gi0/1 (the SW2–SW3 trunk), intending to "clean up" the allowed VLAN list. What is the immediate effect and how is it fixed?

Correct answer is D. The switchport trunk allowed vlan [list] command without any keyword (add, remove, except, all) always replaces the entire allowed VLAN list with exactly the specified VLANs. This is one of the most common and dangerous mistakes on Cisco switches — an engineer "cleaning up" the trunk configuration can inadvertently lock the trunk to specific VLANs, causing all other VLANs to be silently dropped. The correct approach for "cleaning up" is either: (1) leave the default (all VLANs 1–4094) and let STP and VLAN database membership naturally limit which VLANs are active; or (2) use explicit blocked VLANs with switchport trunk allowed vlan except [blocked-list] if security policy requires restricting certain VLANs from crossing specific trunks. See Trunk Port Configuration for correct trunk VLAN management.
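The replace-versus-add semantics can be modelled on a set — a sketch; only the bare-list and `add` forms of the command are covered here:

```python
def allowed_vlan(current: set, command: str) -> set:
    """Model `switchport trunk allowed vlan ...` semantics (sketch;
    the `remove`, `except`, and `all` forms are omitted)."""
    parts = command.split()
    if parts[0] == "add":
        return current | {int(v) for v in parts[1].split(",")}
    return {int(v) for v in parts[0].split(",")}   # bare list REPLACES the set

default = set(range(1, 4095))          # factory default: all VLANs allowed
after_replace = allowed_vlan(default, "1,30,40")
after_add = allowed_vlan(default, "add 1,30,40")
print(sorted(after_replace))           # → [1, 30, 40] — everything else silently dropped
print(len(after_add))                  # → 4094 — `add` preserves the existing list
```

The asymmetry is the trap: the bare form discards state, while `add` merges into it — which is why the `add` keyword matters so much on production trunks.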

10. A network engineer inherits a broken network with no documentation. Users report "nothing works." What is the correct systematic first action, and what two commands provide the fastest initial triage of the entire network's health?

Correct answer is C. When inheriting an unknown broken network, the temptation is to immediately start fixing things based on assumptions — this almost always makes things worse. The correct first action is methodical triage: understand the current state before changing anything. The two commands in Option C are specifically chosen for maximum information density in minimum time. show ip interface brief on each device provides a complete interface status summary in 10–20 lines — immediately revealing which physical or logical interfaces are down. Any interface showing down/down or administratively down is a confirmed Layer 1 or configuration fault. Any interface showing up/down indicates a Layer 2 keepalive or encapsulation issue. show ip route on the core router reveals the complete routing state — if large portions of the network are missing from the routing table, it points to OSPF/EIGRP failures or missing static routes. debug all (Option B) on a production device would generate catastrophic output volume and likely crash the router's CPU, causing additional outages. See the full troubleshooting series: Layer 1, Layer 2, Layer 3.
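The first triage command lends itself to scripting — a sketch that flags every interface not in the up/up state, assuming the typical `show ip interface brief` column layout:

```python
def flag_down_interfaces(show_ip_int_brief: str) -> list:
    """Triage sketch: pull out interfaces that are not up/up from
    `show ip interface brief` text (typical column layout assumed)."""
    problems = []
    for line in show_ip_int_brief.splitlines()[1:]:   # skip the header row
        cols = line.split()
        if len(cols) < 6:
            continue
        # Status may be multi-word ("administratively down"); Protocol is last.
        name, status, protocol = cols[0], " ".join(cols[4:-1]), cols[-1]
        if not (status == "up" and protocol == "up"):
            problems.append((name, status, protocol))
    return problems

OUTPUT = """\
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet0/0     10.0.12.1       YES NVRAM  up                    up
GigabitEthernet0/1     192.168.10.1    YES NVRAM  administratively down down
GigabitEthernet0/2     192.168.20.1    YES NVRAM  up                    down
"""
print(flag_down_interfaces(OUTPUT))
# → [('GigabitEthernet0/1', 'administratively down', 'down'), ('GigabitEthernet0/2', 'up', 'down')]
```

Run across every device, this yields the down/down, administratively down, and up/down lists in seconds — exactly the Layer 1 and Layer 2 starting points the answer describes.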