Troubleshooting Layer 1 — Physical Connectivity Issues

Every network problem ultimately touches the physical layer — if electrons or photons are not moving correctly, no layer above can work. Layer 1 troubleshooting covers the cables, connectors, transceivers, and interface hardware that form the physical foundation of a network. Unlike higher-layer problems, where logic errors are the culprit, Layer 1 faults are almost always caused by physical conditions or low-level port settings: a damaged cable, a bent fibre, an incorrect speed or duplex setting, a failing transceiver, or simply a connector not fully seated in its port.

The good news is that IOS exposes Layer 1 health through a rich set of counters on every interface — CRC errors, runts, giants, input errors, output drops, and the precise line-status strings that identify the exact failure state. Combined with LED indicators on the physical hardware and external cable-testing tools, most Layer 1 faults can be isolated and resolved without advanced equipment.

This guide uses show interfaces as the primary diagnostic command — understanding every counter in that output is the core skill. For the brief interface status summary used during initial triage see show ip interface brief. For structured cabling standards and connector types referenced in this guide see Structured Cabling and RJ45 Pinouts. For Layer 2 and above troubleshooting that begins once Layer 1 is confirmed healthy, see CAM Table and OSI Model.

1. Layer 1 — Core Concepts

The Two-Line Interface Status

Every IOS interface reports its condition in two lines at the top of show interfaces output — and in the Status and Protocol columns of show ip interface brief. These two values together pinpoint the layer at which the fault sits:

  GigabitEthernet0/1 is [LINE STATUS], line protocol is [PROTOCOL STATUS]

  LINE STATUS     — Physical layer (Layer 1): Is the cable connected and
                    the interface detecting an electrical/optical signal?

  PROTOCOL STATUS — Data-link layer (Layer 2): Is the interface exchanging
                    keepalives with its peer after Layer 1 is up?
  
  Line / Protocol Status         Primary Layer     Meaning
  -----------------------------  ----------------  ---------------------------------
  up / up                        —                 Fully operational — Layer 1 and
                                                   Layer 2 healthy
  up / down                      Layer 2           Physical signal detected but
                                                   keepalives failing (encapsulation
                                                   mismatch, no keepalives)
  down / down                    Layer 1           No physical signal — cable
                                                   unplugged, device off, wrong cable
                                                   type, failed transceiver
  administratively down / down   Config            Interface shut with the shutdown
                                                   command — intentional, not a fault
  up / up (looped)               Layer 1           Loop detected on the link — cable
                                                   connects back to the same device
  down / down (err-disabled)     Layer 2 / Config  Interface disabled by an IOS
                                                   security feature (BPDU Guard, port
                                                   security, etc.)
Triage shortcut: Always check show ip interface brief first across all interfaces to instantly spot any that are not "up/up." Then drill into individual interfaces with show interfaces [name] to read the error counters. Never skip the two-line status check — it determines which layer to troubleshoot before touching any cable.
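
The triage logic can be sketched as a simple lookup. This is an illustrative Python sketch of the status table, not an IOS feature:

```python
# Map the two-line interface status to the layer to troubleshoot first.
# Illustrative only: the strings mirror the triage table, not an IOS API.
def triage(line_status: str, protocol_status: str) -> str:
    table = {
        ("up", "up"): "healthy - no fault",
        ("up", "down"): "Layer 2 - check encapsulation and keepalives",
        ("down", "down"): "Layer 1 - check cable, transceiver, far end",
        ("administratively down", "down"): "Config - issue 'no shutdown'",
        ("up", "up (looped)"): "Layer 1 - remove the looping cable",
        ("down", "down (err-disabled)"): "Layer 2/Config - find errdisable cause",
    }
    return table.get((line_status, protocol_status), "unknown - read show interfaces")

print(triage("down", "down"))   # Layer 1 - check cable, transceiver, far end
```

The point of encoding it this way: the two status strings alone select the layer to investigate, before any cable is touched.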

The Layer 1 Fault Hierarchy

Layer 1 problems fall into three categories. Working through them in order avoids replacing hardware unnecessarily:

  1. CONFIGURATION FAULT  — Interface is shut, speed/duplex hardcoded incorrectly,
                            wrong media type selected (copper vs fibre SFP)
                            Fix: IOS config change — no cable work needed

  2. PHYSICAL CONNECTION  — Cable unplugged, wrong port, cable too long,
                            bent fibre, contaminated connector, wrong
                            cable category for the speed required
                            Fix: Reseat, replace, or re-run cable

  3. HARDWARE FAULT       — Failed transceiver, faulty switch/router port,
                            NIC failure on end device
                            Fix: Replace transceiver or hardware
  

Key Layer 1 Error Counter Definitions

These counters appear in show interfaces output and are the primary diagnostic data for Layer 1 faults. Understanding what each counter measures determines the correct remediation action:

  CRC               — Frames received whose cyclic redundancy check value does
                      not match the computed value: the frame is corrupted.
                      Cause: duplex mismatch, damaged cable, electrical
                      interference (EMI), marginal cable quality, faulty NIC
                      or transceiver.

  Input errors      — Total of all receive-side errors: the sum of runts,
                      giants, CRC, no-buffer, and frame errors.
                      Cause: umbrella counter — drill into the sub-counters to
                      find the specific fault.

  Runts             — Frames smaller than 64 bytes (including the FCS field),
                      below the minimum valid Ethernet frame size.
                      Cause: duplex mismatch (half-duplex collisions truncate
                      frames), faulty NIC sending undersized frames.

  Giants            — Frames larger than the maximum allowed size (typically
                      1518 bytes for standard Ethernet).
                      Cause: misconfigured MTU, jumbo frames arriving on a
                      non-jumbo interface, or a faulty NIC.

  Frame             — Frames with an illegal size or invalid FCS combined with
                      a non-integer byte count.
                      Cause: usually duplex mismatch — late collisions corrupt
                      frame boundaries.

  Collisions        — Ethernet collisions; should be zero on full-duplex
                      links.
                      Cause: duplex mismatch — the full-duplex side transmits
                      while the half-duplex side is also transmitting.

  Late collisions   — Collisions detected after the first 64 bytes of a frame
                      have been transmitted; always a fault condition.
                      Cause: duplex mismatch (most common), cable too long for
                      the propagation delay budget, faulty NIC.

  Output drops      — Frames dropped because the output queue was full; not a
                      Layer 1 error, but a sign of bandwidth saturation.
                      Cause: link speed too low for the traffic volume, or QoS
                      misconfiguration causing queue starvation.

  Input queue drops — Frames dropped because the input queue overflowed before
                      the CPU could process them.
                      Cause: CPU overload, or a traffic burst exceeding the
                      interface buffer.

  Ignored           — Frames dropped because the hardware input buffer was
                      full, before the software queue.
                      Cause: sustained high-rate input traffic overwhelming
                      the hardware buffers.
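
The CRC counter's mechanism can be demonstrated directly: Ethernet's FCS is a CRC-32, the same polynomial `zlib.crc32` implements, and a single corrupted bit is enough to make the check fail. A minimal sketch:

```python
import zlib

# Ethernet's FCS is a CRC-32 over the frame contents (same polynomial as
# zlib.crc32). The sender appends it; the receiver recomputes and compares.
frame = bytes(range(64))        # an illustrative minimal 64-byte frame
fcs = zlib.crc32(frame)         # the value the sender would append

# Flip a single bit in transit (EMI, damaged conductor, collision damage):
corrupted = bytearray(frame)
corrupted[10] ^= 0x04

# The receiver's recomputed CRC no longer matches: the frame is dropped
# and the interface's CRC counter increments.
assert zlib.crc32(bytes(corrupted)) != fcs
```

This is why CRC errors always mean bit-level corruption somewhere between the sender's FCS calculation and the receiver's check — the counter cannot be caused by routing or software logic.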

2. Lab Topology & Scenario

This lab presents five real-world Layer 1 fault scenarios on a common access-layer topology. Each scenario starts with a symptom report and works through the diagnostic process to root cause and fix:

    [PC1]──────Fa0/1──[NetsTuts_SW1]──Gi0/1──[NetsTuts_R1]
    [PC2]──────Fa0/2──[NetsTuts_SW1]
    [PC3]──────Fa0/3──[NetsTuts_SW1]
    [Server1]──Gi0/1──[NetsTuts_SW1]──Gi0/2──[NetsTuts_SW2]──Gi0/1──[NetsTuts_R2]
    [IP Phone]─Fa0/4──[NetsTuts_SW1]

  Fault Scenarios:
    Scenario 1 — PC1 cannot ping anything (Fa0/1 down/down)
    Scenario 2 — PC2 ping works but shows 30% packet loss (duplex mismatch)
    Scenario 3 — Server1 uplink is slow — 100 Mbps instead of 1 Gbps (speed mismatch)
    Scenario 4 — SW1–SW2 trunk has CRC errors climbing rapidly (bad cable)
    Scenario 5 — Fa0/3 shows err-disabled (port security violation)
  
  Scenario  Interface           Symptom                            Root Cause
  --------  ------------------  ---------------------------------  ------------------------------------
  1         SW1 Fa0/1           No connectivity from PC1           Cable unplugged / interface down
  2         SW1 Fa0/2           Intermittent packet loss,          Duplex mismatch — switch full,
                                slow transfers                     PC half
  3         SW1 Gi0/1 (Server)  Server throughput capped           Speed mismatch — switch auto,
                                at 100 Mbps                        server hardcoded 100
  4         SW1 Gi0/2 (trunk)   Intermittent drops, CRC climbing   Damaged patch cable — physical fault
  5         SW1 Fa0/3           PC3 has no network access          err-disabled — port security MAC
                                                                   violation

3. Scenario 1 — Interface down/down (No Cable Signal)

The helpdesk reports PC1 has no network connectivity. Before touching any configuration, observe the physical hardware and then check IOS status:

Step 1 — Check LED Indicators First

  LED State                  Colour / Pattern  Meaning
  -------------------------  ----------------  -------------------------------------
  Port LED — off             No light          No link detected — cable not
                                               connected, device powered off, or
                                               port disabled
  Port LED — solid green     Green             Link up, no activity (or activity LED
                                               mode not selected)
  Port LED — blinking green  Green, flashing   Link up and traffic flowing
  Port LED — amber           Amber / orange    Port blocked by STP, or initialising;
                                               on some platforms, a link fault
  Port LED — blinking amber  Amber, flashing   Port error — fault detected (varies
                                               by Cisco platform)
  System LED — green         Green             System operational
  System LED — amber         Amber             POST failure or hardware fault

In Scenario 1 the port LED for Fa0/1 is off — confirming no link signal before any IOS command is run.

Step 2 — Check IOS Status

NetsTuts_SW1#show ip interface brief
Interface              IP-Address      OK? Method Status                Protocol
FastEthernet0/1        unassigned      YES unset  down                  down
FastEthernet0/2        unassigned      YES unset  up                    up
FastEthernet0/3        unassigned      YES unset  up                    up
GigabitEthernet0/1     unassigned      YES unset  up                    up
GigabitEthernet0/2     unassigned      YES unset  up                    up
  

Step 3 — Drill Into the Interface

NetsTuts_SW1#show interfaces FastEthernet0/1
FastEthernet0/1 is down, line protocol is down (notconnect)
  Hardware is Fast Ethernet, address is 0c1a.3b2f.0001 (bia 0c1a.3b2f.0001)
  MTU 1500 bytes, BW 10000 Kbit/sec, DLY 1000 usec,
     reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Auto-duplex, Auto-speed, media type is 10/100BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input never, output never, output hang never
  Last clearing of "show interfaces" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     0 packets input, 0 bytes, 0 no buffer
     Received 0 broadcasts (0 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
     0 packets output, 0 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier
  
Key observations: down, line protocol is down (notconnect) — the IOS keyword notconnect on a switch port means no cable is detected. The counters are all zero and "Last input never" confirms no frame has ever arrived. This is the cleanest down/down case: a missing or disconnected cable, not a configuration fault. The interface is not administratively shut (which would say administratively down) — it is genuinely not connected.

Step 4 — Physical Investigation and Fix

! ── Verify the interface is not accidentally shut ─────────────────
NetsTuts_SW1#show running-config interface FastEthernet0/1
Building configuration...

Current configuration : 48 bytes
!
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
!

! ── No 'shutdown' command present — pure physical fault ──────────
! ── Actions: reseat cable at both ends, try a known-good cable,
!             verify PC NIC is on and operational
! ── After cable fix, port comes up automatically ──────────────────

NetsTuts_SW1#show interfaces FastEthernet0/1
FastEthernet0/1 is up, line protocol is up (connected)
  ...
  Last input 00:00:02, output 00:00:01
  
Checking show running-config interface quickly rules out an accidental shutdown command before physically touching the cable. If shutdown were present, the fix would be no shutdown in interface config mode — no cable work needed. Once the cable is reseated and the port comes up, "Last input" changes from "never" to a recent timestamp, confirming traffic is now flowing.

4. Scenario 2 — Duplex Mismatch

PC2 can ping but experiences 20–30% packet loss and file transfers are slow. The port LED is green (link is up) so the cable is not the issue. Duplex mismatch is the most common cause of this symptom pattern — one side is full-duplex, the other is half-duplex.

Understanding Duplex Mismatch

  Full-duplex side (Switch):   Transmits freely — does not listen for
                               collisions. Sends at any time.

  Half-duplex side (PC NIC):   Uses CSMA/CD — listens before transmitting.
                               While the switch is sending a frame, the PC
                               perceives the line as busy.

  Result: When the switch transmits AND the PC transmits simultaneously,
  the half-duplex PC detects a collision and backs off. The full-duplex
  switch never backs off — it just keeps sending. The frames that overlap
  are corrupted — the switch sees them as CRC errors, runts, or late
  collisions. Throughput drops to 10–40% of link capacity as retransmits
  and backoff timers consume bandwidth.
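
The 64-byte boundary that makes a collision "late" comes straight from CSMA/CD timing. A quick worked calculation using standard Ethernet numbers, shown in Python for illustration:

```python
# CSMA/CD only detects a collision reliably while the sender is still
# transmitting. The minimum frame (64 bytes = 512 bits) therefore defines
# the detection window, known as the slot time.
MIN_FRAME_BITS = 64 * 8          # 512 bits

for rate_mbps in (10, 100):
    slot_time_us = MIN_FRAME_BITS / rate_mbps   # bits / (Mbit/s) = microseconds
    print(f"{rate_mbps} Mb/s: collision window = {slot_time_us} us")
# 10 Mb/s -> 51.2 us, 100 Mb/s -> 5.12 us. A collision seen after this
# window ("late") means the peer was not obeying CSMA/CD (duplex mismatch)
# or the cable run exceeds the propagation budget.
```

This is why late collisions are such a strong signal: on a spec-compliant half-duplex segment, every genuine collision must land inside the slot time.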
  

Diagnosing with show interfaces

NetsTuts_SW1#show interfaces FastEthernet0/2
FastEthernet0/2 is up, line protocol is up (connected)
  Hardware is Fast Ethernet, address is 0c1a.3b2f.0002
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
     reliability 253/255, txload 4/255, rxload 2/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Full-duplex, 100Mb/s, media type is 10/100BaseTX
  ...
  5 minute input rate 312000 bits/sec, 247 packets/sec
  5 minute output rate 198000 bits/sec, 156 packets/sec
     184720 packets input, 23644160 bytes, 0 no buffer
     Received 182 broadcasts (0 multicasts)
     847 runts, 0 giants, 0 throttles
     2341 input errors, 2341 CRC, 0 frame, 0 overrun, 0 ignored
     148216 packets output, 18992448 bytes, 0 underruns
     0 output errors, 1823 collisions, 14 interface resets
     0 unknown protocol drops
     0 babbles, 312 late collision, 0 deferred
     0 lost carrier, 0 no carrier
  
The duplex mismatch signature is unmistakable here: CRC errors (2341) — corrupted frames from collisions; runts (847) — truncated frames from collisions cutting transmissions short; late collisions (312) — collisions occurring after byte 64, which is physically impossible on a correctly operating full-duplex link and is the single strongest indicator of duplex mismatch; collisions (1823) — should be zero on a full-duplex port. The interface shows Full-duplex (the switch side) while PC2's NIC negotiated to half-duplex — the mismatch is confirmed.

Root Cause and Fix

! ── Check what the switch negotiated ─────────────────────────────
NetsTuts_SW1#show interfaces FastEthernet0/2 status
Port      Name       Status       Vlan  Duplex  Speed Type
Fa0/2                connected    10    full    100   10/100BaseTX

! ── Check CDP to see what the PC reported ────────────────────────
NetsTuts_SW1#show cdp neighbors FastEthernet0/2 detail
...
  Duplex: half

! ── Root cause: PC NIC hardcoded to half-duplex,
!               switch auto-negotiated to full-duplex.
!               Fix option 1: Set both ends to auto (preferred) ───

NetsTuts_SW1(config)#interface FastEthernet0/2
NetsTuts_SW1(config-if)#duplex auto
NetsTuts_SW1(config-if)#speed auto
NetsTuts_SW1(config-if)#exit

! ── Fix option 2: Hardcode both ends to full/100 ─────────────────
! ── Only use if auto-negotiation is unreliable for this NIC ──────
NetsTuts_SW1(config)#interface FastEthernet0/2
NetsTuts_SW1(config-if)#duplex full
NetsTuts_SW1(config-if)#speed 100
NetsTuts_SW1(config-if)#exit
  
The golden rule: both ends must match. The best practice is duplex auto and speed auto on both the switch port and the NIC — 802.3 auto-negotiation will then select the highest common mode. Only hardcode speed and duplex when connecting to a device that cannot auto-negotiate reliably (some older servers or industrial devices). When hardcoding, always set both ends identically — hardcoding one side and leaving the other on auto is the primary cause of duplex mismatch.
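
How auto-negotiation resolves to the "highest common mode" can be sketched as a priority search. This is a simplified illustration (the priority list is abridged; real 802.3 negotiation exchanges capability pages):

```python
# Simplified 802.3 auto-negotiation: both sides advertise capabilities and
# the highest mode common to both wins. Priority order abridged.
PRIORITY = ["1000full", "100full", "100half", "10full", "10half"]

def negotiate(local: set, peer: set):
    for mode in PRIORITY:
        if mode in local and mode in peer:
            return mode
    return None   # no common advertised mode

# Both ends on auto: the capability sets overlap at the top.
print(negotiate({"1000full", "100full", "100half"},
                {"1000full", "100full", "100half"}))   # 1000full
```

The failure case is the one this scenario shows: a side hardcoded to full-duplex stops advertising, and the auto side can still sense the link speed electrically but cannot learn the duplex, so it falls back to half-duplex — the classic mismatch.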

Verifying the Fix

! ── Clear counters to get a fresh baseline after the fix ─────────
NetsTuts_SW1#clear counters FastEthernet0/2
Clear "show interface" counters on this interface [confirm]

NetsTuts_SW1#show interfaces FastEthernet0/2
FastEthernet0/2 is up, line protocol is up (connected)
  Full-duplex, 100Mb/s, media type is 10/100BaseTX
  ...
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
  
After clearing counters and waiting 2–3 minutes of normal traffic, all error counters remain at zero — the duplex mismatch is resolved. Always clear counters before a post-fix observation period so that pre-fix accumulated errors do not inflate the baseline. If errors resume after clearing, the fix was incomplete or a different fault exists on the same interface.

5. Scenario 3 — Speed Mismatch

Server1 connects to SW1 Gi0/1 but iperf testing shows throughput capped at ~94 Mbps — well below the expected 940 Mbps for a 1 Gbps link. No error counters are climbing. The server admin hardcoded the NIC to 100 Mbps/full-duplex for "stability" without matching the switch port.

Diagnosing the Speed

NetsTuts_SW1#show interfaces GigabitEthernet0/1 status
Port      Name       Status       Vlan  Duplex  Speed Type
Gi0/1                connected    20    full    100   10/100/1000BaseTX
  
The switch port is Gigabit-capable (10/100/1000BaseTX) but is operating at 100 Mbps — it auto-negotiated down to match the server NIC's hardcoded 100 Mbps. Duplex is full on both sides so there are no CRC errors — the link works correctly at 100 Mbps, just not at 1 Gbps. This is not a fault per se but a misconfiguration limiting performance.

NetsTuts_SW1#show interfaces GigabitEthernet0/1
GigabitEthernet0/1 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is 0c1a.3b2f.0101
  MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
  ...
  Full-duplex, 100Mb/s, media type is 10/100/1000BaseTX
  
BW 100000 Kbit/sec (100 Mbps) in the interface output confirms the bandwidth negotiated. This bandwidth value is also what OSPF and EIGRP use to calculate interface cost — a 100 Mbps bandwidth on a physically Gigabit link would cause incorrect metric calculation for routing protocols. Speed mismatches therefore impact both throughput and routing.
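
The routing impact follows from Cisco's OSPF cost formula, cost = reference bandwidth ÷ interface bandwidth (floored at 1). A sketch assuming the reference bandwidth has been raised to 10 Gbps with `auto-cost reference-bandwidth 10000`; at the IOS default reference of 100 Mbps, both links would cost 1 and the mismatch would be invisible to OSPF:

```python
# OSPF interface cost on Cisco: reference_bw / interface_bw, minimum 1.
# Assumes 'auto-cost reference-bandwidth 10000' (10 Gbps). The IOS default
# reference of 100 Mbps would cost both links as 1.
REF_BW_KBPS = 10_000_000

def ospf_cost(bw_kbps: int) -> int:
    return max(1, REF_BW_KBPS // bw_kbps)

print(ospf_cost(100_000))    # 100 - cost while the link mis-negotiates to 100 Mbps
print(ospf_cost(1_000_000))  # 10  - cost once it renegotiates to 1 Gbps
```

A tenfold cost difference on a single access link can be enough to shift path selection, which is why the BW field matters beyond raw throughput.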

Fix — Set Both Ends to Auto or Match Explicitly

! ── On the switch — set to auto (preferred) ──────────────────────
NetsTuts_SW1(config)#interface GigabitEthernet0/1
NetsTuts_SW1(config-if)#speed auto
NetsTuts_SW1(config-if)#duplex auto
NetsTuts_SW1(config-if)#exit

! ── On Server1 — change NIC from hardcoded 100/full
!                to auto-negotiation in the OS NIC settings

! ── After both ends are on auto, verify re-negotiation ───────────
NetsTuts_SW1#show interfaces GigabitEthernet0/1 status
Port      Name       Status       Vlan  Duplex  Speed Type
Gi0/1                connected    20    a-full  a-1000 10/100/1000BaseTX
  
The a- prefix in the Duplex and Speed columns means auto-negotiated: a-full and a-1000 confirm both sides negotiated to 1 Gbps full-duplex. Without the a- prefix (e.g., just full and 1000), the values are hardcoded. After fixing, re-run the iperf test to confirm throughput has increased to the expected ~940 Mbps.

6. Scenario 4 — Physical Cable Fault (Rising CRC Errors)

Users report intermittent drops crossing from SW1 to SW2 on the inter-switch trunk. Ping succeeds most of the time but occasionally drops a packet. No duplex mismatch — both sides are auto-negotiated to 1 Gbps full-duplex. CRC errors are climbing on Gi0/2.

Diagnosing with show interfaces — CRC Pattern

NetsTuts_SW1#show interfaces GigabitEthernet0/2
GigabitEthernet0/2 is up, line protocol is up (connected)
  Hardware is Gigabit Ethernet, address is 0c1a.3b2f.0201
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
     reliability 242/255, txload 3/255, rxload 3/255
  Encapsulation ARPA, loopback not set
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
  ...
  5 minute input rate 8420000 bits/sec, 2847 packets/sec
  5 minute output rate 7980000 bits/sec, 2701 packets/sec
     14820481 packets input, 18971215882 bytes
     Received 1204 broadcasts (3721 multicasts)
     0 runts, 0 giants, 0 throttles
     18420 input errors, 18420 CRC, 0 frame, 0 overrun, 0 ignored
     14211306 packets output, 17893215882 bytes, 0 underruns
     0 output errors, 0 collisions, 0 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier
  
The pattern here is CRC errors climbing without late collisions, runts, or frame errors — and reliability 242/255 (255/255 is perfect, 242/255 ≈ 94.9% reliability). This combination points to a physical cable fault — not duplex mismatch (which would show late collisions) and not a software issue (which would not produce CRC errors). CRC errors without late collisions mean the bit-level signal is being corrupted in transit — a damaged cable, a poor connector crimp, too tight a bend radius on the cable, or EMI interference from a nearby power source.

CRC Error Pattern Diagnosis Reference

  CRC + late collisions + runts   — Duplex mismatch. Late collisions are
                                    physically impossible without a duplex
                                    problem.
  CRC only (no late collisions)   — Physical cable fault or EMI. A clean error
                                    pattern: only bit corruption, no collision
                                    indicators.
  CRC on a fibre interface        — Dirty or contaminated connector, bend
                                    radius violation, or wrong-wavelength SFP.
                                    Fibre cannot have a duplex mismatch, so
                                    CRC always means physical layer.
  Giants only                     — MTU mismatch or jumbo frames. No CRC or
                                    collisions: oversized frames from a
                                    misconfigured sender.
  Input errors rising,            — Fault on the receive path: the cable from
  output clean                      the far end, or the far-end transmitter.
                                    Asymmetric errors point to one direction
                                    of the cable or transceiver.
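
The decision table can be expressed as a first-pass classifier. This is an illustrative sketch only; real triage should look at counter rates over a known time window, not a single snapshot:

```python
# First-pass classification of a copper interface's error-counter snapshot,
# following the pattern table above. Illustrative only.
def classify(crc: int, late_collisions: int, giants: int) -> str:
    if late_collisions:
        return "duplex mismatch"               # the strongest single indicator
    if crc:
        return "physical cable fault or EMI"   # bit corruption, no collisions
    if giants:
        return "MTU mismatch or jumbo frames"
    return "no Layer 1 error signature"

print(classify(crc=18420, late_collisions=0, giants=0))    # the Scenario 4 counters
print(classify(crc=2341, late_collisions=312, giants=0))   # the Scenario 2 counters
```

Checking late collisions first mirrors the manual process: their presence or absence is what splits duplex mismatch from a pure cable fault.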

Cable Testing Tools

  Cable continuity tester
    Tests : Wire map — verifies all 8 conductors connect pin-to-pin and
            checks for opens, shorts, reversed pairs, and split pairs.
    Use   : First test for any copper cable fault — cheap and fast, but it
            does not test signal quality.

  TDR (Time Domain Reflectometer)
    Tests : Sends a signal and measures the reflected echo — calculates the
            distance to a fault (open, short, impedance mismatch) in metres.
    Use   : Locating the exact position of a break or crimp fault in a long
            cable run — especially useful in conduit, where replacement is
            expensive.

  Cable certifier (e.g., Fluke DSX)
    Tests : Full electrical performance — attenuation, NEXT (Near-End
            CrossTalk), return loss, propagation delay skew. Certifies
            against Cat5e/6/6A standards.
    Use   : New cable installation verification, marginal performance
            investigation, warranty compliance.

  Optical power meter + light source
    Tests : Measures optical signal loss in dB across a fibre run — compares
            it against the link budget.
    Use   : Fibre troubleshooting — dirty connectors, excessive bend radius,
            degraded splice points.

  OTDR (Optical TDR)
    Tests : The optical equivalent of TDR — maps the entire fibre run,
            showing every connector, splice, and fault with its distance.
    Use   : Long fibre runs; locating the specific failure point in a
            multi-segment fibre path.

  Loopback plug
    Tests : Connects the TX pin to the RX pin — the router or switch
            interface loops its own signal back to test the port transmitter
            and receiver.
    Use   : Isolating a port fault — if the interface comes up with a
            loopback plug but not with the cable, the cable is the fault.

IOS TDR Test — Cisco Switch Built-In

! ── IOS has a built-in TDR on many Catalyst switches ─────────────
! ── WARNING: takes the port offline briefly — do not run on live trunks
NetsTuts_SW1#test cable-diagnostics tdr interface GigabitEthernet0/2

! ── Wait 5–10 seconds then check results ─────────────────────────
NetsTuts_SW1#show cable-diagnostics tdr interface GigabitEthernet0/2
TDR test last run on: March 07 15:44:22

Interface Speed Local pair Pair length        Remote pair Pair status
--------- ----- ---------- ------------------ ----------- --------------------
Gi0/2     1000M Pair A     25  +/- 1  meters  Pair A      Normal
           1000M Pair B     25  +/- 1  meters  Pair B      Normal
           1000M Pair C     24  +/- 1  meters  Pair C      Normal
           1000M Pair D     4   +/- 1  meters  Pair D      Short
  
The TDR result reveals Pair D is shorted at 4 metres — a physical fault in the cable 4 metres from the switch port. Pairs A, B, and C are normal at ~25 metres. This is precisely the kind of fault that causes intermittent CRC errors — at Gigabit speeds, all four pairs are used for transmission; a shorted pair corrupts the signal. The cable must be replaced. TDR test statuses include: Normal, Open (broken conductor), Short (conductors touching), Impedance mismatch, and No cable. The distance reading points the technician exactly to where in the cable run the fault lies.
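
The distance figure in TDR output comes from simple echo timing: distance = NVP × c × (round-trip time ÷ 2). A sketch assuming an NVP (nominal velocity of propagation) of 0.67, a typical value for Cat5e/6 copper; real testers use the cable's rated NVP:

```python
# TDR distance-to-fault: time the reflected pulse, halve the round trip,
# multiply by the signal velocity in the cable. NVP = 0.67 is an assumed,
# typical figure for Cat5e/6 twisted pair.
C_M_PER_S = 299_792_458
NVP = 0.67

def fault_distance_m(round_trip_ns: float) -> float:
    one_way_s = round_trip_ns * 1e-9 / 2
    return NVP * C_M_PER_S * one_way_s

# A reflection arriving roughly 40 ns after the pulse puts the fault about
# 4 metres out — the same order as the shorted Pair D above.
print(round(fault_distance_m(40), 1))   # 4.0
```

The +/- 1 metre tolerance in the IOS output reflects uncertainty in both the timing resolution and the assumed NVP.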

Fix — Replace the Cable

! ── After cable replacement, clear counters and monitor ──────────
NetsTuts_SW1#clear counters GigabitEthernet0/2
Clear "show interface" counters on this interface [confirm]

! ── Monitor for 10 minutes under normal traffic ──────────────────
NetsTuts_SW1#show interfaces GigabitEthernet0/2 | include CRC|error|reliability
     reliability 255/255, txload 3/255, rxload 3/255
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
  
After replacing the cable, reliability 255/255 (100%) and zero CRC errors confirm the physical fault is resolved. The | include pipe filter extracts only the relevant lines from the full show interfaces output — useful for quick repeated checks during a monitoring period.
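
The `| include` filter is a regular-expression match applied to each output line, and the same filtering is easy to reproduce off-box when working with saved `show` output. A sketch using an excerpt of the lines above:

```python
import re

# Reproduce IOS '| include CRC|error|reliability' against captured output.
# The sample text is an excerpt of the show interfaces output above.
output = """\
GigabitEthernet0/2 is up, line protocol is up (connected)
  reliability 255/255, txload 3/255, rxload 3/255
  0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
  0 packets output, 0 bytes, 0 underruns"""

pattern = re.compile(r"CRC|error|reliability")
matched = [line for line in output.splitlines() if pattern.search(line)]
for line in matched:
    print(line)
# keeps only the reliability line and the error-counter line
```

This is handy when diffing before/after captures during a monitoring period: filter both to the same counter lines and compare.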

7. Scenario 5 — err-disabled Port

PC3 has no network access. The port LED on Fa0/3 is amber. show ip interface brief shows the interface status as err-disabled. This is not a Layer 1 cable fault — IOS has disabled the port in response to a security or protocol violation. The physical layer is healthy; the port is administratively blocked by the switch itself.

Identifying the err-disabled Cause

NetsTuts_SW1#show ip interface brief | include Fa0/3
FastEthernet0/3        unassigned      YES unset  err-disabled          down

NetsTuts_SW1#show interfaces FastEthernet0/3
FastEthernet0/3 is down, line protocol is down (err-disabled)
  ...

! ── Find WHY it was err-disabled ──────────────────────────────────
NetsTuts_SW1#show errdisable recovery
ErrDisable Reason            Timer Status    Timer Interval
-----------------            -------------- --------------
arp-inspection               Disabled           300
bpduguard                    Disabled           300
channel-misconfig (STP)      Disabled           300
dhcp-rate-limit              Disabled           300
dtp-flap                     Disabled           300
link-flap                    Disabled           300
loopback                     Disabled           300
pagp-flap                    Disabled           300
port-security                Disabled           300
psecure-violation            Disabled           300
storm-control                Disabled           300
udld                         Disabled           300

NetsTuts_SW1#show port-security interface FastEthernet0/3
Port Security              : Enabled
Port Status                : Secure-shutdown
Violation Mode             : Shutdown
Aging Time                 : 0 mins
Aging Type                 : Absolute
SecureStatic Address Aging : Disabled
Maximum MAC Addresses      : 1
Total MAC Addresses        : 1
Configured MAC Addresses   : 0
Sticky MAC Addresses       : 0
Last Source Address:Vlan   : 00a1.b2c3.d4e5:10
Security Violation Count   : 1
  
The port is in Secure-shutdown — port security triggered because a new MAC address (00a1.b2c3.d4e5) arrived on a port configured for maximum 1 MAC address. This happens when a user connects a switch or hub to the port, or replaces their PC with a different NIC without clearing the sticky MAC. The violation count of 1 means it happened once — a new device plugged in. This is a security event, not a physical fault. Full context on port security violation modes is in Violation Modes.
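
The violation logic itself is a simple MAC-count check. A hedged sketch of the behaviour described above, not Cisco's implementation:

```python
# Port-security sketch: maximum 1 learned MAC, violation mode 'shutdown'.
# An illustrative model of the behaviour, not Cisco's implementation.
def ingress(src_mac: str, learned: set, max_macs: int = 1) -> str:
    if src_mac in learned:
        return "forward"
    if len(learned) < max_macs:
        learned.add(src_mac)
        return "forward (MAC learned)"
    return "violation: err-disable the port"

learned = set()
print(ingress("0c1a.3b2f.aaaa", learned))   # forward (MAC learned)
print(ingress("00a1.b2c3.d4e5", learned))   # violation: err-disable the port
```

The second frame reproduces the scenario: a new source MAC arrives after the single allowed address is learned, and in shutdown mode the whole port is err-disabled rather than just the offending frame dropped.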

Recovering an err-disabled Port

! ── Manual recovery — shut then no shut ──────────────────────────
! ── Only do this after investigating and resolving the cause ──────
NetsTuts_SW1(config)#interface FastEthernet0/3
NetsTuts_SW1(config-if)#shutdown

NetsTuts_SW1(config-if)#no shutdown
NetsTuts_SW1(config-if)#exit

! ── Verify recovery ───────────────────────────────────────────────
NetsTuts_SW1#show interfaces FastEthernet0/3 | include protocol
FastEthernet0/3 is up, line protocol is up (connected)

! ── Optional: Enable auto-recovery for port-security ─────────────
! ── Automatically re-enables the port after the timer expires ─────
NetsTuts_SW1(config)#errdisable recovery cause psecure-violation
NetsTuts_SW1(config)#errdisable recovery interval 30
  
The shutdown / no shutdown cycle manually clears the err-disabled state. Always investigate the cause before recovering — if the violation condition still exists (the unauthorised device is still connected), the port will immediately return to err-disabled. For port security: identify the legitimate MAC address, update the port security config to allow it (switchport port-security mac-address [mac]), then recover. errdisable recovery cause psecure-violation enables automatic recovery — the port re-enables itself after the configured interval, which is useful in environments where brief access interruptions are acceptable but constant manual intervention is not.

Common err-disabled Causes Reference

  psecure-violation  — MAC address violation on a port-security-enabled port.
                       Resolution: identify and authorise the correct MAC,
                       then shut/no shut. See Violation Modes.
  bpduguard          — BPDU received on a PortFast-enabled access port — a
                       rogue switch is connected. Resolution: remove the
                       switch, then shut/no shut. See PortFast & BPDU Guard
                       and STP Overview.
  loopback           — A loop detected on the port — the cable connects back
                       to another port on the same switch. Resolution: remove
                       the looping cable, then shut/no shut.
  link-flap          — The interface flapped (went up/down) more than 5 times
                       in 10 seconds. Resolution: investigate the cable and
                       NIC — a faulty cable causing intermittent signal loss
                       triggers this.
  storm-control      — A broadcast, multicast, or unicast storm exceeded the
                       configured threshold. Resolution: identify the storm
                       source (loop, misconfigured device), remove it, then
                       recover.
  udld               — UDLD detected a unidirectional link — one direction of
                       a fibre is broken. Resolution: check both fibre strands
                       and connectors; unidirectional links are an STP hazard.

8. show interfaces — Full Counter Reference

Mastering show interfaces output is the single most important Layer 1 diagnostic skill. This annotated reference maps every significant counter to its meaning and diagnostic value:

NetsTuts_SW1#show interfaces GigabitEthernet0/1
GigabitEthernet0/1 is up, line protocol is up (connected)         ← [1]
  Hardware is Gigabit Ethernet, address is 0c1a.3b2f.0101
  MTU 1500 bytes, BW 1000000 Kbit/sec, DLY 10 usec,              ← [2]
     reliability 255/255, txload 1/255, rxload 1/255              ← [3]
  Encapsulation ARPA
  Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX          ← [4]
  ...
  Last input 00:00:00, output 00:00:00, output hang never         ← [5]
  Last clearing of "show interfaces" counters 00:42:17            ← [6]
  Input queue: 0/75/0/0 (size/max/drops/flushes)                  ← [7]
  Output queue: 0/40 (size/max)
  5 minute input rate 14000 bits/sec, 18 packets/sec              ← [8]
  5 minute output rate 12000 bits/sec, 15 packets/sec
     42817 packets input, 54807552 bytes, 0 no buffer             ← [9]
     Received 214 broadcasts (842 multicasts)
     0 runts, 0 giants, 0 throttles                               ← [10]
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored        ← [11]
     41204 packets output, 52741120 bytes, 0 underruns            ← [12]
     0 output errors, 0 collisions, 2 interface resets            ← [13]
     0 unknown protocol drops
     0 babbles, 0 late collision, 0 deferred                      ← [14]
     0 lost carrier, 0 no carrier                                  ← [15]
  
Marker Field Diagnostic Significance
[1] Line status / Protocol status The primary triage field — determines which layer the fault is at (see Section 1 table). The parenthetical (connected/notconnect/err-disabled) adds detail on switches
[2] BW (bandwidth) The negotiated or configured bandwidth in Kbit/sec. Used by routing protocols for metric calculation. A 100000 Kbit/sec value on a Gigabit interface indicates a speed mismatch
[3] reliability A 5-minute exponential average of link reliability — 255/255 is perfect. Values below 250/255 indicate ongoing errors. Watch this number after a fix to confirm improvement
[4] Duplex / Speed Current negotiated or configured duplex and speed. The most direct speed/duplex check. Prefix a- means auto-negotiated
[5] Last input / Last output "never" means no traffic has ever passed — confirms the interface is connected but idle or misconfigured. A time value shows the age of the last frame
[6] Last clearing of counters When counters were last reset with clear counters. Critical for interpreting error counts — a large CRC value means nothing without knowing the time span over which it accumulated
[7] Input queue drops Non-zero value indicates the CPU or interface buffer is being overwhelmed — packets arriving faster than they can be processed. A Layer 3 performance/capacity issue rather than Layer 1
[8] 5-minute rates Rolling average of current throughput. Compare against interface bandwidth to calculate utilisation — 14000 bits/sec on a 1 Gbps interface is roughly 0.0014% utilisation
[9] packets input / bytes Cumulative traffic since last counter clear. Non-zero confirms the interface has received traffic — rules out complete physical disconnection
[10] runts / giants Runts (under 64 bytes) indicate duplex mismatch or a faulty NIC. Giants (over 1518 bytes) indicate MTU misconfiguration or jumbo frames on a non-jumbo port
[11] input errors / CRC / frame CRC = corrupted frames. Frame = illegal frame boundaries (usually duplex mismatch). These counters combined with [14] distinguish duplex mismatch from physical cable fault
[12] output underruns Interface could not get data from the CPU fast enough to fill the transmit queue — CPU bottleneck or software issue
[13] output errors / collisions / interface resets Output errors = transmit failures. Collisions on a full-duplex port = duplex mismatch. Interface resets = the interface was reset due to errors — a rising reset counter alongside CRC errors indicates a persistent fault
[14] late collision / deferred Late collisions are the definitive duplex mismatch indicator — they cannot occur on a correctly operating full-duplex interface. Deferred = transmit was delayed because the line was busy (half-duplex only)
[15] lost carrier / no carrier Lost carrier = signal was present then disappeared (cable pulled while active, flapping transceiver). No carrier = no signal detected at all. Serial interfaces show these; Ethernet uses "notconnect" instead
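The counters above can be spot-checked without scrolling the full output by filtering on the key lines — a sketch using the IOS output modifier (the regex alternatives are illustrative):

```
NetsTuts_SW1# show interfaces GigabitEthernet0/1 | include reliability|errors|collision
```

This returns the reliability/load line, the input-error line ([11]), and the output-error and collision lines ([13], [14]) in a single view — convenient when watching a link after a fix.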

9. Layer 1 Troubleshooting Methodology

Step Action Command / Tool What to Look For
1 Check physical indicators first LED on port and device Off = no link, amber = error/blocked, green = up. Physical observation before CLI
2 Get quick status across all interfaces show ip interface brief Identify any non-up/up interfaces — Status or Protocol not "up"
3 Read the two-line status on the problem interface show interfaces [name] (first line) down/down = Layer 1. up/down = Layer 2. administratively down = config
4 Check for accidental shutdown show running-config interface [name] Presence of shutdown command — fix with no shutdown
5 Read all error counters show interfaces [name] (full output) CRC, runts, late collisions → duplex mismatch or cable. CRC only → cable/EMI. Giants → MTU
6 Check speed and duplex show interfaces [name] status Confirm both sides match. a- prefix = auto-negotiated. Mismatch = hardcode both ends to match
7 Run cable diagnostics test cable-diagnostics tdr interface [name] then show cable-diagnostics tdr Open, short, impedance mismatch — note distance to fault. Replace cable if fault found
8 Check err-disabled cause show interfaces status err-disabled + show port-security interface [name] Identify which feature triggered the shutdown — resolve the cause before recovering the port
9 Clear counters and re-observe clear counters [interface] After any fix, clear counters and monitor for 5 minutes — confirm errors do not recur
10 Check reliability value trend show interfaces [name] | include reliability 255/255 = healthy. Below 250/255 = ongoing signal quality problem — cable or transceiver
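In practice the methodology condenses to a short command sequence — a hedged walk-through (interface names are illustrative, and the TDR step briefly takes the port offline):

```
NetsTuts_SW1# show ip interface brief | include down
NetsTuts_SW1# show interfaces GigabitEthernet0/1
NetsTuts_SW1# show running-config interface GigabitEthernet0/1
NetsTuts_SW1# test cable-diagnostics tdr interface GigabitEthernet0/1
NetsTuts_SW1# show cable-diagnostics tdr interface GigabitEthernet0/1
NetsTuts_SW1# clear counters GigabitEthernet0/1
```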

Key Points & Exam Tips

  • The two-line interface status is the Layer 1 triage starting point — down/down = Layer 1 (no signal), up/down = Layer 2 (keepalive failure), administratively down/down = intentional shutdown command. Use ping to confirm end-to-end reachability once Layer 1 is up.
  • Late collisions are the definitive duplex mismatch indicator — they are physically impossible on a correctly operating full-duplex link. Any non-zero late collision counter on a switch port running full-duplex is a duplex mismatch until proven otherwise.
  • The duplex mismatch symptom pattern is: CRC errors + runts + late collisions + collisions — all rising together. Pure cable faults produce CRC errors without late collisions.
  • Best practice for speed and duplex: configure speed auto and duplex auto on both ends — 802.3 auto-negotiation selects the highest common mode. Hardcoding one side and leaving the other on auto is the primary cause of duplex mismatch: the auto side can sense the link speed but not the duplex setting, so it falls back to half-duplex.
  • The a- prefix in show interfaces status output (a-full, a-1000) means the value was auto-negotiated. Without the prefix, the value is hardcoded.
  • reliability 255/255 is perfect. Values below 250/255 in show interfaces indicate ongoing signal quality issues on the link — investigate cable and transceiver.
  • err-disabled is an IOS response to a security or protocol violation — not a hardware fault. Always identify the cause (bpduguard, port-security, loopback, udld) before recovering with shutdown / no shutdown.
  • The built-in IOS TDR (test cable-diagnostics tdr) locates physical cable faults with distance accuracy — available on most Catalyst switches. It briefly takes the port offline during the test.
  • clear counters [interface] resets all error counters to zero — always clear before a post-fix monitoring period so pre-fix accumulated errors do not mislead the assessment. Counters never reset on their own; only clear counters or a device reload zeroes them.
  • On the CCNA exam: know the down/down vs administratively down distinction, the duplex mismatch error counter signature (CRC + runts + late collisions), the meaning of all major interface counters, and the speed auto / duplex auto best practice.
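The auto-negotiation best practice from the list above, applied across a block of access ports — a sketch (the port range is illustrative):

```
NetsTuts_SW1(config)# interface range GigabitEthernet0/1 - 24
NetsTuts_SW1(config-if-range)# speed auto
NetsTuts_SW1(config-if-range)# duplex auto
NetsTuts_SW1(config-if-range)# end
NetsTuts_SW1# show interfaces status
```

Ports that negotiated successfully show a-full / a-1000 in the resulting status output.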

Next Steps: Once Layer 1 is confirmed healthy (up/up, zero error counters), advance to Layer 2 troubleshooting — verify the MAC address table is populating correctly with show mac address-table and check VLAN assignment with show vlan. For port security configuration that causes err-disabled events see Port Security & Sticky MAC and Violation Modes. For STP-related err-disabled caused by BPDU Guard see PortFast & BPDU Guard. For cable standards and maximum distances referenced in this guide see Structured Cabling.

TEST WHAT YOU LEARNED

1. show ip interface brief shows an interface as "down / down". A second interface shows "administratively down / down". What is the key distinction between these two states, and what is the correct fix for each?

Correct answer is C. The distinction is critical for triage efficiency. "down/down" (line status down, protocol down) indicates a genuine Layer 1 fault — no electrical or optical signal is reaching the interface. The physical cable, connector, transceiver, or the remote device must be investigated. "Administratively down" appears when the interface has the shutdown command in its configuration — IOS has disabled it intentionally. The physical layer may be perfectly healthy (cable connected, device on) but IOS will not bring the interface up until no shutdown is issued. Confusing the two wastes time — an engineer who replaces a cable on an administratively down interface will see no change until they also issue no shutdown.
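The fix for the administratively down interface is a single interface-level command — a minimal sketch (interface name illustrative):

```
NetsTuts_SW1(config)# interface GigabitEthernet0/1
NetsTuts_SW1(config-if)# no shutdown
```

If the physical layer is healthy the interface transitions to up/up within a few seconds; if it stays down/down, the fault is genuinely Layer 1.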

2. show interfaces FastEthernet0/2 shows: 3,847 CRC errors, 1,204 runts, 892 late collisions, 2,103 collisions. The port is configured as Full-duplex, 100Mb/s. What is the most likely root cause?

Correct answer is A. The presence of late collisions is the decisive factor. On a full-duplex link, a collision is physically impossible — each direction uses separate wire pairs and there is no CSMA/CD process. A late collision (one detected after the first 64 bytes of the frame have already been transmitted) occurring on a port configured as full-duplex means the port's peer is operating in half-duplex mode. The half-duplex device uses CSMA/CD and backs off when it senses the full-duplex switch transmitting — but the full-duplex switch never backs off, so the two transmissions overlap, appearing as a collision on the half-duplex side and as corrupted frames (CRC errors, runts) on the switch. A damaged cable would produce CRC errors alone — no late collisions. MTU issues would show giants, not runts and collisions.
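A hedged remediation sketch for the Fa0/2 scenario — return both ends to auto-negotiation (the peer's NIC must also be set to auto), then clear and re-observe:

```
NetsTuts_SW1(config)# interface FastEthernet0/2
NetsTuts_SW1(config-if)# speed auto
NetsTuts_SW1(config-if)# duplex auto
NetsTuts_SW1(config-if)# end
NetsTuts_SW1# clear counters FastEthernet0/2
```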

3. A switch port shows a-full and a-1000 in the Duplex and Speed columns of show interfaces status. A second port shows full and 1000 (no prefix). What does the difference in prefix mean, and which configuration is preferred for server connections?

Correct answer is D. The a- prefix is IOS notation for auto-negotiated values. a-full means duplex was negotiated via 802.3 auto-negotiation; full without the prefix means it was manually configured with duplex full. Hardcoding server connections is a long-standing enterprise practice — auto-negotiation can occasionally fail or produce unexpected results when NIC drivers are updated, the server reboots, or a hypervisor virtual NIC behaves unexpectedly. When hardcoding, the critical rule is to hardcode both ends identically — speed 1000 and duplex full on the switch port, and 1000/full in the NIC driver settings on the server. If only one end is hardcoded, the auto-negotiating end may select a different value — at 10/100 speeds it falls back to half-duplex — creating a mismatch. Note that 1000BASE-T still performs auto-negotiation internally for master/slave clock selection, which is one reason speed auto / duplex auto on both ends is the modern default recommendation.
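If hardcoding is chosen, the switch-side configuration looks like this sketch (the interface and description are illustrative, and some platforms restrict duplex options at 1000 Mb/s; the server NIC must be set to the same values):

```
NetsTuts_SW1(config)# interface GigabitEthernet0/5
NetsTuts_SW1(config-if)# description Server-01 uplink - hardcoded, NIC must match
NetsTuts_SW1(config-if)# speed 1000
NetsTuts_SW1(config-if)# duplex full
```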

4. show cable-diagnostics tdr interface GigabitEthernet0/2 shows Pair D with status "Short" at 4 metres. What does this mean and what is the appropriate action?

Correct answer is B. TDR (Time Domain Reflectometer) works by sending an electrical pulse down each wire pair and measuring the reflected echo. A short circuit (two conductors touching) reflects the pulse at the fault location, and the travel time calculates the distance. A "Short" status means the two conductors of pair D are electrically connected at approximately 4 metres from the switch — often caused by a damaged section of cable, a crushed insulation point, or a poorly crimped RJ45 connector. At Gigabit (1000BASE-T) speeds, all four pairs transmit simultaneously using a complex encoding scheme — a fault on any single pair corrupts the combined signal, causing CRC errors on the interface. The precise distance reading (4 metres) is the TDR's key value — it directs the technician to the specific section of cable to inspect or replace rather than requiring the entire run to be traced.

5. show interfaces FastEthernet0/3 shows the interface as "down, line protocol is down (err-disabled)". What is the correct sequence of steps to recover the port?

Correct answer is C. err-disabled is IOS actively protecting the network from a detected threat or fault condition — it is not a physical hardware failure. Recovering the port without resolving the cause is pointless: if a rogue switch is still connected (triggering bpduguard), or an unauthorised MAC is still plugged in (triggering port-security), the port will immediately re-enter err-disabled state the moment no shutdown is issued and the device sends its first frame. The correct workflow is always: (1) identify the specific cause using the appropriate show command, (2) eliminate the cause physically or via configuration, (3) recover the port. For port-security violations this means identifying the correct MAC address and updating the port-security config. For bpduguard this means removing the unauthorised switch. Then shutdown / no shutdown will stick.
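The recovery workflow in command form — a hedged sketch for the port-security case (interface name illustrative):

```
NetsTuts_SW1# show interfaces status err-disabled
NetsTuts_SW1# show port-security interface FastEthernet0/3
! ...resolve the cause: remove the offending device or update the MAC config...
NetsTuts_SW1(config)# interface FastEthernet0/3
NetsTuts_SW1(config-if)# shutdown
NetsTuts_SW1(config-if)# no shutdown
```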

6. An interface shows reliability 228/255 and a steadily rising CRC error count, but zero late collisions and zero runts. Both ends are confirmed full-duplex, 1 Gbps auto-negotiated. What is the most likely cause and next diagnostic step?

Correct answer is D. The diagnostic logic here is process of elimination. CRC errors indicate frames are being corrupted in transit. The two causes of CRC errors are (1) duplex mismatch and (2) physical signal quality problems. Duplex mismatch is identified by the presence of late collisions — which are absent here. Both ends are confirmed full-duplex (no mismatch). Therefore the CRC errors must be from physical signal corruption: a damaged cable, a bent fibre, a dirty optical connector, a failing transceiver, or EMI interference. Reliability 228/255 (89.4%) confirms ongoing signal quality degradation — not an occasional glitch. The TDR test is the next step for copper; an optical power meter test is the equivalent for fibre. EMI is investigated by checking whether the cable runs parallel to power cables or through areas with electrical equipment.

7. What does the "Last input never" field in show interfaces output indicate, and how is it different from "Last input 2w3d"?

Correct answer is A. "Last input" and "Last output" track the age of the most recently received and transmitted frame respectively. "Never" in the context of a switch access port confirms the port has never received a valid frame — this is expected for a notconnect port (no cable) or a newly provisioned port awaiting a device. "2w3d" (2 weeks, 3 days) means there was inbound traffic at some point but not recently — useful for identifying zombie ports connected to devices that have been off for an extended time, helping with documentation audits. Comparing "Last input" against "Last output" can also reveal one-way communication — if output is recent but input is never, the link may be unidirectional (one fibre strand broken) or the remote device is not responding.

8. Which cable testing tool is most appropriate for locating the precise physical position of a break in a copper cable run that is installed in conduit inside a wall?

Correct answer is B. A continuity tester (Option A) will confirm the cable is broken — it will show an open circuit on the affected pair — but it cannot tell you where in the cable run the break is located. For an in-wall conduit installation, knowing the break is "somewhere in the cable" means opening the entire wall or pulling an entirely new cable. The TDR measures the round-trip travel time of a reflected electrical pulse to calculate distance with metre-level precision — "break at 23 metres from the switch end" tells the technician exactly where to open the wall or where in the conduit the damage occurred. An optical power meter (Option C) tests fibre, not copper. A loopback plug (Option D) tests the port hardware, not the cable run. The built-in Cisco IOS TDR (test cable-diagnostics tdr) on Catalyst switches provides this capability without requiring an external device.

9. A port shows 0 CRC errors, 0 runts, 0 late collisions, but the output drops counter is climbing steadily. What does this indicate and is it a Layer 1 problem?

Correct answer is C. Output drops occur when the interface's output queue fills up — frames are queued for transmission faster than the interface can send them, and when the queue is full, new frames are dropped. This is a capacity/QoS issue, not a physical layer fault. Zero CRC, runts, late collisions, and giants all confirm the physical layer is operating perfectly — frames that do make it through are transmitted without corruption. The output drop counter is often seen on WAN interfaces where a high-bandwidth LAN feeds into a lower-bandwidth WAN link (bandwidth mismatch), or on any interface that is genuinely saturated. The CCNA-level fix is to check interface utilisation (show interfaces 5-minute rates vs BW), consider QoS to prioritise critical traffic, or upgrade the link speed. Spanning tree blocking (Option D) would show the port in blocking state in show spanning-tree and produce no output activity — not rising output drops.
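Utilisation versus drops can be checked in one filtered command — a sketch (field names vary slightly by IOS version):

```
NetsTuts_SW1# show interfaces GigabitEthernet0/1 | include rate|drops
```

Compare the 5-minute output rate against the BW value — a sustained rate near line speed alongside rising output drops confirms saturation rather than a physical fault.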

10. An engineer has fixed a duplex mismatch on Fa0/2 by setting both ends to duplex auto and speed auto. show interfaces FastEthernet0/2 still shows 3,847 CRC errors and 312 late collisions. How should this be interpreted?

Correct answer is D. IOS interface counters are cumulative — they accumulate from the last time clear counters was run (or from device boot if never cleared). They do not automatically reset when a configuration change is made, when the interface bounces, or when the duplex state changes. The 3,847 CRC errors and 312 late collisions are the historical total from before the fix was applied. To assess whether the fix worked, the counters must be cleared and then monitored for a representative period (5–10 minutes of normal traffic). If the counters remain at zero (or increment only negligibly) after clearing, the duplex mismatch is resolved. This is why clear counters is an essential post-fix step — assessing pre-fix accumulated error counts leads to incorrect conclusions about the current interface health.
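The post-fix reset and verification described above — a sketch (the confirm prompt is paraphrased from typical IOS behaviour):

```
NetsTuts_SW1# clear counters FastEthernet0/2
Clear "show interface" counters on this interface [confirm]
NetsTuts_SW1# show interfaces FastEthernet0/2 | include Last clearing|CRC|late collision
```

After 5–10 minutes of normal traffic, CRC and late collision counters still at zero confirm the mismatch is resolved.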