Friday, September 21, 2018

Networking Basics: Routers

Route XP
Networking Basics: Routers

When looking at networking basics, understanding the way a network operates is the first step to understanding routing and switching. A network operates by connecting computers and peripherals using two pieces of equipment: switches and routers.

Switches and routers, essential networking basics, enable the devices connected to your network to communicate with each other, as well as with other networks. Though they can look quite similar, routers and switches perform very different functions in a network.

Routers, the second essential component of our networking basics, are used to tie multiple networks together. For example, you would use a router to connect your networked computers to the Internet and thereby share a single Internet connection among many users. The router acts as a dispatcher, choosing the best route for your traffic to travel so that it arrives quickly.

Routers analyze the data being sent over a network, change how it is packaged, and send it to another network or over a different type of network. They connect your business to the outside world, protect your information from security threats, and can even decide which computers get priority over others.

Fig 1.1- Basics of Router
Depending on your business and your networking plans, you can choose from routers that include different capabilities. These can include networking basics such as:
  • Firewall: Specialised software that examines incoming data and protects your business network against attacks
  • Virtual Private Network (VPN): A way to allow remote employees to access your network securely from outside the office
  • IP phone network: Combine your company's computer and telephone networks, using voice and conferencing technology, to simplify and unify your communications


There are many Cisco router models used in small business, enterprise, and data center environments.


Tuesday, September 11, 2018

Comparing Li-Fi and Wi-Fi Technology

Route XP
#Advanced Technology
#Li-Fi Technology
What is “LiFi”?

Prof. Harald Haas coined the term “LiFi” at his TED Global talk to describe the high speed, bidirectional, networked and mobile wireless communication of data using light.

Li-Fi is a bidirectional, high speed and fully networked wireless communication technology similar to Wi-Fi. Coined by Prof. Harald Haas, Li-Fi is a subset of optical wireless communications (OWC) and can be a complement to RF communication (Wi-Fi or Cellular network), or a replacement in contexts of data broadcasting.

It is wireless and uses visible light, infrared, and near-ultraviolet spectrum (instead of radio frequency waves), as part of optical wireless communications technology. This spectrum can carry much more information and has been proposed as a solution to RF bandwidth limitations. A complete solution also includes an industry-led standardization process.

Li-Fi can deliver internet access up to 100 times faster than traditional Wi-Fi, offering speeds of up to 1 Gbps (gigabit per second).
It requires a light source, such as a standard LED bulb, an internet connection and a photo detector.
The technology was tested by Estonian start-up Velmenni in Tallinn.

Velmenni used a Li-Fi-enabled light bulb to transmit data at speeds of 1 Gbps. Laboratory tests have shown theoretical speeds of up to 224 Gbps.

It was tested in an office, to allow workers to access the internet, and in an industrial space, where it provided a smart lighting solution.

Fig 1.1- Li-Fi vs Wi-Fi

The term Li-Fi was first coined by Prof. Harald Haas from Edinburgh University, who demonstrated the technology at a TED (Technology, Entertainment and Design) conference in 2011.

Prof. Haas described a future in which billions of light bulbs could become wireless hotspots.
One of the big advantages of Li-Fi is that, unlike Wi-Fi, it does not interfere with other radio signals, so it could be utilised on aircraft and in other places where interference is an issue.
While the spectrum for radio waves is in short supply, the visible light spectrum is 10,000 times larger, meaning it is unlikely to run out any time soon.

But the technology also has its drawbacks - most notably the fact that it cannot be deployed outdoors in direct sunlight, because that would interfere with its signal.

Nor can the technology travel through walls, so initial use is likely to be limited to places where it can supplement Wi-Fi networks, such as congested urban areas or places where Wi-Fi is not considered safe, such as hospitals.

Sunday, September 9, 2018

Cisco ASR 1002-X Basics

Route XP
Cisco ASR 1002-X

# Cisco Routers
#CCIE Candidates 
# ASR Routers Specifications
# Capacity and Utilization
# RP and ESP Processors
# RP - Route Processors
# ESP - Embedded Services Processors

Let's talk about the basics of the Cisco ASR 1002-X Router. It is similar to the ASR 1001-X Router, but the built-in forwarding capacity of the ASR 1002-X differs from that of the ASR 1001-X.

We will also discuss the difference between Route Processors (RPs) and Embedded Services Processors (ESPs) and how these processors are used in ASR routers.

ASR routers are generally used where there is a demand for high bandwidth, for example when a customer's WAN capacity requirement is 1 Gbps or more and is expected to grow to 2, 5, or 10 Gbps in the future.

They also fit requirements where we are providing a data center solution and need high-end routers as CE (customer edge) routers.

Fig 1.1- ASR 1002-X 

Note: The forwarding (backplane) capacity of ASR routers can be upgraded by licensing a higher embedded services processor level, such as ESP-5, ESP-10, or ESP-20. In this way the backplane capacity can be raised up to 20 Gbps on these ASR routers.
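As a quick sanity check (a minimal verification sketch; the exact fields shown differ per chassis and IOS XE release), the installed RP/ESP and the current software and license level can be confirmed from the CLI:

! List the chassis slots, including the Route Processor (R0) and the Embedded Services Processor (F0)
Router#show platform
! Shows the IOS XE version, platform details and the active license level
Router#show version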

If you have any queries about Cisco ASR routers, in terms of either hardware or functionality, feel free to contact us and we will be happy to explain further.

Let's talk about these routers in detail:

Cisco ASR 1000 Series Aggregation Services Routers aggregate multiple WAN connections and network services, including encryption and traffic management, and forward traffic across WAN connections at line speeds from 2.5 to 200 Gbps. The routers include both hardware and software redundancy in an industry-leading high-availability design.

The latest addition to the Cisco ASR family is the Cisco ASR 1001-X Router, a single-rack-unit (RU) router supporting 2.5- to 20-Gbps forwarding capacity. Cisco ASR 1001-X Router speeds can be turned up incrementally to as much as 20 Gbps with a simple throughput upgrade license, rather than having to buy additional hardware blades or new routers.

The Cisco ASR 1000 Series supports Cisco IOS XE Software, a modular operating system with modular packaging, feature velocity, and powerful resiliency. The Cisco ASR 1000 Series Embedded Services Processors (ESPs), which are based on Cisco QuantumFlow Processor technology, accelerate many advanced features such as crypto-based access security, Network Address Translation (NAT), threat defense with Cisco Zone-Based Firewall (ZBFW), deep packet inspection (DPI), Cisco Unified Border Element (CUBE), and a diverse set of data center interconnect (DCI) capabilities. These services are implemented in Cisco IOS XE Software without the need for additional hardware support.

Cisco ASR 1000 Routers sit at the edge of your enterprise data center or large office connecting to the WAN, as well as in service provider points of presence (POPs). The Cisco ASR 1000 Series will benefit the following types of customers:
  • Enterprises experiencing explosive network traffic as mobility, cloud networking, and video and collaboration usage ramp up. Cisco ASRs consolidate these various traffic streams and apply traffic management and redundancy properties to them to maintain consistent performance among enterprise sites and cloud locations.
  • Network service providers needing to deliver high-performance services, such as DCI and branch-office server aggregation, to business customers. Service providers can also use these multiservice routers to deploy hosted and managed services to business customers and multimedia services to residential customers.
  • Existing Cisco 7200 Series Router (End-of-Sale) customers looking for a simple migration to a new multiservice platform that delivers greater performance with the same design.

Cisco Catalyst 3850 Fiber Switch Model

Route XP
The Cisco Catalyst 3850 is a relatively recently launched Cisco switch family with fiber connectivity capabilities. It can be used in the data center environment for various purposes.

Let's discuss this switch in detail below:

The Cisco Catalyst 3850 Series is the next generation of enterprise-class stackable Ethernet and Multi-gigabit Ethernet access and aggregation layer switches that provide full convergence between wired and wireless on a single platform. Cisco’s new Unified Access Data Plane (UADP) application-specific integrated circuit (ASIC) powers the switch and enables uniform wired-wireless policy enforcement, application visibility, flexibility and application optimization. 

This convergence is built on the resilience of the new and improved Cisco StackWise-480 technology. The Cisco Catalyst 3850 Series Switches support full IEEE 802.3at Power over Ethernet Plus (PoE+), Cisco Universal Power over Ethernet (Cisco UPOE), modular and field-replaceable network modules, RJ45 and fiber-based downlink interfaces, and redundant fans and power supplies. With speeds that reach 10 Gbps, the Cisco Catalyst 3850 Multigigabit Ethernet Switches support current and next-generation wireless speeds and standards (including 802.11ac Wave 2) on existing cabling infrastructure.

Product Overview
Integrated wireless controller capability with: 
  • Up to 40G of wireless capacity per switch (48-port models) 
  • Support for up to 50 access points and 2000 wireless clients on each switching entity (switch or stack)
  • 24 and 48 10/100/1000 data, PoE+, and Cisco UPOE models with Energy Efficient Ethernet (EEE)
  • Cisco StackWise-480 technology provides scalability and resiliency with 480 Gbps of stack throughput (see the verification sketch after this list)
  • Cisco StackPower technology provides power stacking among stack members for power redundancy
  • Three optional uplink modules with 4 x Gigabit Ethernet, 2 x 10 Gigabit Ethernet, or 4 x 10 Gigabit Ethernet ports
  • Dual redundant, modular power supplies and three modular fans providing redundancy
  • Full IEEE 802.3at (PoE+) with 30W power on all ports in a 1-rack-unit (RU) form factor
  • Cisco UPOE with 60W power per port in 1 rack unit (RU) form factor
  • Software support for IPv4 and IPv6 routing, multicast routing, modular quality of service (QoS), Flexible NetFlow (FNF) Version 9, and enhanced security features
  • Single universal Cisco IOS Software image across all license levels, providing an easy upgrade path for software features
  • Enhanced limited lifetime warranty (E-LLW) with next business day (NBD) advance hardware replacement and 90-day access to Cisco Technical Assistance Center (TAC) support
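As referenced in the StackWise-480 bullet above, here is a minimal stack verification sketch (the switch number and priority value are examples only, and on some software releases the priority command is entered in a different mode):

! Display stack members, their roles (active/standby/member) and state
Switch#show switch
! Prefer switch 1 as the active switch by giving it the highest priority (1-15)
Switch#switch 1 priority 15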

Network Modules
The Cisco Catalyst 3850 Series Switches support three optional network modules for uplink ports. The default switch configuration does not include an uplink module; at the time of purchase the customer has the flexibility to choose from the following network modules:
  • 4 x Gigabit Ethernet with Small Form-Factor Pluggable (SFP)
  • 2 x 10 Gigabit Ethernet with SFP+ or 4 x Gigabit Ethernet with SFP
  • 4 x 10 Gigabit Ethernet with SFP+ (supported on the 48-port models only)
Fig 1.1- Cisco 3850 Switch


The C3850-NM-4-10G module is supported on the 48-port models only. The SFP+ interface supports both 10 Gigabit Ethernet and Gigabit Ethernet ports, allowing customers to protect their investment in Gigabit Ethernet SFPs and upgrade to 10 Gigabit Ethernet when business demands change, without having to do a comprehensive upgrade of the access switch. All three network modules are hot swappable.

Power over Ethernet Plus (PoE+)
In addition to PoE (IEEE 802.3af), the Cisco Catalyst 3850 Series Switches support PoE+ (IEEE 802.3at standard), which provides up to 30W of power per port. The Cisco Catalyst 3850 Series Switches can provide a lower total cost of ownership (TCO) for deployments that incorporate Cisco IP phones, Cisco Aironet wireless LAN (WLAN) access points, or any IEEE 802.3at-compliant end device. 

PoE removes the need for wall power to each PoE-enabled device and eliminates the cost of the additional electrical cabling and circuits that would otherwise be necessary in IP phone and WLAN deployments.
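As a small illustration (a minimal sketch; GigabitEthernet1/0/1 is just an example access port with a powered device attached), per-port PoE behaviour can be controlled and verified like this:

Switch(config)#interface GigabitEthernet1/0/1
! "auto" (the default) lets the switch detect a powered device and negotiate the power it needs
Switch(config-if)#power inline auto
! Verify allocated and consumed power on the port
Switch#show power inline GigabitEthernet1/0/1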

Cisco Universal Power over Ethernet (UPOE)
Cisco Universal Power over Ethernet is a breakthrough technology, offering the following services and benefits:
  • 60W per port to enable a variety of end devices such as Samsung VDI client, BT IP turret systems in trading floors, Cisco Catalyst compact switches in retail/hospitality environments, personal Cisco Telepresence systems, and physical access control devices
  • High availability for power and guaranteed uninterrupted services, a requirement for critical applications (e911)
  • Lower OpEx: provides network resiliency at lower cost by consolidating backup power into the wiring closet
  • Faster deployment of new campus access networking infrastructures by eliminating the need for a power outlet for every endpoint


Wednesday, September 5, 2018

BPDU Filter and Guard

Route XP
#Cisco #Routers
#CCNA R&S, CCNP and CCIE Students

#Cisco Systems Engineer
#Specially Routing Students
# Network Engineers
#Cisco Switching Engineers

BPDU Filter and Guard

BPDU Filtering, BPDU Guard, and Root Guard are STP security mechanisms. In this post I will only describe BPDU Filtering and BPDU Guard.

These two features provide protection against spanning-tree loops being created on ports where PortFast has been enabled. A device connected to a PortFast interface is not supposed to send BPDUs, but should this happen, BPDU Filtering and BPDU Guard provide protection.

Fig 1.1-BPDU Guard

BPDU Guard and BPDU Filtering can be configured in two different ways: from global configuration mode or from interface configuration mode. In global configuration mode the feature (either BPDU Guard or BPDU Filtering) takes effect on PortFast-enabled ports only. In interface configuration mode it affects only the specified port.


BPDU Guard
PortFast should be configured on ports where bridging loops are not expected to form (which means no BPDUs should be received on these ports), such as ports connected to end devices like a single workstation or server.

PortFast provides fast network access by entering the STP forwarding state directly (bypassing the listening and learning states). STP can still detect a bridging loop while PortFast is enabled on a port (STP keeps running), but only after a finite amount of time, namely the time required to move the port through the normal STP states.

If any BPDUs (superior to the current root or not) are received on a port configured with BPDU Guard, that port is immediately placed in the errdisable state.

If configured in global configuration mode, BPDU Guard will be enabled on all configured PortFast ports:
Sw1(config)#spanning-tree portfast bpduguard ?
     default Enable bpdu guard by default on all portfast ports

If configured in interface configuration mode, it will only be enabled on the specific port:
Sw1(Config-if)#spanning-tree bpduguard ?
      disable   Disable BPDU guard for the interface
      enable    Enable BPDU guard for the interface
BPDU Guard should be configured on all switch ports where STP PortFast is enabled. This prevents any possibility that a switch will be added to the port, either intentionally or by mistake.
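Putting this together, a minimal configuration sketch might look like the following (GigabitEthernet0/1 is just an example access port, and the errdisable auto-recovery at the end is optional):

! Enable PortFast and BPDU Guard on all access (PortFast) ports globally
Sw1(config)#spanning-tree portfast default
Sw1(config)#spanning-tree portfast bpduguard default
! Or enable them only on a specific access port
Sw1(config)#interface GigabitEthernet0/1
Sw1(config-if)#spanning-tree portfast
Sw1(config-if)#spanning-tree bpduguard enable
! Optionally let err-disabled ports recover automatically after 300 seconds
Sw1(config)#errdisable recovery cause bpduguard
Sw1(config)#errdisable recovery interval 300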

BPDU Filtering
BPDU Filtering allows a port to stop sending and/or receiving BPDUs, depending on how it is configured.

If it is configured from global configuration mode, BPDU Filtering will be enabled on all configured PortFast ports. No BPDUs will be sent out of those ports, which hides the STP topology from end users. But as soon as a BPDU is received, the port loses its PortFast status and BPDU Filtering is disabled.

The port is then taken back to normal STP operation and sends/receives BPDUs. See below for how to configure BPDU Filtering from global configuration mode:

Sw3(Config)#spanning-tree portfast bpdufilter default

If BPDU Filtering is configured from interface configuration mode, the result is completely different, as this causes the specific port to stop sending AND receiving BPDUs (incoming BPDUs are dropped). The port ignores any incoming BPDUs and changes to the forwarding state. This solution is not recommended, as it can result in bridging loops.

Sw3(Config-if)#spanning-tree bpdufilter enable


Note: If you enable BPDU Guard on the same interface as BPDU Filtering, BPDU Guard has no effect, because BPDU Filtering takes precedence over BPDU Guard. Interface-level BPDU Filtering is therefore not a recommended configuration.
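To confirm how these features are applied (a minimal verification sketch; the output format varies by platform and IOS version), check the spanning-tree summary and the per-interface detail:

! Shows whether PortFast, BPDU Guard and BPDU Filtering are enabled by default (globally)
Sw1#show spanning-tree summary
! Shows per-port state, including PortFast and guard settings, for a specific interface
Sw1#show spanning-tree interface GigabitEthernet0/1 detail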


BGP Unequal Load Cost Sharing

Route XP


#Cisco #Routers
#CCNA R&S, CCNP and CCIE Students
#Cisco Systems Engineer
#Specially Routing Students
# Network Engineers

BGP Unequal Load Cost Sharing

We are going to make R4 use both links when reaching prefixes on the internet.  25% of the traffic will go via AS2, and 75% of the traffic will go via AS3.  Remember that load sharing is always considered in one direction.  So we are only affecting traffic flowing from AS4 outbound to the internet.

Fig 1.1- BGP Unequal Load
Let’s start by taking a peek on R4’s BGP table for the 100.100.100.0/24 prefix.


R4#sh ip bgp 100.100.100.0
BGP routing table entry for 100.100.100.0/24, version 2
Paths: (2 available, best #2, table Default-IP-Routing-Table)
Multipath: eBGP
Flag: 0x820
  Advertised to update-groups:
     1
  3 1
    34.34.34.1 from 34.34.34.1 (3.3.3.3)
      Origin IGP, localpref 100, valid, external
  2 1
    24.24.24.1 from 24.24.24.1 (2.2.2.2)
      Origin IGP, localpref 100, valid, external, best

You can see that R4 is preferring the path via R2 (2.2.2.2) to reach this prefix. This is because of a lower router ID (I configured #bgp bestpath compare-routerid). First off, let's start by making R4 use both paths.


R4(config)#router bgp 4
R4(config-router)# maximum-paths 2  

R4#sh ip bgp 100.100.100.0
BGP routing table entry for 100.100.100.0/24, version 2
Paths: (2 available, best #2, table Default-IP-Routing-Table)
Multipath: eBGP
Flag: 0x820
  Advertised to update-groups:
     1
  3 1
    34.34.34.1 from 34.34.34.1 (3.3.3.3)
      Origin IGP, localpref 100, valid, external
  2 1
    24.24.24.1 from 24.24.24.1 (2.2.2.2)
      Origin IGP, localpref 100, valid, external, best

What's happened here is that the AS_PATHs, although of equal length, are not identical. This means we are still only using one of the available links. To fix this, we just need to use the hidden command shown below.


R4(config)#router bgp 4
R4(config-router)#bgp bestpath ?
  compare-routerid  Compare router-id for identical EBGP paths
  cost-community    cost community
  med               MED attribute   

R4(config-router)#bgp bestpath as-path multipath-relax   

R4#sh ip bgp  100.100.100.0
BGP routing table entry for 100.100.100.0/24, version 3
Paths: (2 available, best #2, table Default-IP-Routing-Table)
Multipath: eBGP
Flag: 0x800
  Advertised to update-groups:
     1
  3 1
    34.34.34.1 from 34.34.34.1 (3.3.3.3)
      Origin IGP, localpref 100, valid, external, multipath
  2 1
    24.24.24.1 from 24.24.24.1 (2.2.2.2)
      Origin IGP, localpref 100, valid, external, multipath, best   

R4#show ip route 100.100.100.0
Routing entry for 100.100.100.0/24
  Known via "bgp 4", distance 20, metric 0
  Tag 2, type external
  Last update from 34.34.34.1 00:00:26 ago
  Routing Descriptor Blocks:
    34.34.34.1, from 34.34.34.1, 00:00:26 ago
      Route metric is 0, traffic share count is 1
      AS Hops 2
      Route tag 2
  * 24.24.24.1, from 24.24.24.1, 00:57:32 ago
      Route metric is 0, traffic share count is 1
      AS Hops 2
      Route tag 2

The multipath attribute in the BGP table indicates we are now using multiple paths to reach this prefix, and the routing table confirms this. All that's left now is to do some unequal load sharing. Here's how we do it.


R4(config)#int fa0/0
R4(config-if)#bandwidth 30720
R4(config-if)#int fa0/1
R4(config-if)#bandwidth 10240
R4(config-if)#
R4(config-if)#router bgp 4
R4(config-router)#bgp dmzlink-bw
R4(config-router)#neighbor 24.24.24.1 dmzlink-bw
R4(config-router)#neighbor 34.34.34.1 dmzlink-bw
R4(config-router)#end   

R4#sh ip bgp 100.100.100.0
BGP routing table entry for 100.100.100.0/24, version 5
Paths: (2 available, best #2, table Default-IP-Routing-Table)
Multipath: eBGP
Flag: 0x8000
  Advertised to update-groups:
     1
  3 1
    34.34.34.1 from 34.34.34.1 (3.3.3.3)
      Origin IGP, localpref 100, valid, external, multipath
      DMZ-Link Bw 3840 kbytes
  2 1
    24.24.24.1 from 24.24.24.1 (2.2.2.2)
      Origin IGP, localpref 100, valid, external, multipath, best
      DMZ-Link Bw 1280 kbytes   

R4#sh ip route 100.100.100.0
Routing entry for 100.100.100.0/24
  Known via "bgp 4", distance 20, metric 0
  Tag 2, type external
  Last update from 24.24.24.1 00:01:01 ago
  Routing Descriptor Blocks:
    34.34.34.1, from 34.34.34.1, 00:01:01 ago
      Route metric is 0, traffic share count is 3
      AS Hops 2
      Route tag 2
  * 24.24.24.1, from 24.24.24.1, 00:01:01 ago
      Route metric is 0, traffic share count is 1 
      AS Hops 2
      Route tag 2

The interface bandwidth command is entered in kilobits per second (which is why the values may have looked a little odd); BGP then displays the DMZ link bandwidth in kilobytes, i.e. the configured value divided by 8 (30720 kbps becomes 3840 kbytes and 10240 kbps becomes 1280 kbytes). Because one link is advertised with three times the bandwidth of the other, the traffic share count becomes 3 via R3 and 1 via R2, which is the 75/25 split we set out to achieve. You can see from the BGP and RIB output above that traffic is now shared unequally between R2 and R3.

Loop Guard and UDLD

Route XP
#Cisco #Routers
#CCNA R&S, CCNP and CCIE Students
#Cisco Systems Engineer
#Cisco Switching
#Spanning Tree Loop guard Feature

Loop Guard and UDLD


The STP loop guard feature provides additional protection against Layer 2 forwarding loops (STP loops). An STP loop is created when an STP blocking port in a redundant topology erroneously transitions to the forwarding state. 

This usually happens because one of the ports of a physically redundant topology (not necessarily the STP blocking port) no longer receives STP BPDUs. In its operation, STP relies on continuous reception or transmission of BPDUs based on the port role. The designated port transmits BPDUs, and the non-designated port receives BPDUs. 

When one of the ports in a physically redundant topology no longer receives BPDUs, STP assumes that the topology is loop free. Eventually, the blocking port (in the alternate or backup role) becomes designated and moves to the forwarding state. This situation creates a loop.

The loop guard feature makes additional checks. If BPDUs are not received on a non-designated port, and loop guard is enabled, that port is moved into the STP loop-inconsistent blocking state, instead of the listening / learning / forwarding state. Without the loop guard feature, the port assumes the designated port role. The port moves to the STP forwarding state and creates a loop.

Loop guard and Unidirectional Link Detection (UDLD) functionality overlap, partly in the sense that both protect against STP failures caused by unidirectional links. However, these two features differ in functionality and how they approach the problem. This table describes loop guard and UDLD functionality:


Functionality | Loop Guard | UDLD
Configuration | Per-port | Per-port
Action granularity | Per-VLAN | Per-port
Autorecover | Yes | Yes, with err-disable timeout feature
Protection against STP failures caused by unidirectional links | Yes, when enabled on all root and alternate ports in a redundant topology | Yes, when enabled on all links in a redundant topology
Protection against STP failures caused by problems in the software (designated switch does not send BPDUs) | Yes | No
Protection against miswiring | No | Yes



Loop guard and UDLD are two ways to protect your network from loops caused by faulty fiber links. In short, loop guard is a spanning-tree optimisation, and UDLD is a layer 1/2 protocol (unrelated to spanning-tree) that protects your upper-layer protocols from causing loops in the network. To explain these features clearly, see the diagrams below.


The first diagram is the layer 2 spanning-tree topology, and the second diagram is the actual physical wiring used in the topology. You will need to use both diagrams as a reference point simultaneously in order to understand how loop guard and UDLD work in the examples I will provide.
Fig 1.1
In case you are not familiar with fiber, you need to make sure you understand the connection between Sw2 and Sw3 in the diagram on the right-hand side. It consists of two physical strands: one transmits data and the other receives data. These fiber strands are usually plugged into an SFP, and the SFP is then inserted into the switch. On the switch, this is shown as one physical port. In my diagram, it's shown as Gi0/1 on Sw2 and Sw3.


So then, let's start off with loop guard and its purpose. In the spanning-tree topology, Sw2's Gi0/1 port is currently in the alternate (ALTN) discarding state (in legacy spanning-tree, this is just known as being in a blocking state).

Sw3 is therefore responsible for forwarding frames on the segment between Sw2 and Sw3. So what happens if the fiber strand that currently carries BPDUs from Sw3's Tx port down towards Sw2's Rx port becomes broken for some reason? It means that Sw3, although it holds the designated port for the segment, is unable to send BPDUs towards Sw2.

Therefore Sw2 now believes there is no other switch on the segment out of its Gi0/1 interface and transitions the port into the designated role/forwarding state (remember, a port must receive BPDUs in order to stay in the ALTN/blocking state). When a port moves into the forwarding state, it is allowed to learn MAC addresses and also send and receive user traffic. What this effectively means is that we have created a unidirectional loop.

To illustrate this, assume a host connected to Sw1 sends a broadcast ARP (refer to the diagram below for this scenario, and note that Sw2 and Sw3 are now both designated ports for the segment between them).

Sw1 will forward the broadcast ARP frame out all ports except the port the frame was received on, so Sw2 and Sw3 get the frame. Sw2 is able to forward the frame to Sw3 because its port is now designated/forwarding. Sw3, when it receives this, will forward the frame out all ports except the one on which it was received, and sends it to Sw1.

And so on; there is an endless unidirectional loop from Sw1 -> Sw2 -> Sw3 -> Sw1. The key element to note is that because Sw2 is able to send data in the forwarding plane of the switch out of Gi0/1, and no ports are in a blocking state for the segment, user traffic ends up being forwarded back upstream towards the root bridge (i.e. no ports are in a blocking state on the segment).


This is what creates the loop in the network, and it is what UDLD and loop guard can be used to protect against.



Fig 1.2


Enabling loop guard on Sw2's Gi0/1 stops this ALTN port erroneously transitioning into the forwarding state for the segment when BPDUs stop being received on the port, i.e. it stops the port going from an ALTN port to a designated port when BPDUs are no longer received. In our scenario, where the fiber connected to Sw2's Rx port broke, it means that when BPDUs are no longer received from Sw3, the port gets put into the loop-inconsistent state (which is basically an STP blocking state) until BPDUs are received again.
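A minimal configuration sketch for this scenario might look like the following (Gi0/1 is the example uplink from the diagrams; loop guard can also be turned on globally instead of per interface):

! Enable loop guard on the blocking/alternate uplink port
Sw2(config)#interface GigabitEthernet0/1
Sw2(config-if)#spanning-tree guard loop
! Or enable it globally for all point-to-point links
Sw2(config)#spanning-tree loopguard default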

If we were to use UDLD instead of loop guard, this would effectively do the same thing. When the fiber to Sw2's Rx port fails and UDLD is in aggressive mode, the port is put into the err-disabled state. The way UDLD works out that there is a unidirectional link (i.e. just one strand of the fiber is broken) is pretty cool.

Each switch sends out periodic Ethernet multicast UDLD hellos destined to 0100.0ccc.cccd and lists its own device ID, port ID, time-out value, and a bunch of other parameters. When a switch receives this UDLD frame, it does two things: it stores and caches this information from the neighbor, and it echoes the same device ID and port ID it just received in the UDLD hello back towards the originating switch.

When the originating switch sees the UDLD frame come in with its own device ID and port ID, it knows a UDLD neighbor exists out of the interface. These multicast hellos are used to build and maintain the neighbor relationship, and are expected to be received before the time-out interval expires in order to keep the neighbor alive from a UDLD perspective.

So in my topology, when the fiber is broken on the Sw3 Tx port, UDLD identifies that we are no longer seeing a UDLD frame back in on the Gi0/1 interface (one that would normally list Sw3's device ID and port ID). When the UDLD time-out period expires, the switch transmits eight UDLD frames, one per second, and if no reply is received the port goes into err-disabled. This is the default action of aggressive mode.

In normal mode, the port just goes into the unknown state, which is designed for an “informational purpose”. In reality, that’s just useless, so use aggressive mode to prevent loops.
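Here's how that might be configured (a minimal sketch; Gi0/1 is the example fiber uplink and the err-disable recovery at the end is optional):

! Enable UDLD aggressive mode globally on all fiber-optic ports
Sw2(config)#udld aggressive
! Or enable it per interface
Sw2(config)#interface GigabitEthernet0/1
Sw2(config-if)#udld port aggressive
! Optionally allow UDLD err-disabled ports to recover automatically
Sw2(config)#errdisable recovery cause udld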

The key difference between UDLD and loop guard, then, is that UDLD protects against miswiring of your fiber ports, or a physical wiring problem that would cause upper-layer protocols like spanning-tree to break. Note, though, that UDLD is not part of spanning-tree, nor does it play any part in the spanning-tree topology.

It is merely there as a helper for spanning-tree because spanning-tree is unable to identify a fault at layer 1 like this that would cause a loop in the network. Now loop guard is a spanning-tree optimisation and its function is to stop root or ALTN ports transitioning into the designated/forwarding state. 

A lot of the time loop guard is going to kick in when there is a physical layer problem, as I showed in my example, but it can also protect against spanning-tree misconfiguration, such as a badly placed BPDU filter. For example, let's say someone accidentally went to the Gi0/1 interface on Sw2 and configured #spanning-tree bpdufilter enable.

The port would neither send nor receive BPDUs, and it would become designated and cause a loop. If loop guard was pre-configured on the port, it would just go into the loop-inconsistent state and be blocked. UDLD would be none the wiser, but loop guard would catch this problem.

The recommended best practice is to use both UDLD and loop guard together. It’s also recommended to make sure that you tune your UDLD timers to detect a layer 1 problem faster than spanning-tree can transition a port into designated/forwarding state.
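For example, the UDLD hello interval can be lowered from its default of 15 seconds so that a layer 1 failure is detected well before STP would move a port to the forwarding state (a minimal sketch; 7 seconds is just an illustrative value, tune it against your own STP timers):

! Send UDLD hellos every 7 seconds instead of the default 15
Sw2(config)#udld message time 7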
