Monday, February 26, 2018

Short Note on BGP Disable Connected Check

Route XP

Today I am going to talk about BGP disable-connected-check. Before we start, you should know that BGP is an exterior routing protocol; if you are already familiar with BGP, you will easily be able to relate to this article.

Let's talk about BGP disable-connected-check in detail.

The neighbor disable-connected-check command is used to disable the connection verification process for eBGP peering sessions that are reachable by a single hop but are configured on a loopback interface.

Disable-connected-check enables a directly connected eBGP neighbor to peer using a loopback address without adjusting the default TTL of 1. In effect, the loopback IP is not counted as an extra hop to reach the neighbor. Because the TTL is not adjusted, the neighbor must still be only one router away.

Any further, and the TTL will prevent the session from establishing. The difference with ebgp-multihop is that it lets you specify how many hops away a neighbor is allowed to be; there you are actually adjusting the TTL.

Fig 1.1- BGP disable connected Check
In the diagram above, I’m going to start by configuring BGP between the New Delhi router and the New York router, peering between loopbacks over their directly connected interfaces with disable-connected-check.

New Delhi(config)#router bgp 1
New Delhi(config-router)#neighbor 2.2.2.2 remote-as 2
New Delhi(config-router)#neighbor 2.2.2.2 update-source lo0
New Delhi(config-router)#neighbor 2.2.2.2 disable-connected-check
New Delhi(config-router)#ip route 2.2.2.2 255.255.255.255 12.12.12.2
New Delhi(config)#

*Feb 23 00:17:22.095: %BGP-5-ADJCHANGE: neighbor 2.2.2.2 Up

New York(config)#router bgp 2
New York(config-router)#neighbor 1.1.1.1 remote-as 1
New York(config-router)#neighbor 1.1.1.1 update-source lo0
New York(config-router)#neighbor 1.1.1.1 disable-connected-check
New York(config-router)#ip route 1.1.1.1 255.255.255.255 12.12.12.1
New York(config)#
*Feb 23 00:17:22.083: %BGP-5-ADJCHANGE: neighbor 1.1.1.1 Up

As you can see, the neighbors came straight up. If I now try to peer using the path New Delhi-Toronto-Sydney-New York, i.e. a path that is not directly connected, the neighbors will not establish a session because the TTL will still be set to 1, and that causes a reachability problem for the BGP packets. This is shown below.

New York(config)#int fa0/0
New York(config-if)#shut
New York(config-if)#no ip route 1.1.1.1 255.255.255.255 12.12.12.1
New York(config-if)#ip route 1.1.1.1 255.255.255.255 24.24.24.2
New York(config)#end
New Delhi(config)#int fa0/0
New Delhi(config-if)#shut
New Delhi(config-if)#no ip route 2.2.2.2 255.255.255.255 12.12.12.2
New Delhi(config-if)#ip route 2.2.2.2 255.255.255.255 13.13.13.2
New Delhi(config-if)#do ping 2.2.2.2 so lo0

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 60/66/80 ms
*Feb 23 00:25:23.763: %BGP-5-ADJCHANGE: neighbor 2.2.2.2 Down BGP Notification sent

New Delhi(config-if)#

*Feb 23 00:25:23.763: %BGP-3-NOTIFICATION: sent to neighbor 2.2.2.2 4/0 (hold time expired) 0 bytes

New Delhi#sh ip bgp sum
BGP router identifier 1.1.1.1, local AS number 1
BGP table version is 1, main routing table version 1
Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
2.2.2.2         4     2       8      12        0    0    0 00:04:33 Active

So because the neighbor is no longer one hop away (as far as disable-connected-check is concerned), the BGP packets no longer reach the peer, the hold time expires, a notification is sent, and the session drops. Note that the ping itself still succeeds because ICMP is not limited to a TTL of 1. However, if I use ebgp-multihop instead of disable-connected-check, the session will form (because we increased the TTL). This is shown below.

New Delhi(config)#router bgp 1
New Delhi(config-router)#no neighbor 2.2.2.2 disable-connected-check
New Delhi(config-router)#neighbor 2.2.2.2 ebgp-multihop 3
New York(config)#router bgp 2
New York(config-router)#no neighbor 1.1.1.1 disable-connected-check
New York(config-router)#neighbor 1.1.1.1 ebgp-multihop 3

*Mar 1 00:41:45.159: %BGP-5-ADJCHANGE: neighbor 1.1.1.1 Up

New York#sh ip bgp sum
BGP router identifier 2.2.2.2, local AS number 2
BGP table version is 1, main routing table version 1
Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
1.1.1.1         4     1      27      31        1    0    0 00:15:02      0

In conclusion, if you want to use the disable-connected-check feature, ensure the neighbor is directly connected. Otherwise, you need to use ebgp-multihop or ttl-security to establish the session.
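For completeness, ttl-security (GTSM, RFC 5082) approaches the same problem from the other direction: the router sends BGP packets with a TTL of 255 and accepts them only if the received TTL proves the peer is within the configured hop count. Below is a minimal sketch for the topology above, not taken from the lab output; note that ttl-security and ebgp-multihop are mutually exclusive, so the multihop setting is removed first.

New Delhi(config)#router bgp 1
New Delhi(config-router)#no neighbor 2.2.2.2 ebgp-multihop 3
New Delhi(config-router)#neighbor 2.2.2.2 ttl-security hops 3

New York(config)#router bgp 2
New York(config-router)#no neighbor 1.1.1.1 ebgp-multihop 3
New York(config-router)#neighbor 1.1.1.1 ttl-security hops 3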


Thursday, February 22, 2018

Understanding 802.1X

Route XP

Today I am going to talk about 802.1X, which is an authentication protocol for the Local Area Network. We will cover 802.1X and EAP over LAN (EAPOL) as well.

What is the 802.1X protocol?
802.1X is a specification that defines the transport of EAP (Extensible Authentication Protocol) over LAN, also known as EAPOL. EAP is an authentication framework which supports multiple authentication methods; it is defined in RFC 3748. 802.1X defines three roles:
  • Supplicant: A device (usually a workstation) that requests access to the LAN and switch services. The workstation running the IEEE 802.1X-compliant client software is called the Supplicant.
  • Authenticator: A device (a switch or a wireless access point) that controls the physical access to the network based on the authentication status of the Supplicant. The Authenticator requests the identity from the Supplicant, verifies that information with the Authentication Server and relays the response to the Supplicant. The Authenticator includes the RADIUS Client. The EAP messages are encapsulated and decapsulated by the Authenticator while interacting with the Authentication Server.
  • Authentication Server: A device that performs the actual authentication of the Supplicant. The Authentication Server validates the identity of the Supplicant and notifies the Authenticator whether the Supplicant is allowed to use the LAN and switch services.
The type of EAP method used will be decided between the Supplicant and the Authentication Server. Some of these methods are- LEAP, EAP-TLS, EAP-MD5, EAP-FAST, EAP-GTC, PEAP, etc.

EAP over LAN (EAPOL)
EAPOL is a method to transport EAP packets between a Supplicant and an Authenticator directly over the LAN MAC service (both wired and wireless). There are five types of EAPOL messages; not all of them carry EAP messages, as some are used purely for administrative tasks:

  • EAPOL-Start: When the Supplicant first connects to the LAN, it does not know the MAC address of the Authenticator (if any). By sending the EAPOL-Start message to a multicast group, the Supplicant can find out if there is any Authenticator present.
  • EAPOL-Key: Using this message type, the Authenticator sends encryption (and other) keys to the Supplicant once it has decided to admit it to the network. 
  • EAPOL-Packet: This EAPOL frame is used to send actual EAP messages. It is simply a container to send EAP message across LAN.
  • EAPOL-Logoff: This message indicates that the Supplicant wishes to be disconnected from the network. 
  • EAPOL-Encapsulated-ASF-Alert: This is provided for use by Alert Standard Forum (ASF) to allow alerts to be forwarded through a port that is in Unauthorized state.
All EAPOL frames have an EtherType of 0x888E.

Authentication Process and Message Exchange
During bootup, if the Supplicant does not receive an EAP-Request/Identity message from the Authenticator, the Supplicant initiates authentication by sending the EAPOL-Start frame, which prompts the Authenticator to request the Supplicant's identity.

If the Authenticator port connected to the Supplicant is not configured with the dot1x port-control auto command, the Authenticator will not allow any EAPOL frames to pass through it and the port will remain in the Unauthorized state.
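As a rough illustration (the interface, RADIUS server address, and shared key here are hypothetical), a classic Cisco IOS authenticator configuration might look like this:

Switch(config)#aaa new-model
Switch(config)#aaa authentication dot1x default group radius
Switch(config)#radius-server host 10.1.1.100 key RadiusKey123
Switch(config)#dot1x system-auth-control
Switch(config)#interface FastEthernet0/1
Switch(config-if)#switchport mode access
Switch(config-if)#dot1x pae authenticator
Switch(config-if)#dot1x port-control auto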

The Supplicant and the Authenticator begin the conversation by negotiating the use of EAP. Once EAP is negotiated, the Authenticator sends an EAP-Request/Identity message to the Supplicant. The Supplicant supplies the EAP-Response/Identity message indicating to the Authenticator that it should proceed with authentication.

The Authenticator acts as a pass-through and encapsulates the EAP-Response within an EAP-message attribute sent to the Authentication Server (RADIUS Server) within a RADIUS Access-Request message.

On receiving an Access-Request message, the RADIUS server responds with an Access-Challenge message containing an EAP-Message attribute. If the RADIUS server does not support EAP, it sends an Access-Reject message.

The Authenticator receives the Access-Challenge message, decapsulates the packet, and passes it on to the Supplicant as an EAP-Request/Auth message. The Supplicant responds with an EAP-Response/Auth message to the Authenticator. The Authenticator encapsulates it within an Access-Request packet containing EAP-Message attributes and passes it on to the RADIUS Server. The RADIUS Server decapsulates the packet and obtains the EAP-Message attribute. If authentication succeeds, it responds with an Access-Accept packet. The Authenticator decapsulates this and forwards the EAP-Success message to the Supplicant.

The authentication process at this stage is completed and the port state changes to Authorized. The port state changes to Unauthorized when the link state on the port changes from UP to DOWN, or, the Authenticator receives an EAPOL-Logoff message.
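On a Cisco IOS authenticator, the resulting port status can be verified with a show command (a small sketch; the interface name is hypothetical):

Switch#show dot1x interface FastEthernet0/1 details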

Sunday, February 18, 2018

WAN Optimizer: Riverbed SteelHead CX

Route XP
Today I am going to talk about WAN optimizers; I will take the OEM Riverbed and their product, the SteelHead CX. Let's talk about the SteelHead CX in detail.

SteelHead CX brings sophisticated optimization to hybrid networks. SteelHead software promises accelerated performance throughout the complete network, as though the applications were running locally. While many business-critical applications are moving to public clouds as SaaS, others, for compliance reasons, remain on premises in corporate data centers or on private clouds. SteelHead CX is available in a variety of form factors and models, including appliances, virtual appliances, and cloud instances.

"Optimisation solution that accelerates the transfer of data and applications over your hybrid network"


With SteelHead CX you get:
  • Subscription-based cloud solution that accelerates 90% of major cloud providers and has been certified for Microsoft Azure, and VMware vCloud Air
  • Multiple hybrid network optimization techniques, including data center consolidation, data reduction and application streamlining
  • Support for up to 1 million connections through an intelligent and scale-as-you-grow performance architecture
  • IT control with quality of service (QoS), path selection and secure transport features
  • Increased visibility with end-user monitoring for all optimized traffic on premise - now including optimized web and SaaS applications
  • Dynamic selection of the best application path based on network availability
  • Streamlined QoS configuration with per-site exceptions as needed, using centralized, business intent-based policies and application grouping
Fig 1.1 WAN Optimizer: Riverbed Steel Head CX-NB


SteelHead CX:
  • Delivers application acceleration in every type of cloud environment, including the majority of public cloud environments with support for Amazon Web Services (AWS), Microsoft Azure (certified), and VMware ESX-based clouds and vCloud Air (certified)
  • Ensures the best performance for the largest number of applications on premise, in the cloud and SaaS
  • Increases application and data transfer performance up to 100x
  • Reduces bandwidth utilization by up to 99%, thus deferring costly network bandwidth upgrades
  • Secures all traffic between SteelHead solutions across private and Internet links with standards-based encryption
  • Increases throughput and the number of connections in a single box by up to 50%
  • Accelerates both replication and backup from the branch — or between data centers — by 10x or more
  • Eliminates the need for additional bandwidth or longer backup windows

Key Benefits

Increase performance up to 100x
  • Determine the optimal path for each application to speed delivery across the WAN and other networks
  • Reduce response times to make deploying virtual servers, desktops, and applications faster and easier than ever
Reduce costs and improve productivity
  • Reduce bandwidth utilization by up to 60-95%, thus deferring costly network bandwidth upgrades
  • Support up to one million connections
  • Increase throughput and number of connections in a single box by up to 50%
  • Balance traffic across private and public links
Gain visibility and control
  • Deploy in minutes without changes to applications, users, routers, or other IT infrastructure
  • Automatically discover and report on applications in use across your organization
  • Perform ongoing management through the powerful web-based and command line interfaces, including in-depth reporting and NetFlow exporting
  • Improve data protection efforts with a faster WAN. Eliminate your remote backup infrastructure and improve replication up to 10 times and more, while reducing the cost and complexity of your WAN infrastructure

SteelHead CX appliance Features

Multi-tiered optimization
  • Includes the Riverbed Optimization System (RiOS), which delivers LAN-like performance across the WAN
  • Accelerates all TCP applications to increase performance up to 100x
  • Combines data reduction, as well as TCP, UDP, and application-level protocol optimization across the WAN for an unparalleled multi-tiered optimization strategy
Support for large-scale deployments
  • Supports up to one million connections through an intelligent and scale-as-you-grow performance architecture
  • Improves ROI with up to 50% more connections and throughput from a single appliance
Faster, more cost-effective disaster recovery
  • Accelerates both replication and backup from the branch — or between data centers — by 10x or more
  • Eliminates the need for additional bandwidth or longer backup windows
Industry-leading performance
  • Features the highest-capacity all SSD-based WAN optimization controller
  • Includes a full range of SSD-based appliances ideal for large branch offices and data centers
Built-in visibility and packet capture
  • Turns on the embedded Cascade Shark to gather packet and flow data from the branch — with no additional instrumentation required
  • Provides remote-site troubleshooting and application-level visibility without affecting core WAN optimization — at no additional cost
Riverbed SteelHead CX Models and specifications:-

Branch Office and Mid-size Offices:-

Fig 1.2 WAN Optimizer: Riverbed Steel Head CX-NB


Large and Data-center Offices:-
Fig 1.3 WAN Optimizer: Riverbed Steel Head CX-NB


We will discuss each model in detail in a later post.

Thursday, February 15, 2018

Cisco ASA CX 5500-X Series

Route XP
The Cisco ASA CX 5500-X Series firewalls combine proven stateful inspection firewall features with the ASA CX Context-Aware suite of next-generation firewall services for networks of all sizes: small and midsize businesses with one or more locations, large enterprises, service providers, and mission-critical data centers. The Cisco ASA CX firewalls deliver:

  • Scalable performance
  • Industry-leading service flexibility
  • Modular scalability
  • Feature extensibility
  • Low deployment and operational costs


Fig 1.1 -Cisco ASA CX 5500-X Series

Features and Benefits
Available in a wide range of sizes, Cisco ASA CX models provide the same level of security that protects the networks of some of the largest and most security-conscious companies in the world. They also provide Cisco ASA CX series next-generation firewall services, which include Cisco Application Visibility and Control (AVC), web security, botnet filtering, and intrusion prevention, so you can add these security features to new applications and devices in your network.
Cisco ASA CX 5500-X Series Next-Generation Firewalls for small offices and branch locations protect critical assets in several ways:
  • Exceptional next-generation firewall services provide the visibility and detailed control that your enterprise needs to safely take advantage of new applications and devices.
  • Cisco AVC controls specific behaviors within allowed micro-applications.
  • Cisco Web Security Essentials (WSE) restricts web and web application use based on the reputation of a site.
  • Broad and deep network security through an array of integrated cloud- and software-based next-generation firewall services is backed by Cisco Security Intelligence Operations (SIO).
  • A highly effective intrusion prevention system (IPS) is provided with Cisco Global Correlation.
  • A high-performance VPN and always-on remote access are included.
  • Additional security services can be implemented quickly and easily in response to changing needs.

Fig 1.2 -Cisco ASA CX 5500-X Series

Cisco ASA CX 5500-X Models
The Cisco ASA 5512-X, 5515-X, 5525-X, 5545-X, and 5555-X CX Series Adaptive Security Appliances combine the most widely deployed stateful inspection firewall in the industry with a comprehensive suite of next-generation network security services for comprehensive security without compromise. They provide multiple security services and redundant power supplies and support consistent security enforcement throughout your organization. 

In addition to comprehensive stateful inspection firewall capabilities, optional features include integrated cloud- and software-based security services, Cisco AVC, Cisco WSE, Cisco Cloud Web Security (CWS), and IPS. 

These models vary in their performance and throughput capabilities and in the services and number of users that can be supported by each model. Depending on the customer requirements and performance needs, these firewalls can be deployed at small office, Internet edge, and data center locations.

This ASA CX series of next-generation firewalls is built on the same proven security platform as the rest of the Cisco ASA family of firewalls and delivers exceptional application visibility and control along with superior performance and operational efficiency. These firewalls provide next-generation services that make it possible to take advantage of new applications and devices without compromising security. Unlike other firewalls, the Cisco ASA 5500-X Series keeps pace with rapidly evolving needs by offering end-to-end network intelligence gained by combining the visibility of local traffic with in-depth global network intelligence.

Using Cisco ASA Software Release 9.0 and later, customers can combine up to 16 Cisco ASA 5585-X firewall modules in a single cluster for up to 640 Gbps of throughput, 2 million connections per second, and more than 100 million concurrent connections. This “pay as you grow” model enables organizations to purchase what they need today and dynamically add more when their performance needs grow. To protect high-performance data centers from internal and external threats, the cluster can be augmented by adding IPS modules.


Clustering Technology with the 5585X
Cisco ASA software clustering delivers a consistent scaling factor, irrespective of the number of units in the cluster, for a linear and predictable increase in performance. Complexity is reduced, as no changes are required to existing Layer 2 and Layer 3 networks. Support for data center designs based on the Cisco Catalyst 6500 Series Virtual Switching System (VSS) and the Cisco virtual Port Channel (vPC) as well as the Link Aggregation Control Protocol (LACP) provides high availability (HA) with better network integration.

For operational efficiency, Cisco ASA clusters are easy to manage and troubleshoot. Policies pushed to the master node are replicated across all the units within the cluster. The health, performance, and capacity statistics of the entire cluster, as well as individual units within the cluster, can be assessed from a single management console. Hitless software upgrades are supported for ease of device updates.

Clustering supports HA in both active/active and active/passive modes. All units in the cluster actively pass traffic, and all connection information is replicated to at least one other unit in the cluster to support N+1 HA. In addition, single and multiple contexts are supported, along with routed and transparent modes. A single configuration is maintained across all units in the cluster using automatic configuration sync. Clusterwide statistics are provided to track resource usage.
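As a minimal sketch (the group name, unit name, port-channel, and addressing here are hypothetical, not from a Cisco example), bootstrapping one unit into an ASA cluster looks roughly like this:

ciscoasa(config)# cluster group DC-CLUSTER
ciscoasa(cfg-cluster)# local-unit unit-1
ciscoasa(cfg-cluster)# cluster-interface Port-channel1 ip 192.168.1.1 255.255.255.0
ciscoasa(cfg-cluster)# priority 1
ciscoasa(cfg-cluster)# enable

The cluster control link (Port-channel1 above) must already exist and be dedicated to cluster traffic; policies configured on the master are then replicated to every unit automatically.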


Cisco ACI- Spine-Leaf Architecture

Route XP
Today I am going to talk about the Cisco ACI architecture, Cisco's new technology architecture in the data center space.

The new approach in the data center comes in the form of the spine-leaf topology, where east-west traffic propagates over equidistant paths between any two endpoints.

How is the spine-leaf topology described? Let's have a look below.

Starting with the Spine-Leaf Topology:-
Spine-leaf topologies are based on the Clos network architecture. The term originates from Charles Clos at Bell Laboratories, who published a paper in 1953 describing a mathematical theory of a multipath, non-blocking, multistage network topology for switching telephone calls.

Thought-Process:-
These days, Clos's original design ideas are applied to the modern spine-leaf topology. Spine-leaf is typically deployed as two layers: spines (like an aggregation layer) and leaves (like an access layer). Spine-leaf topologies provide high-bandwidth, low-latency, non-blocking server-to-server connectivity.


Fig 1.1 Spine-Leaf Structure (Networks-Baseline )

What makes the difference:-
Leaf (access) switches are what give devices entry to the fabric (the network of spine and leaf switches) and are generally deployed at the top of the rack. Typically, devices connect to the leaf switches. Devices may include servers, Layer 4-7 services (firewalls and load balancers), and WAN or Internet routers.

Leaf switches do not connect to other leaf switches (unless running vPC in standalone NX-OS mode). However, each leaf must connect to every spine in a full mesh. Some ports on the leaf are used for end devices (typically 10 Gigabit), and some ports are used for the spine connections (typically 40 Gigabit).


Fig 1.2 Stages of the Leaf-Spine Network( Networks-baseline)

Spine Topology:-
Spine (aggregation) switches are used to connect to all leaf switches and are typically deployed at the end or middle of the row. Spine switches do not connect to other spine switches. Spines serve as the backbone interconnects for leaf switches. Typically, spines only connect to leaves, but when integrating a Cisco Nexus 9000 switch into an existing environment it is perfectly acceptable to connect other switches, services, or devices to the spines.

All devices connected to the fabric are an equal number of hops away from one another. This delivers predictable latency and high bandwidth between servers. The diagram below shows a simple two-tier design.


Fig 1.3 Design in Data-center ( Networks-Baseline )

How we achieve this:-
With leaf-spine configurations, all devices are exactly the same number of segments away from one another and see a predictable, consistent amount of delay, or latency, for traveling data. This is possible because of the new topology design, which has only two layers: the leaf layer and the spine layer.

The leaf layer consists of access switches that connect to devices like servers, firewalls, load balancers, and edge routers. The spine layer (made up of switches that perform routing) is the backbone of the network, where every leaf switch is interconnected with each and every spine switch.

Fig 1.4 Layer 3 Spine-Leaf Fabric


Equidistance
To allow for the predictable distance between devices in this two-layered design, dynamic Layer 3 routing is used to interconnect the layers. Dynamic routing allows the best path to be determined and altered based on responses to network change. This type of network is for data center architectures with a focus on "East-West" network traffic: traffic that travels within the data center itself, not outbound to a different site or network.

This new approach is a solution to the intrinsic limitations of Spanning Tree, with the ability to use other networking protocols and methodologies to achieve a dynamic network.
Fig 1.5 Core Fabric ( Networks-Baseline)

 Rest of the Story:-
With leaf-spine, the network uses Layer 3 routing. All routes are configured in an active state using Equal-Cost Multipath (ECMP). This allows all connections to be utilized at the same time while still remaining stable and avoiding loops within the network.
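As a hedged sketch (NX-OS-style syntax; the OSPF process name, interfaces, and addressing are hypothetical), a leaf switch running a routed underlay with ECMP toward two spines might be configured along these lines:

leaf1(config)# feature ospf
leaf1(config)# router ospf UNDERLAY
leaf1(config-router)# maximum-paths 4
leaf1(config)# interface Ethernet1/49
leaf1(config-if)# no switchport
leaf1(config-if)# ip address 10.0.1.1/30
leaf1(config-if)# ip router ospf UNDERLAY area 0.0.0.0
leaf1(config)# interface Ethernet1/50
leaf1(config-if)# no switchport
leaf1(config-if)# ip address 10.0.2.1/30
leaf1(config-if)# ip router ospf UNDERLAY area 0.0.0.0

With equal-cost OSPF routes through both uplinks, traffic to any other leaf is hashed across both spines.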

With traditional Layer 2 switching protocols like Spanning Tree on three-tiered networks, STP must be configured correctly on all devices, and all of the assumptions that the Spanning Tree Protocol (STP) relies on must be taken into account (one of the easy errors to make when configuring STP is mislabeling device priorities, which can lead to an inefficient path setup). The removal of STP between the access and aggregation layers in favor of Layer 3 routing results in a far more stable environment.

Another gain is the ease of adding additional hardware and capacity. When oversubscription of links occurs (meaning more traffic is generated than can be aggregated onto the active links at one time), the ability to expand capacity is simple: an additional spine switch can be added and uplinks extended to every leaf switch, adding interlayer bandwidth and reducing the oversubscription, as the worked numbers below illustrate.
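To put rough (hypothetical) numbers on it: a leaf with 48 x 10 Gigabit host-facing ports carries up to 480 Gbps southbound, and with 4 x 40 Gigabit uplinks it has 160 Gbps northbound, for a 3:1 oversubscription ratio (480:160). Adding two more spines, and therefore two more 40 Gigabit uplinks per leaf, raises the uplink capacity to 240 Gbps and reduces the ratio to 2:1 (480:240).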

When device port capacity becomes an issue, a new leaf switch can be added by connecting it to every spine and adding the network configuration to the switch. This ease of expansion optimizes the IT department's process of scaling the network without managing or disrupting the Layer 2 switching protocols.

Leaf-Spine Concerns:

The principal drawback comes from the use of Layer 3 routing, which eliminates the spanning of VLANs (virtual LANs) across the network. VLANs in a leaf-spine network are localized to each individual leaf switch; any VLAN segments that are left on a leaf switch are not reachable by the other leaves. This can create problems in scenarios such as guest virtual machine mobility within a data center.

Leaf-Spine Cases:
Web-scale applications, where server location within the network is static, benefit from an implementation of leaf-spine. The use of Layer 3 routing between layers of the architecture does not hinder web-scale applications because they do not require server mobility. The removal of the Spanning Tree Protocol (STP) results in more stable and reliable network performance for East-West traffic flows. Scalability of the architecture is likewise improved.

Enterprise applications leveraging mobile virtual machines (e.g., vMotion) create a problem when a server needs to be supportable anywhere within the data center. The use of Layer 3 routing and the lack of VLANs extending between leaves breaks this requirement.

To work around this problem, a solution such as Software-Defined Networking (SDN) can be employed, which creates a virtual Layer 2 network on top of the leaf-spine network. This allows servers to move around within the environment with impunity, at no detriment to the East-West performance, scalability, and stability attributes of a leaf-spine network topology.

Monday, February 5, 2018

Introduction to WAN optimization

Route XP


An excellent WAN optimization solution will let you prioritize traffic and guarantee a certain amount of available bandwidth for mission-critical applications. Complete WAN optimization solutions allow a business to do much more than simply queue the bad traffic. They can block unwanted (inbound and outbound) traffic, permit it only at certain times during the day, give priority to certain hosts, and enforce many other related policies. They optimize the real traffic as well, providing lower latency and higher throughput for the most essential applications.

WAN optimization, also called WAN acceleration, is the category of technologies and techniques used to maximize the efficiency of data flow across a wide area network (WAN). In an enterprise WAN, the goal of optimization is to increase the speed of access to critical applications and information.

Fig 1.1-WAN optimization (NB)

WAN performance issues
Let’s start by surveying two major causes of WAN performance problems:
  • Latency: This is the back-and-forth time resulting from chatty applications and protocols, made worse by distance over the WAN. One server sends packets, asks if the other server received it, the other server answers, and back and forth they go. This type of repeated communications can happen 2,000 to 3,000 times just to send a single 60MB Microsoft PowerPoint file. A somewhat simple transaction can introduce latency from 20 ms to 1,200 ms per single file transaction.
  • TCP window size: Adding more bandwidth won’t necessarily improve WAN performance. Your TCP window size limits throughput for each packet transmission. While more bandwidth may give you a bigger overall pipe to handle more transactions, each specific transaction can only go through a smaller pipe, and that often slows application performance over the WAN.
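To put the TCP window limit in numbers (a simplified, hypothetical calculation): with a default 64 KB TCP window and 100 ms of round-trip latency, a single TCP session can carry at most about 64 KB per round trip, i.e. roughly 524,288 bits / 0.1 s ≈ 5.2 Mbps, no matter how much bandwidth the link itself provides.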
Fig 1.2-WAN optimization (NB)

Data streamlining

  • Don’t resend redundant data: A process known as data de-duplication removes bytes from the WAN. Data that is accessed repeatedly by users over the WAN is not repeatedly resent. Instead, small 16-byte references are sent to let SteelHead know that this data has already been sent and can be reassembled locally. Once sent, this data never needs to be sent again.
  • Scalable data referencing looks at data packets: Let’s say a user downloads a document from a file server. At the sending and receiving locations, SteelHead sees the file, breaks the document into packets, and stores them. Then the user modifies the document and emails it back to 10 colleagues at the file’s original location. In this case the only data sent over the WAN are the small changes made to the document and the 16-byte references that tell the SteelHead device at the other end how to reassemble the document.
  • SteelHead cares about data: Data is data to SteelHead, no matter what format or application it comes from. That means far less of it needs to be sent across the WAN. As an example, imagine how many times the words “the” and “a” appear in files from various applications. SteelHead doesn’t care; these bytes look the same and therefore need not be resent. This type of de-duplication can remove 65–95% of bytes from being transmitted over the WAN.
Below is an example of data streamlining using Riverbed SteelHead devices.

Fig 1.3-WAN optimization (NB)

Transport streamlining

  • The fastest round trip is the one you never make: Transport streamlining makes TCP more efficient, which means fewer round trips and data per trip. For example, traditional TCP does what’s known as a “slow start process,” where it sends information in small chunks and keeps sending increasingly larger chunks until the receiving server can’t handle the chunk size. Then it starts again back at square one and repeats the process. Transport streamlining avoids the restart and just looks for the optimal packet size and sends packets only in that size.
  • Combine data streamlining with transport streamlining for staggering WAN efficiency: Thanks to transport streamlining you’ll be making fewer round trips (up to 98% reduction), and you’ll be sending more data per trip. This adds up to a much higher throughput. But it’s bigger than that because a single packet can virtually carry megabytes of data by repacking the payload with 16-byte references as opposed to repetitive data.
Application streamlining


Lastly, application streamlining is specially tuned for a growing list of application protocols including CIFS, HTTP, HTTPS, MAPI, NFS, and SQL. These specific modules understand the chattiness of each protocol and work to keep the conversation on the LAN, where chattiness is not a factor and therefore creates no latency, before making transmissions over the WAN.

Friday, February 2, 2018

MPLS: Control Mode (Ordered LSP Control and Independent)

Route XP

Today I am going to talk about the MPLS control modes, which consist of ordered LSP control and independent LSP control. Most of you already know about them, but some of you may be just getting started in the MPLS or service provider domain.

Let's talk about the MPLS control modes one by one.


Control Modes

The distinction between the ordered and independent control modes is, in practice, likely to be a lot less than people have made it out to be in theory. With specific exceptions (for instance, traffic engineering tunnels, discussed later), choice of control mode is local rather than network wide. In addition, certain behaviors associated with a strict interpretation of control mode can result in pathological misbehavior within the network.

Ordered LSP Control

In ordered control mode, LSP setup is initiated at one point and propagates from there toward a termination point. In the case where LSP setup is initiated at an ingress, label requests are propagated all the way to an egress; label mappings are then returned until a label mapping arrives at the ingress. In the case where LSP setup is initiated at an egress, label mappings are propagated all the way to ingress points. 


Fig 1.1- MPLS Push POP
A feature of ordered control is that an LSP is not completely set up until the associated messages have propagated from end to end—hence, data is not sent on the LSP until it is known to be loop free. A severe disadvantage shows up in a purist implementation of ordered control mode in the following case. 

Assume that an LSR is the egress for a (potentially large) set of LSPs. This LSR now discovers a new peer that is downstream of it with respect to some or all of the set of LSPs for which the LSR is the current egress. If the local LSR simply adds the new LSR as an egress without somehow ascertaining that this LSR does not carry the LSP into a merge point upstream of the local LSR, it may introduce a loop into an LSP assumed to be loop free. If, on the other hand, it withdraws all label mappings upstream, it may produce a significant network outage and will result in a lot of LSP control activity, both of which might be unnecessary. 

For example, in the case where a downstream routing peer has just had MPLS enabled but is otherwise the same as it was previously, it is unlikely that forwarding will actually change.


This mode uses a simple rule: only bind a label to a FEC if the router is the egress LSR, or if it has received a label binding for that FEC from the next-hop router. The diagram below illustrates this process.
Although all the other routers may have an entry in the RIB for 10.1.1.1/32, the only router that is allowed to bind a label to it is the egress LSR. Only when this router distributes this binding through LDP or TDP towards LSR3 is LSR3 allowed to build its own label binding for this particular FEC (and that is only if the egress LSR is the actual next hop, i.e. the best path, for this FEC). LSR3 will then distribute its own label binding for this FEC towards LSR2. LSR2 will then bind a label for this FEC so long as LSR3 is the next hop for it. The process continues until it reaches the ingress LSR.
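As a rough illustration (the hostname and interface here are hypothetical), label switching and LDP are enabled per interface on Cisco IOS, and the learned and local bindings for the 10.1.1.1/32 FEC can then be inspected:

LSR2(config)#mpls label protocol ldp
LSR2(config)#interface GigabitEthernet0/0
LSR2(config-if)#mpls ip
LSR2(config-if)#end
LSR2#show mpls ldp bindings 10.1.1.1 32

Note that Cisco IOS LDP actually uses independent control by default; the ordered behavior described above is how protocols such as RSVP-TE, or LDP on some other platforms, build their LSPs.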

Independent LSP Control

Independent control mode is the mode in use when an LSR
  • Has reason to believe that it will get label mappings from downstream peers for a specific FEC
  • Distributes labels for that FEC to its upstream peers irrespective of whether it has received the expected label mappings from downstream 
In this case, the LSR sending the label mapping includes a hop count that reflects the fact that it is not the egress and has not received label mappings (directly or indirectly) from an LSR that is. The special hop-count value of zero (unknown hop count) is used to indicate this case.

Upstream LSRs may or may not start to use the label mappings thus provided. Using the LSP is probably not advisable, because the LSR providing the label mapping may elect to discard packets (while waiting to receive label mappings from downstream peers), and the LSP is not proven to be loop free (until a label mapping is propagated from downstream with a known hop count).


In effect, if an LSP is never used until a label mapping for the LSP containing a known hop count is received at the ingress to the LSP, the network is behaving as if ordered control were in use for all LSRs along the given LSP.

All about Cisco Catalyst 4500 Series

Route XP

Today I am going to talk about the Cisco Catalyst 4500 switch. The Cisco Catalyst 4500 is widely used in enterprise networks as an access switch, while in some small sites, such as branch or remote sites, it can be used as a core switch.

The Catalyst 4500 is an older platform with an excellent feature set; Cisco has since come up with the chassis-based Catalyst 9400 switch, which offers even more features.

Let's not talk about the Cisco Catalyst 9400 right now; let's talk about the Cisco Catalyst 4500 for now.

Cisco Catalyst 4500 Series Switches enable borderless unified wired and wireless networks, providing high performance, mobile, and secure user experiences through Layer 2-4 switching. Enabling security, mobility, application performance, video, and energy savings over your network infrastructure, the Cisco Catalyst 4500 Switch supports resiliency, virtualization, and automation, further improving the ease of network use. Cisco Catalyst 4500 Series Switches provide borderless performance, scalability, and services with reduced total cost of ownership (TCO) and superior investment protection. 

The Cisco Catalyst 4500 Switch delivers predictable and scalable high performance, with advanced dynamic quality of service (QoS) capabilities and configuration flexibility for deploying borderless networks. Integrated resiliency features in both hardware and software maximize network availability, helping to ensure workforce productivity, profitability, and customer success. Its centralized, innovative, and flexible system design helps ensure smooth migration to wire-speed IPv6 and 10 Gigabit Ethernet (GE). The forward and backward compatibility between generations of the Cisco Catalyst 4500 Series extends deployment life, providing exceptional investment protection while reducing TCO. The Cisco Catalyst 4500E Series is a high-performance, next-generation extension to the Cisco Catalyst 4500 Series. The new E-Series is composed of the Cisco Catalyst 4500E Series supervisor engines, E-Series line cards, and E-Series chassis, which are designed for a high-performance, mobile, and secure user experience with superior backward and forward compatibility, delivering exceptional investment protection for organizations of all sizes.

Cisco Catalyst 4500E Series Switches with Supervisor 8-E offer exceptional scalability and comprehensive features to meet current and future network demands. This unit:-

  • Delivers up to 928 Gbps system bandwidth and nonblocking 48 Gbps per slot
  • Has eight nonblocking 10 Gigabit Ethernet uplinks with a field-programmable gate array to support next-generation protocols
  • Can scale up to 384 Gigabit copper ports, 200 Gigabit fiber ports, or 104 10-Gigabit ports in non-virtual switching system (VSS) mode
  • Offers 116 Universal Power over Ethernet (UPOE) ports (60 W), 240 PoE+ ports (30 W), or 384 PoE ports (15 W) with a 9000 W power supply


Supervisor 8-E is compatible with all Cisco Catalyst 4500E chassis, line cards, and power supplies to help make full use of your investment. You also get:

  • Industry-leading investment protection with more than 10 years of backward compatibility for line cards
  • A software lifecycle of greater than three years with extended maintenance releases
  • Forward compatibility for new supervisors to easily deploy future innovations

Cisco Catalyst 4500-E Series and Classic Line Cards 

The Cisco Catalyst 4500 Series offers two classes of line cards: classic and E-Series. Classic line cards provide 6 gigabits of switching capacity per slot. E-Series line cards provide increased switching capacity per slot. This increase in per-slot switching capacity with the E-Series line cards requires the Cisco Catalyst 4500E Series chassis and the Cisco Catalyst 4500E Series Supervisor. 

Two types of E-Series line cards are available based on the per-slot switching capacity. E-Series line cards numbered 47xx operate at 48 gigabits per slot, while E-Series line cards numbered 46xx operate at 24 gigabits per slot. Classic line cards may be deployed in both classic and E-Series chassis with either classic Cisco Catalyst 4500 Series supervisor engines or with the Cisco Catalyst 4500E Series Supervisor Engine. 

With the E-Series supervisor engine, the per-slot switching capacity for classic line cards remains at 6 gigabits per slot. However, because of the centralized switching architecture of the Cisco Catalyst 4500, the classic line cards will adopt all of the new E-Series supervisor engine features such as eight queues per port, dynamic QoS, and hardware-based IPv6 routing. 

For more feature details, refer to the E-Series supervisor engine data sheet. Classic line cards and E-Series line cards may be mixed and matched within a Cisco Catalyst 4500E Series chassis with no performance degradation: classic line cards will operate at 6 gigabits per slot, and E-Series line cards operate at either 48 gigabits per slot or 24 gigabits per slot based on whether they belong to the 47xx or 46xx family of line cards. Table 1 summarizes the chassis and supervisor support for both classic and E-Series line cards.


Power over Ethernet on Cisco Catalyst 4500E 

The Cisco Catalyst 4500E Series offers line cards, power supplies, and accessories required to deploy and operate standards-based Power over Ethernet/Power over Ethernet Plus (PoE/PoEP) and Universal PoE (UPOE). PoE provides power over 100 m of standard unshielded twisted-pair (UTP) cables when an IEEE 802.3af/at-compliant or Cisco pre-standard powered device is attached to the PoE/PoEP and UPOE line-card port. 

Instead of requiring wall power, attached devices such as IP phones, wireless base stations, video cameras, and other IEEE-compliant appliances can use power provided from the Cisco Catalyst 4500 Series PoE/PoEP and UPOE line cards. 

For a regular DC (Direct Current) device that doesn't support PoE/PoEP natively, the Cisco Catalyst 4500 Series provides the UPOE Power Splitter, which enables a UPOE port to power a 12V DC device and another PoE/PoEP appliance. This capability gives network administrators centralized control over power and eliminates the need to install outlets in ceilings and other out-of-the-way places where a powered device can be installed. Table 2 shows the PoE options for Cisco Catalyst 4500 Series line cards. 

Although all references to “PoE/PoEP/UPOE,” “inline power,” and “voice” power supplies and line cards are synonymous, there are currently four versions: Cisco prestandard, IEEE 802.3af compliant, IEEE 802.3at compliant, and UPOE. Every Cisco Catalyst 4500 Series chassis and PoE power supply supports the IEEE 802.3af/at standard and the Cisco prestandard power implementation, helping ensure backward compatibility with existing devices powered by Cisco. UPOE line cards require an E-Series chassis. 

All IEEE 802.3af/at-compliant and UPOE line cards can distinguish an IEEE or Cisco prestandard powered device from an unpowered network interface card (NIC), helping ensure power is applied only when an appropriate device is connected.
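As a small hedged example (the interface is hypothetical), per-port PoE behavior on the Catalyst 4500 is controlled with the power inline command, and the resulting power allocation can be verified afterwards:

Switch(config)#interface GigabitEthernet2/1
Switch(config-if)#power inline auto
Switch(config-if)#end
Switch#show power inline GigabitEthernet2/1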

Cisco Catalyst 4500E Series and Classic Gigabit Ethernet Copper Line Cards

The Cisco Catalyst 4500E Series 48-port Gigabit Ethernet line cards provide high-performance 10/100/1000 switching. Two types of E-Series line cards are available, based on the per-slot bandwidth: 47xx line cards that drive 48 Gbps per slot, and 46xx line cards that drive 24 Gbps per slot. 

The Cisco Catalyst 4500 48-port 10/100/1000 PoEP E-Series 47xx line card provides standard IEEE 802.3at PoEP support on all 48 ports simultaneously. All 47xx series line cards support standard IEEE 802.1AE encryption and Cisco TrustSec™ in hardware. The Cisco Catalyst 4500 48-port 10/100/1000/multigigabit line cards are available in five versions: Data Only, PoE, PoEP, UPOE, and UPOE with multigigabit support for 802.11ac Wave 2.


