Network hardware knowledge – have a look under the hood

I have interviewed a lot of candidates for network engineering positions over the past few years [1]. The technical interviews can cover a broad range of topics, but one topic frequently causes friction with candidates: hardware knowledge.
Almost everyone who has worked for a large enterprise for a few years has interacted with a 6500 at some point, and ‘Cisco 6500’ frequently appears on resumes as a skill. However, when you ask people how to choose the right line card for a 6500, there has been many a pregnant pause.
I don’t expect people to know the chassis inside and out. But if you list it on your CV and don’t know that there are different series of line cards, with different backplane or bus connections, then you don’t really have 6500 platform knowledge. Lack of platform knowledge isn’t an absolute showstopper, but it helps me gauge your level of networking experience.
In fact, some people get quite annoyed with you for asking ‘irrelevant’ or ‘nerdy’ questions about networking hardware. In a few cases they are right: if your network never pushes a router past any of its many resource constraints, well… you may not need that hardware knowledge. But I would argue that if you’re not curious about which limits might eventually cause your network to keel over and die, then you may be in the wrong job. Besides, how can you properly choose the right hardware platform for a design if you don’t know its capabilities and its limits?
If you don’t start looking at the internals of your hardware (before you buy!) then I’m sorry to say that you will look like a complete tool a few times too many in your career. Remember that a large chassis-based router is a networking abstraction. Stuffed inside that large chassis is a complex network of interconnects and routing/switching components, each with its own limits and constraints. The complexity isn’t removed because it’s in a chassis; it’s just shrink-wrapped and hidden from you.
Naturally, resource constraints and limits still exist within the chassis. When you hit them it is always surprising, painful and time-consuming. I’ll share a few examples that have bitten me in the past to illustrate my point.

  • 4900M – This is a fixed/modular hybrid switch. The fixed module provides 8 x 10Gbps ports into an 80Gbps backplane. The modular slots (2 and 3) can each terminate 8 x 10Gbps interfaces, but those modules have only 40Gbps into the backplane, so the modular ports are 2:1 oversubscribed. Lesson learned: check your port-to-backplane oversubscription ratios (for all ports!) against your design requirements.
  • 6708 line card – Sometimes you need to dig deeper. The 6708 line card for the 6500 provides 8 x 10Gbps ports with a 40Gbps backplane connection, so we know this is a 2:1 oversubscribed line card. I want to deploy 4 x 10Gbps load-balancers at line rate. No problem, I’ve learned my lesson: I’ll just use the first four ports and leave the others empty. Nope! The ports are arranged internally into four port-groups of two ports, with each group sharing a 10Gbps connection to a fabric chip. Unless you pick your ports wisely, you could drop 50% of your 40Gbps of offered traffic on the floor. The point here is that you really need to look at the card layout to see the bottleneck: see Figure 21 for the ASIC layout. (There’s a rough sketch of this kind of check after the list.)
  • ASA 5550 – Maximising throughput. Getting good throughput across a firewall has been a big problem for many years. There was always a specific way to connect your ports to an ASA/PIX to maximise throughput, and most people found this out after their firewalls melted. To their credit, Cisco are now doing a good job of documenting the issue and telling people how to connect their ports properly in the quick start guides.
  • Cisco Nexus – F2 line cards need their own VDC. I had learned this from my Cisco account team, but it was also mentioned recently in the excellent Packet Pushers Episode 106 – Nexus buyers guide. If you’re doing a greenfield install this is no problem; just stick them in the default VDC. But if you were planning a gradual migration from M-series line cards, this is a real pain: you need to burn physical 10Gbps ports to interconnect VDCs on the Nexus. Again, if you know this upfront you’re golden. If you learn it when your hardware arrives, start hacking your design.

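If it helps to make the arithmetic concrete, here is a rough sketch (in Python) of the kind of pre-purchase sanity check I mean. The port counts and backplane figures are the ones quoted above; the 6708 port-group mapping in the sketch is an assumption made purely for illustration, so verify the real layout from the card documentation before relying on it.

from collections import Counter

def oversubscription(num_ports, port_speed_gbps, backplane_gbps):
    # Ratio of offered front-panel bandwidth to backplane capacity.
    return (num_ports * port_speed_gbps) / backplane_gbps

# 4900M modular slot: 8 x 10Gbps ports into 40Gbps of backplane -> 2:1
print(oversubscription(8, 10, 40))   # 2.0

# 6708: also 8 x 10Gbps into 40Gbps, but the ports sit in port-groups that
# share a path to the fabric. NOTE: this mapping is illustrative only.
PORT_GROUPS = {1: "A", 2: "A", 3: "B", 4: "B", 5: "C", 6: "C", 7: "D", 8: "D"}

def congested_groups(ports, line_rate_gbps=10, group_capacity_gbps=10):
    # Return the port-groups whose shared fabric path would be oversubscribed.
    demand = Counter()
    for p in ports:
        demand[PORT_GROUPS[p]] += line_rate_gbps
    return [g for g, gbps in demand.items() if gbps > group_capacity_gbps]

# 'Just use the first four ports' piles two line-rate flows onto each of two
# shared paths, so half of the offered 40Gbps can be dropped on the floor:
print(congested_groups([1, 2, 3, 4]))   # ['A', 'B']

# Spreading the load-balancers across the groups avoids the internal bottleneck:
print(congested_groups([1, 3, 5, 7]))   # []

Nothing clever is going on there; the point is simply that the headline data-sheet numbers only tell you half the story until you map your actual ports onto the internal layout.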
I’ve listed four examples here, but there’s no way to know every hardware limit and caveat in advance. The main take-away is that all hardware platforms have internal limits. You should be digging deep into data sheets and grilling your vendors about the limits of the hardware before you buy.

So… what hardware design or compatibility ‘features’ have you been stung by in the past? Please add them to the comments and help others avoid the pain 🙂
[1] I have to admit that I actually enjoy performing interviews. They take a lot of time away from project delivery, but interviews are a great way to keep your social and technical skills sharp. If you’re not interviewing your potential colleagues, you really should get involved.
