Data center cooling and large chassis
As a network engineer you need to be aware of the data-center environment where your chosen device will be deployed. A huge wedge of the cost of running a data center goes into keeping it cool, so preserving hot-aisle and cold-aisle airflow containment is a big deal for your data-center manager. But it's surprisingly easy to order hardware that messes with that airflow. You need to watch the context and read the fine print to avoid unneeded data-center headaches.
Nailing down the true speed of a 10GbE link can be tricky. For a start you need to define 'speed' and 'capacity'; Ivan Pepelnjak offers a nice summary in this post. Then there are little surprises. A former colleague of mine, Fred Westermark, first introduced me to the Ethernet interframe gap. I had never heard of it before and felt a bit cheated, to be honest. Since when do 'bits' need a rest? Pfff.
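To see what that interframe gap actually costs you, here's a back-of-the-envelope sketch. It assumes the standard per-frame overheads on the wire (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte interframe gap); the frame sizes chosen are just illustrative.

```python
# Back-of-the-envelope: how much of a 10GbE link's line rate actually
# carries Ethernet frames, once you account for the per-frame overhead
# that never shows up in your interface counters.
LINE_RATE_BPS = 10_000_000_000   # 10GbE line rate
PER_FRAME_OVERHEAD = 7 + 1 + 12  # preamble + SFD + interframe gap, bytes

def effective_rate_bps(frame_size_bytes: int) -> float:
    """Usable frame bandwidth at a given frame size, in bits/s."""
    return LINE_RATE_BPS * frame_size_bytes / (frame_size_bytes + PER_FRAME_OVERHEAD)

for size in (64, 512, 1518):
    rate = effective_rate_bps(size)
    print(f"{size:>5}-byte frames: {rate / 1e9:.2f} Gbit/s "
          f"({rate / LINE_RATE_BPS:.1%} of line rate)")
```

At minimum-size 64-byte frames the overhead eats roughly a quarter of the link; at 1518-byte frames it's barely over one percent. So whether your 'bits need a rest' matters depends heavily on your traffic mix.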
I recently read an article by Greg Ferro about twenty-percent growth. Greg makes the point that most network growth forecasts are grossly overoptimistic. However, my experience in the service-provider world is that 'the business' underestimates growth in most cases.
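Either way, the reason getting the rate wrong hurts so much is compounding. A quick sketch (the starting traffic level and growth rates below are hypothetical examples, not figures from Greg's article) shows how fast two plausible-sounding forecasts diverge:

```python
# Illustrative only: compound a starting traffic level at different
# assumed annual growth rates. The 20%/40% rates and 10 Gbit/s start
# are hypothetical, chosen to show how quickly forecasts diverge.
def project_traffic(start_gbps: float, annual_growth: float, years: int) -> float:
    """Traffic level after compounding `annual_growth` for `years` years."""
    return start_gbps * (1 + annual_growth) ** years

start = 10.0  # Gbit/s today (hypothetical)
for rate in (0.20, 0.40):
    print(f"{rate:.0%}/year for 3 years: "
          f"{project_traffic(start, rate, 3):.1f} Gbit/s")
```

After only three years a 40% assumption demands roughly 60% more capacity than a 20% one, which is the difference between a comfortable upgrade cycle and an emergency forklift.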
Network engineers have a fiscal responsibility not to gold-plate their network designs; network gear is just too damn expensive. But you can over-optimise for cost, too. It is incredibly frustrating to have to overhaul and scale up a network within a year of the initial deployment. The end result is additional capital cost, more engineering effort and the resultant opportunity costs.