Wired Enterprise Trends

It was most likely about four years ago that several of us at UltraVista were having a conversation, speculating on the lifespan of the wired network in an enterprise campus. We almost unanimously agreed it would all be driven by the apps. Well, thanks to carrier networks being what they are, content, cloud and application providers typically develop to the lowest-common-denominator bandwidth: the cellular mobile networks. Those are networks that struggle to pull off double-digit Mbps performance, yet we are demanding 1 Gbps to the desktop. The days of fat client-server apps being the bandwidth driver are trending down as SaaS and cloud-style application delivery rapidly grow. Is this the death of the wired enterprise network?

[Image: wired networks]

The traditional wired edge does not allow for oversubscription of the network; it is one port and one host. Wireless networking allows for oversubscription, since the air is a shared medium that all clients attach to. Don't let "shared medium" scare you like it used to scare me. 802.11ac has begun to address the wireless duplexing problem, and the 5 GHz spectrum has helped with the client density issues of the old 2.4 GHz band. Recently, wireless vendors have been demoing 600 Mbps client throughput on 802.11ac gear that is beginning to ship now. That performance will begin to tip the scale on the idea that we must wire every nook and cranny of the enterprise regardless of actual consumption.

Already Happening

If an enterprise starts thinking like a carrier, it will soon realize that one of the number-one ways service providers make money is oversubscription. Let's forget networks for a second and think about the corporate VM farm. It shows cost savings by taking a number of standalone physical boxes and collapsing them into one physical box. Let's say that physical server has 64 GB of memory and you deploy 50 virtual machines with up to 2 GB of RAM each. That works because we can effectively predict that each one of those virtual machines will not be using 100% of its allocated memory at all times. Now you can effectively manage your resource pools in a central pile through oversubscription and make sure you are actually using what you buy. Instead of 50 physical servers each using 5% of their resources (memory, compute, storage), the virtual infrastructure is able to allocate excess capacity and scale up and out centrally. In the enterprise that is cost avoidance; in a provider business model that is more revenue. The compute and storage world has tools that allow for this in an orchestrated fashion. Networking, not so much.
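To make the bet explicit, here is a minimal back-of-the-napkin sketch using the numbers above; the 25% average-usage figure is a hypothetical assumption for illustration, not a measurement.

```python
# Back-of-the-napkin memory oversubscription, using the numbers above:
# one 64 GB host running 50 VMs allocated up to 2 GB each.
physical_ram_gb = 64
vm_count = 50
ram_per_vm_gb = 2

allocated_gb = vm_count * ram_per_vm_gb           # 100 GB promised
oversub_ratio = allocated_gb / physical_ram_gb    # ~1.56:1

# The bet: real usage stays under physical capacity. The 25% average
# is a hypothetical figure for illustration, not a measurement.
avg_usage_fraction = 0.25
expected_usage_gb = allocated_gb * avg_usage_fraction   # 25 GB, well under 64 GB

print(f"{allocated_gb} GB allocated on {physical_ram_gb} GB physical "
      f"({oversub_ratio:.2f}:1 oversubscription)")
print(f"Expected steady-state usage: {expected_usage_gb:.0f} GB")
```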

Aligning with Business

Providers allocate and sell more than they have, banking on the odds that the customer will not actually use all of what they have purchased and leased. Providers do this fairly well on expensive long-haul links that are pricey to own and operate; in the enterprise we typically do a horrible job at this. We often build the biggest pipes we can get away with and life-cycle gear because the vendors come out with a new speed or EOL a switch. That's not good enough. We want our clients to start thinking in terms of blending the business in with IT. Avoiding intelligence at the access edge of the network allows for much longer life cycles. Some of our prominent clients leave some 3500XLs in production on purpose; 100 Mb PoE serves 99% of average workers just fine, and the laptop they are plugging in likely has around 100 Mbps of I/O off the board anyway. We have had to talk clients out of upgrading an EOL switch for no value, many times over. Internet content, while rapidly increasing, is often more people doing something rather than higher per-flow usage. Ten years ago employees did not research every topic or subject on the Internet; they do now. The other increase is the actual device count, which brings us to the meat.

Wireless Growth

What is driving speeds and Power over Ethernet (PoE) requirements on the edge? One thing: wireless. Client counts are growing almost 100% year over year, and in some places even that figure is low. Today's access points require 1 Gb uplinks, which then get oversubscribed. That will cost money, since the endpoint count is growing rapidly, and that cost will need to be offset. The port density in a communications closet should start decreasing, assuming BYOD and mobile clients continue to trend up as the primary work devices. Let's compare wired and wireless network topologies in the picture below.

[Image: wireless network vs. wired network]

The wired network on the left has a distribution switch at each department level. At the office level, there are a number of edge/access switches connecting the end-user computing devices through the wiring-closet infrastructure. These switches have 48 x 1 Gb ports south of the uplink, averaging 2 Mbps transmit and 7 Mbps receive, with a 30-day peak of 34 Mbps received. The oversubscription on a 1 Gb uplink is 48:1.

On the right side there is a wireless network, with just one edge/access switch. This is the 1 Gb uplink of a typical BDF (building distribution frame); there are 48 x 1 Gb ports south of it, averaging 10.5 Mbps transmit and 37 Mbps receive, with a 30-day peak of 240 Mbps transmitted. That is an oversubscription of probably 300:1 (a mix of 100/1000 Mb clients). Sounds crazy, but is it? Traffic is purely north-south, typically Internet bound, while ERP and fat-client email would stay local.
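For the skeptics, the same arithmetic as a quick sketch. The ratios and the peak are the figures quoted above; the point is that the observed peak, not the theoretical maximum, is what the uplink actually has to carry.

```python
# Oversubscription sanity check for the two closets described above.
UPLINK_MBPS = 1000          # both closets hang off a 1 Gb uplink

wired_ratio = (48 * 1000) / UPLINK_MBPS   # 48 x 1 Gb ports -> 48:1

# Wireless closet: roughly 300 clients on a mix of 100/1000 Mb,
# per the estimate above, so call it ~300:1.
wireless_ratio = 300

# What matters is the observed peak, not the theoretical maximum.
peak_mbps = 240                           # 30-day peak, transmitted
utilization = peak_mbps / UPLINK_MBPS     # 0.24 -> only 24% of the uplink

print(f"Wired {wired_ratio:.0f}:1, wireless ~{wireless_ratio}:1")
print(f"30-day wireless peak used {utilization:.0%} of the uplink")
```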

Part of the SDN Landscape

We at UltraVista fully anticipate this rapid increase in 802.11 clients, and we expect the growing demand for more flexible and OPEN networks, the battle cry of SDN, to begin consolidating the wireless and wired architectures into a controller-driven model. We need the same operational ease on the wired network that wireless operators have on the wireless network. We need the bump in the wire through some form of centralized control plane, which can still be deployed in a modular fashion for scale, just as wireless networks are today. Go look for a Gartner Magic Quadrant on wireless; guess what, there isn't one any more. It has been replaced by the "Magic Quadrant for the Wired and Wireless LAN Access Infrastructure". The end is near for the two different worlds, and hopefully the two different technologies, of wired and wireless networking. Wireless manufacturers are beginning to get into the wired business; that alone should be a pretty good indicator. So as controllers start dropping in as SoCs on our switches, we hope to see the integration of open technologies to encourage interop and offer flexibility, with standardization between controller and switch. Vendors that get greedy and chase vendor lock-in will be taking a risk.

[Image: network switch]

Distributed Yes, But How Much Makes Sense?

Aruba Networks, Cisco and others have begun pushing controller intelligence and the control plane out to the AP. We think that has value when there is limited bandwidth between controller and AP, to avoid hairpinning traffic. Just as important, if not more so, in fiber-rich enterprises is the need to apply coherent, ubiquitous policy for management, security and quality, which comes from the control plane. As speeds continue to increase, so will distribution of the control plane for scale, but that can be done in a modular fashion to avoid the cost of every point in the network being a fully featured device sitting at 5% utilization. Centralization brings a cost benefit, whether better management or less silicon needed to service the applications at the edge. This is a distributed-systems-theory problem, not a put-a-man-on-the-moon problem or a collide-photons-and-take-pictures-at-the-same-time problem. Choosing the best widget technologically is only part of the solution; the path to excellence is consumers designing architectures that fit both business and technological needs, and vendors developing products flexible enough to provide the building blocks for a lasting and scalable solution.

Things to Question

Before going further, there are some reasonable caveats worth emphasizing:

  • We believe that an enterprise really should not go after that 3 or 4 dB improvement at a large CapEx to get shiny new Category 6a wiring in for the average office worker in their cube. Category 5 still supports Gigabit Ethernet.
  • Even if the IT department is lucky enough to have a hardware refresh budget, it should revisit the port counts in the communications closet before each refresh. Odds are the desktop services department has replaced a good chunk of the machines on those ports with laptops that have built-in wireless.
  • When and if BYOD and telecommuting become the norm, that will further decrease the number of wired ports.
  • IT should let the facts drive inbound/outbound bandwidth increases, not the overzealous vendor. Before upgrading for the sake of upgrading, IT should start watching traffic and trends and set thresholds (see the sketch after this list).
  • Make enterprise and cloud applications, VoIP, and emerging wireless standards like 802.11ac part of the capacity planning forecast.
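Here is a minimal sketch of what threshold-driven upgrade decisions can look like, assuming you already collect interface utilization samples (for instance via SNMP polling). The 70% trigger and the 95th percentile are hypothetical starting points, not recommendations from any vendor.

```python
# Flag an uplink for upgrade only when its measured busy-hour (p95)
# utilization crosses a trigger, not because a vendor EOL'd the switch.
from statistics import quantiles

def should_upgrade(samples_mbps, link_mbps, trigger=0.70):
    """True when the 95th-percentile sample exceeds trigger * capacity."""
    p95 = quantiles(samples_mbps, n=100)[94]   # 95th-percentile sample
    return p95 / link_mbps >= trigger

# A month of 5-minute samples would go here; a few fake ones to illustrate:
history_mbps = [34, 180, 240, 210, 95, 40]
print(should_upgrade(history_mbps, link_mbps=1000))   # False: p95 well under 70%
```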

Wireless Reliability

It has been a fact for a number of years now that wireless is reliable enough at the enterprise level. Today I am even more confident, given stable device drivers and improved spectrum. That said, troubleshooting a broken microwave oven with a spectrum analyzer is a mess. I see hospitals deliver drugs and monitor patient vital signs over wireless networks 24x7x365; scary, but it works. When it doesn't, it rarely has anything to do with the lack of wires and more to do with the lack of QA in a vendor's code or poorly written device drivers on the end device.

We will start seeing GigE get saturated between traditional distribution and access on the typical 1 Gb uplinks to a building or large floor in a campus. If your budget is tight and fiber is already pulled in a bundle with plenty of capacity, Link Aggregation Groups (LAGs) will give you double the capacity while you wait for 10 Gb costs to continue to come down.
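As a rough planning sketch, you can estimate how much runway a 2 x 1 Gb LAG buys before 10 Gb becomes unavoidable. The 50% yearly growth rate here is purely an assumed illustration; substitute your own trend data.

```python
# Rough runway estimate for a 2 x 1 Gb LAG. The 50% yearly growth rate
# is an assumed illustration; substitute your own measured trend.
def years_until_saturation(peak_mbps, capacity_mbps, yearly_growth=0.50):
    years = 0
    while peak_mbps < capacity_mbps:
        peak_mbps *= 1 + yearly_growth
        years += 1
    return years

# Start from the 240 Mbps peak observed earlier, against a 2 Gb LAG.
print(years_until_saturation(peak_mbps=240, capacity_mbps=2000))   # ~6 years
```

One caveat worth noting: a LAG hashes individual flows across member links, so any single flow still tops out at 1 Gb; it is the aggregate capacity that doubles.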

Be open-minded. Not preaching here, by the way; I love routine and dislike change as much as anyone, but betting against technology, progress and change is a gamble. Someday, sooner rather than later, the business will say it wants to cut a million dollars in cabling to that building; operate on facts. Don't worry, we will always have wired networks, since something has to backhaul wireless traffic. Those wired networks will just be much more software defined, via SDN, just as the wireless industry has been for a number of years now. Or just ask the carriers that continue to under-provision links to cell towers to take profit.

Most people we have talked to until recently considered the wired days in the enterprise to be numbered. How long, who knows, but BYOD, mobility and the amazing solving of physics problems in the finite wireless spectrum lead me to believe it is not measured in double-digit years. This is just some musing; I get paid to do what's best for my organization, not to build the best network in history. Sometimes good enough is just that, and the reality of budgets forces prioritization.

But recently there has been quite a turnaround, one very few could have predicted, let alone taken advantage of. While cloud computing is suited to many tasks, including getting an enterprise off the ground or running a high-traffic website, it does not make sense for others, particularly well-established organizations that require high-performance execution. Consequently, there is a realization that some tasks are best handled on in-house hardware. Things like databases, which need really high performance in terms of reading and writing to memory, really belong on bare-metal servers or private setups. The second determining element is the cost of cloud computing. If an enterprise is going to spend $25,000 a month on Amazon or Rackspace virtual servers (and that is $300,000 a year), then for just $100,000 the company could buy all the physical servers it needs for the job, and those servers would last for at least five years. The enterprise will add more machines over that time as needs continue to grow, but its server costs won't come anywhere close to the fees it was paying for virtual servers.
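The arithmetic behind that claim, as a minimal sketch. Colo, power and admin costs are ignored here for simplicity, which is an assumption worth revisiting in a real comparison.

```python
# Cloud-vs-bare-metal break-even, using the figures quoted above.
cloud_monthly = 25_000
hardware_capex = 100_000
hardware_life_years = 5

cloud_yearly = cloud_monthly * 12                       # $300,000 a year
breakeven_months = hardware_capex / cloud_monthly       # 4 months
five_year_cloud = cloud_yearly * hardware_life_years    # $1,500,000
five_year_metal = hardware_capex                        # $100,000 capex only

print(f"The hardware pays for itself in {breakeven_months:.0f} months")
print(f"Five-year spend: cloud ${five_year_cloud:,} vs metal ${five_year_metal:,}")
```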
