During the planning stages, datacenter consolidation conversations typically focus on quantifying IT gains and resolving immediate deployment obstacles. Often overlooked in these discussions is how the new ultra-high-density environment will be effectively monitored.
Here's a look at the primary paths of datacenter consolidation, along with the monitoring challenges and considerations for each environment. In working with other IT teams and consultants, you are the voice for ensuring that application visibility and a great end-user experience are incorporated into the initial consolidation plan.
Consolidation Options
There are two paths to consolidation: internally hosted or outsourced (think cloud or third-party provider). Let's look at each from the monitoring perspective.
Internally-Hosted Consolidations
In-house hosting typically involves combining physically distributed computing and storage assets into a concentrated, blade-server-based, virtualized environment. Add capabilities such as vMotion, and you have the makings of a monitoring black hole. How so, you may ask?
To begin, think of the classic three-layer network architecture: core, distribution, and access. In a consolidated setting, this distributed design physically collapses into a wall of racks. Throw highly virtualized servers into the mix, and things get more interesting from a monitoring perspective: user traffic appears to vanish. In this hidden, virtualized realm, much of the action occurs at the application level, unbeknownst to the engineer. For example, in a virtualized multi-tier app, the web frontend makes supporting calls to databases and other applications, and then generates a service response; the only thing an external monitoring device sees, however, is the response traveling back to the user. Tracking the data only as it enters and leaves the virtual environment makes for poor monitoring.
To solve this riddle, you'll need to regain all your previous monitoring points at the core, distribution, and access layers. As you stand in front of your gleaming new racks, it's all still there; the trick is to locate where these three logical constructs physically exist within the mass of devices and determine how best to extract the relevant data for your monitoring tools. You'll need to make the same determination at the virtualized server level, with the added complexity of multi-tier apps abstracted across and running between these devices. Sufficient knowledge of the application architecture is required to place instrumentation that can quantify application health and status. In the phrasing of a network designer, your legacy north-south (access-distribution-core) flow now has a second dimension: east-west flow across virtualized servers.
Visibility is usually achieved via a combination of SPAN ports and TAPs (discrete or virtual). Your particular IT implementation and application tier structure will dictate how this is best achieved for effective instrumentation. For example, you may decide to access network data at the top-of-rack switch or within the core with a TAP. Alternatively, if traffic loads are not excessive, the integrated SPAN capabilities of your switches may suffice. For views into inter-server virtual traffic, many vendors offer virtual SPANs or TAPs with their solutions; probes running on individual VMs are yet another means of assessing this traffic.
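To make this concrete, here is a minimal sketch, assuming a Cisco IOS-style top-of-rack switch and the open-source netmiko library, of configuring a local SPAN session that mirrors a server-facing port to a port feeding a monitoring tool. The hostname, credentials, and interface names are placeholders for your own environment.

```python
# Sketch: configure a local SPAN session on a Cisco IOS-style
# top-of-rack switch so traffic can be mirrored to a monitoring tool.
# Hostname, credentials, and interface names below are placeholders.
from netmiko import ConnectHandler

switch = {
    "device_type": "cisco_ios",
    "host": "tor-switch-01.example.com",  # hypothetical ToR switch
    "username": "admin",
    "password": "change-me",
}

span_config = [
    # Mirror all traffic (rx and tx) on the server-facing port...
    "monitor session 1 source interface GigabitEthernet1/0/1 both",
    # ...to the port connected to the capture appliance.
    "monitor session 1 destination interface GigabitEthernet1/0/24",
]

with ConnectHandler(**switch) as conn:
    output = conn.send_config_set(span_config)
    print(output)
```

A discrete TAP would deliver the same visibility passively, without consuming switch resources, which is why heavier traffic loads tend to favor TAPs over SPAN.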
Compounding the monitoring obstacles, the primary inbound and outbound connections of these dense environments are often heavily utilized 10 GbE or 40 GbE links, capable of overloading all but the fastest packet-capture appliances. In addition, for larger datacenters, network packet brokers are often required to realistically manage the many instrumentation points. Lastly, if you're utilizing vMotion, another consideration is whether to monitor vMotion events via the vCenter API so that application visibility is preserved when a workload is migrated to a different physical server.
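If you do go down the vMotion-tracking path, a minimal sketch, assuming a reachable vCenter and the pyVmomi Python bindings, might query the event manager for completed migrations as shown below. The host and credentials are placeholders, and error handling is omitted for brevity.

```python
# Sketch: query vCenter for recent vMotion (VmMigratedEvent) events
# using pyVmomi. Host and credentials are placeholders; certificate
# verification is disabled here for lab use only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com",
                  user="monitor@vsphere.local",
                  pwd="change-me",
                  sslContext=context)
try:
    event_mgr = si.RetrieveContent().eventManager
    # VmMigratedEvent is logged when a vMotion completes.
    filter_spec = vim.event.EventFilterSpec(eventTypeId=["VmMigratedEvent"])
    for event in event_mgr.QueryEvents(filter_spec):
        print(event.createdTime, event.fullFormattedMessage)
finally:
    Disconnect(si)
```

Feeding these events into your monitoring platform lets you re-associate application flows with whichever physical server currently hosts the VM.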
Fortunately, if you employ polling technologies as part of your resource monitoring, the option to interrogate network-attached devices remains available after consolidation. If not, this is an excellent time to consider adding that functionality for its deep IT infrastructure awareness and its ability to cross-correlate with packet-based performance metrics.
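As an illustration of the polling approach, here is a minimal sketch, assuming SNMPv2c access and the pysnmp library, that interrogates a device for interface octet counters. The target hostname and community string are placeholders; production deployments would favor SNMPv3.

```python
# Sketch: poll a network-attached device for interface octet counters
# via SNMPv2c using pysnmp. Target address and community string are
# placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),                    # SNMPv2c
    UdpTransportTarget(("core-switch.example.com", 161)),  # placeholder host
    ContextData(),
    ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", 1)),   # ifIndex 1
    ObjectType(ObjectIdentity("IF-MIB", "ifOutOctets", 1)),
))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```

Polled counters such as these are exactly the infrastructure metrics that can be cross-correlated with packet-based performance data.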
Externally-Hosted Consolidations
Should you choose to outsource your IT infrastructure, your monitoring abilities will depend on your provider. If the provider allows you to install instrumentation (not likely), then everything mentioned above for internal hosting applies. Much more common, however, is to be restricted to a customer portal or an API exposing select operating metrics; the depth and breadth of what is presented varies widely by service provider. Regardless, it is important to leverage what is available and to augment it with other information, including the SLAs in place with your Internet service vendor. Monitoring application and service conversations at the ingress point of your own facility is also a best practice, and can be used to validate whatever agreements are in place with the providers.
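Because each provider's portal and API differ, any example here is necessarily hypothetical. The sketch below assumes a REST-style metrics endpoint, a bearer token, and a contractual latency threshold purely for illustration; consult your provider's documentation for the real interface.

```python
# Sketch: pull select operating metrics from a provider's customer API
# and compare them against an SLA threshold. The endpoint, token, and
# JSON field names are all hypothetical.
import requests

API_URL = "https://portal.example-provider.com/api/v1/metrics"  # hypothetical
SLA_LATENCY_MS = 50  # assumed contractual threshold

resp = requests.get(API_URL,
                    headers={"Authorization": "Bearer change-me"},
                    timeout=10)
resp.raise_for_status()

for metric in resp.json().get("metrics", []):  # hypothetical schema
    if metric.get("name") == "latency_ms" and metric["value"] > SLA_LATENCY_MS:
        print(f"SLA breach: latency {metric['value']} ms exceeds "
              f"{SLA_LATENCY_MS} ms")
```

Comparing these provider-reported figures against your own ingress-point measurements is what turns the SLA from a document into something you can actually verify.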
Conclusion
Datacenter consolidation offers significant business value and is either already part of your life or soon will be. The good news is that, with careful due diligence, your monitoring capabilities can remain robust and your application and service delivery levels exceptional. Keep network service in sight and in mind.