Anyone who has tracked the network and application performance monitoring space over the past few years knows that User Experience is a hot topic. But what does it really mean?
Understanding, click for click, what users are doing, how long they have to wait, and whether they run into error messages, web pages that fail to load properly, or other connectivity issues: these are the elements that can make or break User Experience.
With mission-critical corporate applications, there are three main obstacles to understanding User Experience. The first obstacle to visibility and transparency is the popularity of home-grown applications, which are present in nearly all enterprise companies. In many instances, they represent the company’s “secret sauce” - something that allows the company to execute faster, stronger, and better than its competitors. In other instances, home-grown applications are out-of-the-box products that have been modified to fit the company’s business, operational, or product models; Microsoft SQL Server is a great example of that.
Most performance tools are designed to analyze out-of-the-box applications, not home-grown ones. Even so, collecting statistics such as response time per application can help pinpoint user experience issues.
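To make that concrete, here is a rough sketch of the kind of per-application response-time rollup a tool (or a script fed by one) might produce. The record format and application names are purely hypothetical; real data would come from packet captures, agents, or flow records.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical transaction records: (application, response_time_seconds).
# In practice these would come from packet captures, agent logs, or flow data.
transactions = [
    ("crm-portal", 0.42),
    ("crm-portal", 1.87),
    ("order-entry", 0.35),
    ("order-entry", 0.31),
    ("crm-portal", 2.10),
]

# Group response times by application so the slow ones stand out.
by_app = defaultdict(list)
for app, rt in transactions:
    by_app[app].append(rt)

for app, times in sorted(by_app.items()):
    print(f"{app}: avg {mean(times):.2f}s, worst {max(times):.2f}s over {len(times)} requests")
```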
The second obstacle is multi-tier applications. Tracking user experience from one tier to another is extremely difficult because the platforms that make up a multi-tier application do a poor job of correlating their connections with one another.
Performance tools can monitor the individual platforms (web front end, middleware, and database back end) and provide per-tier metrics. The ability to reconstruct a transaction as it passes from tier to tier can also provide a piece of the User Experience puzzle. The visibility and performance challenge of watching thousands of individual connections in a typical multi-tier application remains the priority for next-generation performance tools.
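As a simplified illustration of what tier-to-tier reconstruction looks like, the sketch below assumes each tier logs its timing against a shared correlation ID. That ID is an assumption for the example; in many environments the correlation has to be inferred from timestamps and connection tuples instead.

```python
from collections import defaultdict

# Hypothetical per-tier timing records: (tier, correlation_id, duration_ms).
# The shared correlation ID is an assumption; without one, correlation
# usually falls back to timestamps and connection 4-tuples.
events = [
    ("web",        "req-1001", 820),
    ("middleware", "req-1001", 610),
    ("database",   "req-1001", 540),
    ("web",        "req-1002", 95),
    ("middleware", "req-1002", 60),
    ("database",   "req-1002", 12),
]

# Stitch the tiers of each request back together.
requests = defaultdict(dict)
for tier, corr_id, ms in events:
    requests[corr_id][tier] = ms

# Report where each request spent its time, tier by tier.
for corr_id, tiers in sorted(requests.items()):
    breakdown = ", ".join(f"{t}={ms}ms" for t, ms in tiers.items())
    print(f"{corr_id}: {breakdown}")
```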
The third and final obstacle is understanding your tools and avoiding finger pointing. Tools provide a wealth of information, but they still largely depend on their users to interpret the results. Why is the connection showing duplicate ACKs and retransmissions for a single request? Is it supposed to do that? Is the application written efficiently? Is the network or the application at fault? Understanding the norm is a big part of defining user experience.
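One way to capture “the norm” is to baseline a metric while the application is known to be healthy and flag anything that deviates from it. The sketch below does this for per-connection retransmission rates; the baseline numbers and the three-standard-deviation threshold are illustrative assumptions, not a prescription.

```python
from statistics import mean, stdev

# Hypothetical per-connection retransmission rates (percent) recorded while
# the application was behaving normally - this baseline is an assumption.
baseline_rates = [0.2, 0.4, 0.3, 0.5, 0.2, 0.3, 0.4]

# New observations to judge against the norm.
observations = {"conn-17": 0.3, "conn-42": 4.8}

avg, sd = mean(baseline_rates), stdev(baseline_rates)
for conn, rate in observations.items():
    # Flag anything more than three standard deviations above the baseline.
    if rate > avg + 3 * sd:
        print(f"{conn}: {rate}% retransmissions - abnormal, worth investigating")
    else:
        print(f"{conn}: {rate}% retransmissions - within the norm")
```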
Documenting historical performance is also a highly recommended best practice. You can spot usage trends that assist with capacity planning and help ensure that there is plenty of bandwidth to go around. The network gets blamed first. We know that. So why not be proactive and eliminate it as a possible detriment to user experience?
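As a back-of-the-envelope example of how historical data feeds capacity planning, the sketch below takes weekly peak utilization figures and projects when a link would reach a planning threshold. The numbers and the 80% threshold are made up for illustration.

```python
from statistics import mean

# Hypothetical weekly peak bandwidth utilization (percent of link capacity),
# oldest first - the data and the 80% planning threshold are assumptions.
weekly_peak_util = [42, 45, 44, 48, 51, 53, 57, 60]

# Simple linear trend: average week-over-week growth in peak utilization.
growth = mean(b - a for a, b in zip(weekly_peak_util, weekly_peak_util[1:]))

# Project forward to estimate when the link approaches the planning threshold.
current, threshold, weeks = weekly_peak_util[-1], 80, 0
while current < threshold and growth > 0:
    current += growth
    weeks += 1

print(f"Average growth: {growth:.1f} points/week")
print(f"At this rate, roughly {weeks} weeks until peak utilization reaches {threshold}%")
```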