Data Center

Len Rosenthal (US) - Your Data Commutes too: The Case for True Real-time Performance Monitoring

Storage and storage network-related outages and performance incidents are rarely caused by a single issue. They are typically the result of multiple, seemingly unrelated issues in the data center. The net result is that the Storage Area Network (SAN) infrastructure, already running in a somewhat vulnerable state, cannot absorb the additional load and complexity introduced by virtualization, which leads to outages or brownouts. Full visibility into how your data commutes in real time helps IT managers proactively identify risk areas that could otherwise grow into business-impacting application performance problems.

Silicon Valley, a phrase coined in 1971, is home to many of the world's most innovative companies, but it's also home to a lot of aging IT infrastructure. Physical devices such as servers, network switches, cables, and storage devices degrade over time and will ultimately fail. The good news is that these devices all emit subtle transmission errors, including aborts and retries, that can be detected and monitored over time. If these errors are immediately compared against normal states defined by predetermined thresholds, performance problems can be predicted with a high level of accuracy. The key is having a true real-time performance monitoring solution that can see every transmission between servers, switches, and storage arrays. Achieving sub-second, real-time performance monitoring requires a non-intrusive, hardware-based approach.
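The threshold comparison described above can be sketched in a few lines. This is a minimal illustration with made-up link names, error types, and threshold values, not any vendor's actual product logic:

```python
# Sketch: flag SAN links whose error counters exceed a "normal state"
# threshold for the latest monitoring interval.
# All names and numbers below are hypothetical, for illustration only.

# Baseline thresholds per error type (errors allowed per interval).
THRESHOLDS = {"aborts": 5, "retries": 50, "crc_errors": 1}

def at_risk_links(samples):
    """Return links whose current error counts breach any threshold.

    samples: dict mapping link name -> dict of error counters
    collected during the latest monitoring interval.
    """
    flagged = {}
    for link, counters in samples.items():
        breaches = {err: count for err, count in counters.items()
                    if count > THRESHOLDS.get(err, float("inf"))}
        if breaches:
            flagged[link] = breaches
    return flagged

# Example interval: one link with excessive retries, one healthy link.
current = {
    "host1-switchA": {"aborts": 0, "retries": 120, "crc_errors": 0},
    "host2-switchA": {"aborts": 1, "retries": 10, "crc_errors": 0},
}
print(at_risk_links(current))  # {'host1-switchA': {'retries': 120}}
```

In practice the thresholds would be derived from observed baselines per device and per error type, but the principle is the same: compare each interval's counters against a known-good state and surface only the deviations.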


The alternative to true real-time solutions is an approach that relies on averages. Nearly all performance monitoring products today are software-based and poll devices every 5 to 20 minutes. The more frequently they poll, the more CPU cycles they consume and the more performance degrades. As a result, these solutions average the collected data and draw conclusions based on those averages. This may be fine in data centers that don't run performance-sensitive or business-critical applications, but for applications that can't be down, even for a second, this software-based approach is not viable.
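The problem with averaging can be shown numerically. In this sketch (with synthetic latency figures chosen for illustration), a single one-second stall within a five-minute polling window all but disappears from the average:

```python
# Sketch with synthetic numbers: how a polling average hides a short spike.
# 300 one-second latency samples covering a 5-minute polling window.
samples_ms = [5.0] * 300
samples_ms[150] = 900.0  # one brief, business-impacting stall

average = sum(samples_ms) / len(samples_ms)
peak = max(samples_ms)

print(f"5-minute average: {average:.1f} ms")  # ~8.0 ms - looks healthy
print(f"worst second:     {peak:.1f} ms")     # 900.0 ms - the real problem
```

A monitor that only sees the five-minute average reports roughly 8 ms and raises no alarm, while sub-second sampling would have caught the 900 ms stall the moment it happened.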

Think about your commute to work. If I give you data for your commute from the Golden Gate Bridge to Palo Alto that states the average weekday commute time between 5am and 10am is 50 minutes and you have to be at an important company meeting at 9am, you would leave your house no later than 8:10am. If you had real-time data on the commute, you would know that, due to the traffic conditions happening right now, to arrive by 9am, the commute will actually take you 75 minutes and you will have to leave your house at 7:45am, at the latest, to be on time. The average commute data provided a clue about how long the commute would take, but only real-time data will ensure you plan accordingly and arrive on time.

The same is true for application response time, especially in today's complex virtualized and cloud environments. Looking at averages as opposed to real-time data will only guarantee that performance bottlenecks will occur as the transmission errors go undetected. As applications are increasingly virtualized, the need for true real-time infrastructure monitoring solutions accelerates. With real-time performance monitoring solutions, your data won't get caught in traffic.

By Len Rosenthal, VP of Marketing, Virtual Instruments





