The biggest problem
What is causing the biggest performance delays between the end user and all the components in between, and how can these kinds of problems be resolved?
The first problem is the complexity of the application landscape. In the past, one box served all end users, and detecting performance delays and diagnosing those kinds of problems involved only a few components.
The next phase was client-server architecture, and diagnosing performance problems became a challenge. A transaction no longer involves one server but many. And today we have not only many servers but different locations as well: at multinationals, some servers sit in Europe while others are spread across the globe.
Layers of complexity
Today we have managed to add even more layers of complexity. It started with the cloud, which added another layer to performance complexity, and if you think that was it, hybrid environments added yet another. And still we are not finished: Docker, containers, elastic scaling, and so on.
Now you have a performance problem. Thousands of combinations can be the root cause. To make it a little more complex: yes, as a performance architect you like complexity. So let us look at the area of parameter analytics.
Parameter analytics?
Why parameter analytics? Suppose you install a software component and, say, 100 users work on the system. Several parameter values determine the performance of the application. Two years later you have 500 users. Do you think anybody has changed those parameter values? No: your admin people are not trained for these kinds of settings.
Consider the total number of applications running on your system, and think of all the parameters involved in all of them. Suppose you have 1,000 applications running. Every application has on average 5 parameters responsible for performance, which adds up to 5,000 parameter values. Every application runs on average on 3 servers: a web server, a database server and an application server. So 5,000 times 3 gives 15,000 possible sources of performance delays. I could continue, but you get the picture of the complexity.
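The estimate above can be sketched in a few lines of Python. The counts are the illustrative averages from the text, not measured data:

```python
# Back-of-the-envelope estimate of how many parameter/server
# combinations could be behind a performance delay.
applications = 1_000       # applications in the landscape (illustrative)
params_per_app = 5         # performance-relevant parameters per application
servers_per_app = 3        # web, database and application server

parameter_values = applications * params_per_app            # 5,000
possible_delay_points = parameter_values * servers_per_app  # 15,000

print(parameter_values, possible_delay_points)  # 5000 15000
```

Even with these modest averages, no administrator can review 15,000 values by hand, which is the argument for automating the analysis.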
Capable of detecting wrong values
Next, how can I diagnose and resolve these problems? Many software vendors offer sophisticated diagnostic tools, but are they capable of detecting wrong values? They can do a lot, but a large part of the work is consultancy to tune the monitors. For example, suppose Oracle Automatic Memory Management is turned on. Oracle will then pre-allocate all the memory if you do not set limits in the parameter values. The result of this memory behaviour is a red alert in the monitoring tools, even though nothing is actually wrong.
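As a minimal sketch of why a naive monitor misfires here (the rule, names and thresholds below are hypothetical, not taken from any real monitoring product): a check that alerts purely on the fraction of memory in use will fire on a database that deliberately pre-allocates its memory pool at startup.

```python
def naive_memory_alert(used_bytes: int, total_bytes: int,
                       threshold: float = 0.9) -> bool:
    """Hypothetical monitor rule: red alert when usage exceeds the threshold."""
    return used_bytes / total_bytes > threshold

# A database with automatic memory management grabs most of the host RAM
# up front, so the naive rule fires even though this usage is intentional.
total = 64 * 1024**3          # 64 GiB host
pre_allocated = 60 * 1024**3  # pool claimed at startup, largely idle

print(naive_memory_alert(pre_allocated, total))  # True: a false red alert
```

Distinguishing a genuine shortage from an intentional pre-allocation requires knowledge of the parameter values themselves, which is exactly what a usage-only monitor lacks.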
Based on this, we started four years ago with the development of an AI (artificial intelligence) system capable of performing parameter analytics. And yes: parameter analytics is new to performance improvement. ITPA is capable of detecting and resolving these problems, and we have proven this with our solution at several companies. One example: response times went from 8 seconds to 0.1 seconds in 2 weeks; another: no more downtime, and helpdesk calls reduced by 60%.
Numbers like these can help your business, because performance is not only an IT matter: it is a matter of business impact and damage. Yet today most companies do not look at the business impact; they only see that IT is not functioning well. We need to change this, because IT today is the backbone of most organisations and therefore has an enormous business impact.
by: Tjeerd Saijoen, CEO ITPA Group