
Insights into capacity management & planning

Costs and Outages

Analysts have estimated that 30% of outages are caused by lack of capacity and that, globally, there is $30bn of overspend on IT capacity. The issue is how to find the ‘hot spots’ and the ‘cold spots’. Capacity planning and management is the first logical step to making sure that your business doesn’t become one of those statistics.

At ITRS we regularly meet people who need our help to overcome problems like the ones below:

  • Inability to predict the load that a given level of business activity will place on production IT systems
  • Wasted capacity in the IT estate, because over-capacity is hard to identify across business cycles
  • Inability to run ‘what if’ scenarios before making a significant change, from something as simple as upgrading a server or virtualising smaller servers to predicting the capacity impact of a failover to DR (most DR checks are purely functional, not capacity tests)
  • Understanding the cause of an outage under high-load conditions, and the remedial actions that should follow.

IT Operational Analytics

These issues, combined with cost pressures, mean that IT professionals are increasingly looking at big-data-driven capacity management and planning tools that can model application and business activities as well as capture infrastructure load data. Gartner calls this part of ITOA AIOps (Artificial Intelligence for IT Operations), where sophisticated models and predictive techniques help IT professionals answer the questions above.

Improvements in data storage at scale mean that companies can now capture large amounts of highly granular data, which keeps the models accurate during short-burst, high-load situations (the worst kind of capacity issue to understand). Multi-minute or multi-hour averages lose the very information needed to see what happens when spiky loads hit the systems.
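A minimal sketch of that point, using made-up per-minute CPU numbers (the data and thresholds are illustrative, not taken from any ITRS product): a three-minute burst that would breach capacity is obvious in the granular series but disappears entirely once the hour is averaged.

```python
# Sketch (hypothetical data): why coarse averaging hides short load bursts.
# A one-minute CPU series containing a brief spike is compared against its
# hourly mean -- the spike that matters vanishes from the average.
import numpy as np

rng = np.random.default_rng(0)
cpu_pct = rng.normal(35, 5, size=60)      # one hour of per-minute CPU %, ~35% baseline
cpu_pct[42:45] = [96, 98, 95]             # a three-minute burst near saturation

print(f"per-minute peak : {cpu_pct.max():.0f}%")   # ~98% -- a real capacity event
print(f"hourly average  : {cpu_pct.mean():.0f}%")  # ~38% -- looks perfectly healthy
```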

The models need to be sophisticated, not just simple linear regression. They need to learn the patterns caused by the business cycle, watch for anomalies that deviate from the expected pattern, and use that data to inform models that can identify potential hot spots and bottlenecks in the IT estate.
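As a hedged illustration of what "learning the business cycle" can mean at its simplest (the data, column names and three-sigma threshold below are assumptions for the sketch, not the techniques used by any particular product): a per-hour-of-day baseline is learned from history and new observations are flagged when they stray too far from that pattern.

```python
# Sketch (hypothetical data and thresholds): a seasonal baseline rather than one
# global linear fit. Expected load per hour of day is learned from history; new
# observations are flagged when they deviate by more than k standard deviations.
import numpy as np
import pandas as pd

history = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=24 * 28, freq="h"),
    "cpu_pct": np.random.default_rng(1).normal(40, 6, 24 * 28),
})
history["hour"] = history["timestamp"].dt.hour

# Per-hour mean and spread form the expected pattern for the business day.
baseline = history.groupby("hour")["cpu_pct"].agg(["mean", "std"])

def is_anomaly(ts: pd.Timestamp, observed: float, k: float = 3.0) -> bool:
    """Flag load that deviates from the learned hourly pattern by more than k sigma."""
    expected = baseline.loc[ts.hour]
    return abs(observed - expected["mean"]) > k * expected["std"]

print(is_anomaly(pd.Timestamp("2024-02-01 10:00"), observed=85.0))  # True: far above pattern
print(is_anomaly(pd.Timestamp("2024-02-01 10:00"), observed=42.0))  # False: within pattern
```

Real capacity models refine this considerably (weekday versus weekend cycles, trends, correlated metrics), but the principle of comparing against a learned pattern rather than a single average is the same.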

‘What if’ scenarios

The best capacity planning tools let the user model changes to the current production estate, from something as simple as upgrading a server to something as complex as modelling failover to an entire DR data centre (and no, DR is never identical to production). These models of potential changes let IT support staff work out which changes they might want to make, without actually making them to see the impact. That covers everything from modelling the recovery of spare capacity through downsizing or virtualisation, through to modelling a 2x or 3x trading day when market news generates significantly more trading activity.
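To make the idea concrete, here is a back-of-the-envelope version of such a ‘what if’ check (the server names, figures and the even-spread scaling assumption are all hypothetical; production tools model contention and queuing far more carefully):

```python
# Sketch (hypothetical capacities and workloads): scale observed peak utilisation
# for a busier trading day, or re-base it onto a smaller candidate server, and see
# whether any headroom survives the change.
servers = {
    # name: (observed peak CPU %, current core count)
    "pricing-01": (55.0, 32),
    "risk-02":    (70.0, 16),
}

def projected_peak(peak_pct: float, cores_now: int, cores_after: int, load_factor: float) -> float:
    """Naive linear projection: assumes load spreads evenly across cores."""
    return peak_pct * load_factor * cores_now / cores_after

for name, (peak, cores) in servers.items():
    busy_day = projected_peak(peak, cores, cores, load_factor=2.0)        # 2x trading day
    downsized = projected_peak(peak, cores, cores // 2, load_factor=1.0)  # halve the server
    print(f"{name}: 2x day -> {busy_day:.0f}% peak, downsized -> {downsized:.0f}% peak")
```

Anything projected past 100% is a candidate hot spot; anything comfortably below it on the downsizing line is a candidate cold spot.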

The bottom line

Capacity planning is not the answer to a new problem; rather, the financial world’s existing answer to old problems is not sustainable. We’re pleased to see our customers arming themselves with technology that scales, and using it to collect the information they need to distribute resources across their infrastructure. That information has never been richer than with the combination of Geneos’ data collection and the Insights Capacity Planner, produced in conjunction with Sumerian. As ITRS expands its work in capacity planning and management, we hope to change the habit of stockpiling resources unsustainably and to show businesses where their technology priorities should lie.