The spectrum of corporate risk scenarios is constantly widening, challenging IT's ability to comply with corporate business continuity policies, sector-driven standards such as ISO 22301:2012, and disaster-preparedness legislation. Society's ever-increasing reliance on IT for its well-being makes it compulsory for organizations of all sizes to demonstrate that they can recover from disruptions and disasters in their IT infrastructure. Disaster Recovery (DR) is a C-level concern.
While virtualization, private clouds and computing fabrics are transforming datacenters and enabling the delivery of IT as a Service (ITaaS), the rate of change is accelerating, sometimes out of control. Hardware upgrades, software updates, middleware patches, malware-protection mechanisms and new components are introduced at an unprecedented rate, making it all but impossible to gauge the risk exposure of any individual change.
In parallel with this more dynamic infrastructure, n-tier application complexity is growing geometrically as the number of software components that collaborate to deliver an IT service continues to increase. Service-Oriented Architectures (SOA) and SaaS are fueling interdependency, raising the risk that a single component failure will snowball into severe service disruption.
Recovery can only be assured through regular testing. Legacy mechanisms for DR testing are mostly manual, and therefore expensive and infrequent – yearly, for most organizations. A new paradigm is called for, one in which DR exercising is fully automated and iterative, with daily or hourly cycles.
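The automated, iterative exercising described above can be pictured as a scheduled loop that triggers failover for each service, then times recovery against its Recovery Time Objective (RTO). The sketch below is purely illustrative: the function names, the `failover` and `health_check` callbacks, and the result structure are assumptions for the sake of example, not a reference to any particular DR tooling.

```python
import time
from dataclasses import dataclass

@dataclass
class DrResult:
    """Outcome of one DR exercise for one service (illustrative structure)."""
    service: str
    recovered: bool
    recovery_seconds: float

def exercise_service(name, failover, health_check, rto_seconds):
    """Trigger failover for a single service and time recovery against its RTO.

    `failover` and `health_check` are caller-supplied callbacks standing in
    for whatever site-switch and monitoring mechanisms an organization uses.
    """
    start = time.monotonic()
    failover(name)  # switch the service over to its recovery environment
    while time.monotonic() - start < rto_seconds:
        if health_check(name):  # service is answering from the recovery site
            return DrResult(name, True, time.monotonic() - start)
        time.sleep(0.01)  # poll until the RTO window expires
    return DrResult(name, False, rto_seconds)  # RTO missed

def run_dr_cycle(services, failover, health_check, rto_seconds=3600.0):
    """One automated DR exercise cycle: test every service, collect results.

    In practice this would be invoked by a scheduler on a daily or hourly
    cadence, with results fed into compliance reporting.
    """
    return [exercise_service(s, failover, health_check, rto_seconds)
            for s in services]
```

Because each cycle is unattended and produces structured results, it can run daily or hourly and surface any service that misses its RTO long before a real disaster does.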