How to not fear change
Recently, Gartner Research VP Jim Sinur wrote in his blog about how 'Process Should Enable Change without Pain, but …'. He observes that in "...today's world, change will become the norm, so we will all get a good dose of it going forward. 2013 will be a transition year to something new for processes and complex business outcomes."
Sinur also described how change is experienced in IT, capturing the feeling that overwhelms IT professionals: "Now change almost always creates a moment of fear and even some pain."
With the pace of change accelerating, and seeing how it impacts IT operations, we began thinking about how IT teams need new, smoother ways to reduce the fear of change and to support change for the business painlessly.
When an incident occurs, can you quickly know "what changed"?
While application performance and availability have never been more important to business success, agile development practices and rapid business change have outstripped IT Operations teams' ability to keep pace. Today IT executives and professionals are seeking to align IT Operations more closely with agile development and business initiatives.
Are you consistently applying changes?
I introduced one of our customers to a whole new approach to managing IT: collecting configuration changes, analyzing them for impact, and making the IT Ops team aware of the knock-on effects of those changes. The plan was that every morning they would verify that all changes moved to production overnight had been applied consistently on every target server, exactly as verified in pre-production or staging.
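That morning verification routine can be sketched as a simple configuration fingerprint check: hash the staging configuration that was validated, then flag any production server whose configuration does not match. This is a minimal illustration, not the customer's actual tooling; the server names, settings, and the assumption that configurations can be compared as plain text are all invented for the example:

```python
import hashlib

def fingerprint(config_text: str) -> str:
    """Return a short content hash of a configuration file."""
    return hashlib.sha256(config_text.encode("utf-8")).hexdigest()[:12]

def verify_rollout(staging_config: str, prod_configs: dict) -> list:
    """Return the names of servers whose config does not match staging."""
    expected = fingerprint(staging_config)
    return [server for server, text in prod_configs.items()
            if fingerprint(text) != expected]

# Hypothetical overnight rollout: app02 was missed and still has the old value.
staging = "max_connections=200\ntimeout=30\n"
prod = {
    "app01": "max_connections=200\ntimeout=30\n",
    "app02": "max_connections=100\ntimeout=30\n",  # stale value
}
print(verify_rollout(staging, prod))  # ['app02']
```

A check like this turns "did the overnight release land everywhere?" into a one-line report each morning, rather than a manual server-by-server inspection.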
How can you update ad hoc changes?
So the question is: how will you manage one-off changes, or changes like this that do not follow policy? And how can you discover and identify changes in the network that may affect your applications? It's simple to correct a change in one system; but how can you validate your systems' configurations across the environment, and then update or correct any ad hoc changes that were made? The problem is complex and difficult to resolve. Identifying these changes in a timely manner, before they impact the application or soon after, reduces the risk to business continuity.
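One way to approach validation and correction, sketched below under the assumption that a system's configuration can be represented as key-value pairs, is to diff each system against an approved baseline and reset anything that drifted. The function names and settings here are hypothetical, not part of any particular product:

```python
def detect_adhoc_changes(baseline: dict, actual: dict) -> dict:
    """Return keys whose values drifted from the approved baseline."""
    drift = {}
    for key, expected in baseline.items():
        if actual.get(key) != expected:
            drift[key] = {"expected": expected, "found": actual.get(key)}
    # Settings present on the system but absent from the baseline are unapproved.
    for key in actual.keys() - baseline.keys():
        drift[key] = {"expected": None, "found": actual[key]}
    return drift

def remediate(actual: dict, drift: dict) -> dict:
    """Reset drifted keys to baseline values; drop unapproved keys."""
    fixed = dict(actual)
    for key, info in drift.items():
        if info["expected"] is None:
            fixed.pop(key, None)
        else:
            fixed[key] = info["expected"]
    return fixed

# Hypothetical example: one ad hoc edit and one unapproved debug flag.
baseline = {"max_connections": "200", "timeout": "30"}
actual = {"max_connections": "100", "timeout": "30", "debug": "true"}
drift = detect_adhoc_changes(baseline, actual)
print(remediate(actual, drift))  # back to the baseline values
```

The same diff that drives remediation also gives you the audit trail: for every correction you know what was expected, what was found, and when it diverged from policy.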
Are you locking down change instead of progressing?
The drive to ensure IT performance can lead organizations to lock down their environments with excessive controls: every change must be specifically identified, tested, monitored and, finally, approved.
How do you handle constant change requests?
End users today expect nothing less than a system that is always available. This is put at risk when IT Operations is under pressure to accept and deploy a stream of changes, often without visibility into their background, content, or impact. Release, deployment, and operations teams face the added challenge of ensuring accurate, error-free application and software deployments, and appropriate system configuration during promotion and deployment, even though pre-production and production configurations are inherently different. The introduction of changes into IT infrastructure is now considered one of the leading causes of system downtime, with as many as 10% of all changes rolled back due to irreconcilable issues.
Is change causing drift?
Whether environment changes are planned or not, IT analytics can help ensure the environment performs as expected and remains available. By analyzing detected drift against planned changes, you can identify when your environment is slipping off course and quickly take corrective action.
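As a minimal sketch of that idea, assuming configuration snapshots are key-value maps and planned changes are tracked by setting name (all names below are hypothetical), drift is simply whatever changed that no approved change accounts for:

```python
def diff_snapshots(before: dict, after: dict) -> set:
    """Return the keys whose value changed between two snapshots."""
    return {k for k in before.keys() | after.keys()
            if before.get(k) != after.get(k)}

def unplanned_drift(before: dict, after: dict, planned: set) -> set:
    """Detected changes not covered by an approved change are drift."""
    return diff_snapshots(before, after) - planned

# Hypothetical example: the heap increase was an approved change,
# but nobody filed a change for the thread-count edit.
before = {"jvm_heap": "4g", "threads": "64"}
after = {"jvm_heap": "8g", "threads": "32"}
print(unplanned_drift(before, after, planned={"jvm_heap"}))  # {'threads'}
```

Subtracting the planned set keeps the signal clean: operators are alerted only to the changes nobody intended, instead of being flooded by every approved release.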
In the dark about change?
Without a reasonable degree of control over changes to critical processes, businesses are left in the dark about the actual state and availability of their supporting infrastructure. Why? Because today's dynamic IT ecosystems are extremely complex.