By Eric Kraieski, VP of Software Business Unit, Transitional Data Services
As businesses exploit digital technologies to deploy disruptive strategies, the IT team plays a critical role in making these transitions happen rapidly and securely. This creates a new dynamic between business and IT and increases the complexity of connections between people, organizations, technology, data, application architecture, and security models. As this powerful digital adoption wave accelerates, IT managers must not only keep pace with the usual demands but also anticipate constant change and establish a platform that is agile, flexible and, most importantly, resilient.
Agility and flexibility don’t come without a price, though. In fact, unless properly planned, each new technology introduced only adds complexity to the mix: more specialized staff to train and manage, more systems to maintain, and so on. For example, companies bringing a cloud solution into an IT infrastructure must execute a number of steps and involve different stakeholders, and will likely need to bring on new IT staff or provide extensive training to existing staff. As the environment becomes more distributed, it also becomes more amorphous and vulnerable. Add in containers, software-defined data centers, machine learning and AI – what could possibly go wrong?
More than 80 percent of Fortune 500 companies still have legacy applications running some critical business functions. Designed to run on premises, these applications are typically tightly coupled with the system’s service components, often with large numbers of critical dependencies between these components. This differs greatly from the loosely coupled architectures of modern, software-as-a-service environments in which components stand independently and are resilient to changes in the behavior of other components with which they may share information.
Without a clear picture of all the relationships and dependencies, each new technology adds the potential risk of unplanned outages or regulatory compliance violations. Proper application-to-infrastructure mapping (including virtual, cloud and SaaS), however, will allow you to better understand the full impact of transitions and recovery plans, making it easier to prioritize upgrade and migration options.
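The mapping described above is, at its core, a dependency graph: once each application-to-infrastructure relationship is recorded, the blast radius of any outage or migration can be computed by walking the graph. The sketch below illustrates the idea with a hypothetical topology; all component names (`order-app`, `vm-042`, and so on) are illustrative, not taken from any real product.

```python
from collections import defaultdict, deque

# Hypothetical dependency map: for each component, the list of
# components that depend on it. All names are illustrative.
dependents = defaultdict(list)

def add_dependency(component, depends_on):
    """Record that `component` depends on `depends_on`."""
    dependents[depends_on].append(component)

# Example topology: two apps backed by a VM, which runs on a host,
# which in turn relies on a SAN volume.
add_dependency("order-app", "vm-042")
add_dependency("billing-app", "vm-042")
add_dependency("vm-042", "esx-host-7")
add_dependency("esx-host-7", "san-vol-3")

def impact_of(component):
    """Return every component transitively affected if `component` fails."""
    seen, queue = set(), deque([component])
    while queue:
        node = queue.popleft()
        for dep in dependents[node]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

print(impact_of("san-vol-3"))
# → ['billing-app', 'esx-host-7', 'order-app', 'vm-042']
```

Even a toy graph like this makes the point: taking the SAN volume offline for an upgrade touches every application four layers up, which is exactly the kind of impact a mapping exercise is meant to surface before the change window opens.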
Of course, these technologies and tools are useful in building scalable and dynamic hybrid cloud environments. Many are even vital. However, today these tools are siloed, only serving certain functions, and they often have limited interoperability.
Unfortunately, there are no standards for easily leveraging information gathered from sibling systems. How do your tools for ITSM, auto-discovery, provisioning, workload migration and database migration know what to do, and when? You may have your VMware or cloud environment fully covered, but how do you ensure that an application in VMware or the cloud can access information or services from another application on a mainframe? How is this information resilience assured?
There are a few key things to consider that will help to overcome these barriers and build resiliency into your increasingly complex architecture:
Resilience planning without an understanding of this kind of interplay between applications means that the number of things that can throw a wrench in your critical operations is larger, sometimes by orders of magnitude. Using this “vault of truth” as a single source of information will help …