For IT Resiliency, Focus on Applications, Not Just Workloads

By Eric Kraieski, VP of Software Business Unit, Transitional Data Services

As businesses exploit digital technologies to deploy disruptive strategies, the IT team plays a critical role in making these transitions happen rapidly and securely. This creates a new dynamic between business and IT and increases the complexity of connections between people, organizations, technology, data, application architecture, and security models. As this powerful digital adoption wave accelerates, IT managers must not only keep pace with the usual demands but also anticipate constant change and establish a platform that is agile, flexible and, most importantly, resilient.

Agility and flexibility don’t come without a price, though. In fact, unless properly planned, each new technology introduced only adds complexity to the mix: more specialized staff to train and manage, more systems to maintain, and so on. For example, companies bringing a cloud solution into an IT infrastructure must execute a number of steps and include different stakeholders, and will likely need to bring on new IT staff or provide extensive training to existing staff. As the environment becomes more distributed, it also becomes more amorphous and vulnerable. Add in containers, software-defined data centers, machine learning and AI – what could possibly go wrong?

More than 80 percent of Fortune 500 companies still have legacy applications running some critical business functions. Designed to run on premises, these applications are typically tightly coupled with the system’s service components, often with large numbers of critical dependencies between these components. This differs greatly from the loosely coupled architectures of modern, software-as-a-service environments in which components stand independently and are resilient to changes in the behavior of other components with which they may share information.

Without a clear picture of all the relationships and dependencies, each new technology adds the potential risk of unplanned outages or regulatory compliance violations. Proper application-to-infrastructure mapping (including virtual, cloud and SaaS), however, will allow you to better understand the full impact of transitions and recovery plans, making it easier to prioritize upgrade and migration options.
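As a rough illustration of what such a map makes possible (all component names here are hypothetical), application-to-infrastructure dependencies can be modeled as a simple adjacency table, which then lets you query the blast radius of any single component before an upgrade, migration, or recovery:

```python
from collections import deque

# Hypothetical mapping: each application -> infrastructure and services it depends on
DEPENDS_ON = {
    "billing-app":   ["payments-svc", "db-server-01"],
    "payments-svc":  ["saas-gateway", "db-server-01"],
    "reporting-app": ["db-server-02"],
}

def impacted_by(component):
    """Return all applications that directly or transitively depend on `component`."""
    # Invert the map: component -> applications that depend on it
    dependents = {}
    for app, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(app)

    impacted, queue = set(), deque([component])
    while queue:
        node = queue.popleft()
        for app in dependents.get(node, ()):
            if app not in impacted:
                impacted.add(app)
                queue.append(app)
    return impacted

# A failure of db-server-01 impacts both billing-app and payments-svc
print(sorted(impacted_by("db-server-01")))
```

The transitive traversal matters: a failure of the SaaS gateway above impacts not only the payments service that calls it directly but also the billing application one hop upstream.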

Of course, these technologies and tools are useful in building scalable and dynamic hybrid cloud environments. Many are even vital. However, today these tools are siloed, only serving certain functions, and they often have limited interoperability.

Building a ‘Vault of Truth’ to Ensure Information Resilience

Unfortunately, there are no standards for easily leveraging information gathered from sibling systems. How do your tools for ITSM, auto-discovery, provisioning, workload migration and database migration know what to do, and when? You may have your VMware or cloud environment fully covered, but how do you ensure that information required by an app in VMware or the cloud can access information or services from another application on a mainframe? How is this information resilience assured?

There are a few key things to consider that will help to overcome these barriers and build resiliency into your increasingly complex architecture:

  • Make sure you have a vision of your applications and their dependencies, including your application-to-application, application-to-services, and application-to-infrastructure dependencies. Often just recovering the primary application that failed is not enough.
  • A clear depiction of upstream and downstream relationships is required to fully recover and to communicate impact to stakeholders. For example, say you have recovered your primary application in the public cloud, but you were unaware that a SaaS provider used for a web service is part of your payment flow and needed to be notified of an IP address change. You aren’t really recovered. A clear picture of the overall impact of an operational issue, such as a failed server, will save you critical time and reduce the impact when such an event occurs.
  • Combine human wisdom with machine knowledge. Regardless of how you gather the information (ITSM, auto-discovery or spreadsheets), cross-validate it with the application installer, operator, or deployment team that knows everything that was required to install the application. Validating the data and application dependencies with subject matter experts mitigates the risk of missing key relationships and dependencies, enabling teams to make better decisions. This validation with application SMEs can be a critical factor in successful IT migration, cloud migration, failover, recovery and disaster recovery events.
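The cross-validation step in the last bullet can be sketched as a simple set comparison between what your discovery tools found and what the application's SMEs report is required (a minimal sketch; the component names and categories are hypothetical):

```python
def cross_validate(discovered, sme_reported):
    """Compare tool-discovered dependencies with SME-reported ones."""
    discovered, sme_reported = set(discovered), set(sme_reported)
    return {
        "confirmed":       discovered & sme_reported,  # seen by both tools and people
        "missed_by_tools": sme_reported - discovered,  # risk: invisible to automation
        "unknown_to_smes": discovered - sme_reported,  # candidates for review or cleanup
    }

report = cross_validate(
    discovered=["db-server-01", "payments-svc"],
    sme_reported=["db-server-01", "payments-svc", "saas-gateway"],
)
print(report["missed_by_tools"])  # the SaaS gateway would be missed in a failover plan
```

The "missed_by_tools" bucket is exactly the kind of gap described above: a dependency the SMEs know about but auto-discovery never saw, which would otherwise surface only during a failover.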

From Disaster Recovery to Business Resiliency

Resilience planning without an understanding of this kind of interplay between applications means that the number of things that may potentially throw a wrench in your critical operations is larger, sometimes by orders of magnitude. Using this “vault of truth” as a single source of information will help …
