Slowdown is the New Outage (SINTO)

This is a copy of an original post on the AppDynamics blog.



The Strategic Brief:

With ‘Orange Is The New Black’ (OITNB) wrapping its final season, let’s reclaim the title formula ‘x is the new y’ with SINTO. This post explores tracing, monitoring, observability and business awareness. By understanding the differences among these four methods, you’ll be ready to drive agile applications, gain funding for lowering technical debt, and focus on customer retention.



Common application outage sources have been addressed by implementing Agile, DevOps and CI/CD processes. The resulting increase in system uptime allows site reliability engineers (SREs) to move their focus onto tuning performance, and for good reason. While outage-driven news headlines can cause stock prices to plummet in the short term, performance-driven reputation loss is a slow burn that drives longer-term customer loss.

Whether accessed via web browsers, smart phones or Internet of Things devices, slowdowns drive customers to abandon shopping carts and consider competitors. Slowdowns lead to reputation loss for enterprises—a loss that may even flow to an engineer’s career. If you were considering hiring an SRE, how much weight would you give to the company’s reputation for poor or unpredictable customer experiences?

Just as high blood pressure is a silent killer of humans, slowdown is the silent killer of reputations.

Slowdowns vs Outages

Consider the significant differences between outages and slowdowns.


Slowdowns are commonly the result of resource constraints. Either you don’t have enough of a resource, or you’re using it poorly and causing contention. If you have too many network transactions on a narrow bandwidth, or if system memory is filled with unnecessary locked pages, a slowdown can result. In a prior life managing hospital data centers, I saw invalid HL7 messages generate recurring error records in message queues, choking inter-hospital communications. Nurses had to run between laboratories and wards with results because the needless error messages slowed the genuine laboratory results getting through. We know outages lose customers, but when there are no outages, what will drive customer loss?

Slowdown is the new outage. #slowdownisthenewoutage #SINTO

Insight vs Observability

DevOps methodologies came with a minimum requirement for monitoring application performance in production.

In turn, SRE comes with the requirement for observability—the capacity to reach into the code and answer an unpredictable question.

While observability supports diagnosis, insight is needed for resolution. SRE implementations create a team of engineers delivering a platform of products and processes for developers to use, ensuring the highest availability. In addition, SRE moves the focus from reaction to proaction, generating a requirement for spotting the initial predictors of slowdown. This creates the need for a way to observe what code is doing while running in production. Observable metrics need context to become actionable insight.

AIOps delivers the ML-driven automatic baselines and contextual correlation to allow SRE teams to engage preemptively (which in turn improves business outcomes, as Gartner’s AIOps paper reports). Once a predictor anomaly is triggered, the SRE team can respond by updating a SQL query, coding a new function call, or scaling up resources to prevent the slowdown from escalating into a threat to the business. Post-response, the SRE team can then pass the details back to the application owners for longer-term resolutions.

While DTrace or manual breakpoints may be great for single applications on single machines, they will “often fall short while debugging distributed systems as a whole,” notes Cindy Sridharan in Distributed Systems Observability. When trying to diagnose a complete customer experience relying on multiple business transactions in distributed multi-cloud production applications, observability falls short of insight. The good news is that if you have implemented monitoring as part of your DevOps rollout, the APM used to react to outages can be expanded to observe and diagnose slowdowns.

Finding Insight on Top of Observability

Neither monitoring nor observability is an end unto itself. For slowdown detection, we must see the broader picture of the total user experience. We must be able to take a step back from our usual I-shaped technical silos and apply T-shaped skills to seek insight into the causes of slowdowns.

Supporting observability can bloat applications with additional code that creates metrics for APM to capture. Observability only requires that the individual metrics be present within the code; it does not correlate them into the overall customer experience.

Delivering insight requires several key functions (a minimal sketch follows this list):

  • Baselines identifying normal performance
  • Segmented metrics of customer business transactions to identify weak points
  • Levers to isolate code portions within the production environment
  • Common trusted metric sources that span technology silos
  • Overhead minimization when performance is normal
  • Noise filtering from using ML-trained filters for anomaly detection
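
As a minimal sketch of the first and last functions above, here is a toy Python version; the window sizes and thresholds are invented, and a production APM would use ML-trained, seasonal baselines rather than a simple rolling window:

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=60, sigmas=3.0):
    """Flag values that fall outside a rolling baseline of normal.

    samples: ordered list of (timestamp, value) pairs, e.g. response
    times in ms. A real AIOps baseline is ML-trained and seasonal;
    this rolling mean/stdev band only shows the shape of the idea.
    """
    anomalies = []
    for i in range(window, len(samples)):
        history = [v for _, v in samples[i - window:i]]
        mu, sd = mean(history), stdev(history)
        ts, value = samples[i]
        # Noise filtering: only values well outside the band are raised.
        if sd > 0 and abs(value - mu) > sigmas * sd:
            anomalies.append((ts, value, mu))
    return anomalies
```

Segmented metrics would mean running one such baseline per business transaction, so a weak checkout flow is not hidden inside a healthy site-wide average.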

 

Creating observability within each application individually incurs technical debt, while an SRE-supporting APM solution can deliver observability across multiple applications. Moving to a DevOps or SRE model is problematic when you lack an understanding of how to observe and gain insight from metrics. Read more on how APM applies to DevOps.

Remember, it is the metric you don’t watch that bites you.

5 Critical Metrics When Deciding What To Automate In AIOps

This is a copy of an original post on the Forbes blog.



The Strategic Brief:

What are the best ways to apply AIOps in your IT environment? Here are five key metrics to consider.



We automate for three benefits: to improve responsiveness, remove drudgery, and deliver consistent results. But automation has consequences, too. As you automate you’re potentially creating technical debt. The automated procedure must be kept up to date whenever you update the systems it automates. If it impacts, say, the network and you change your networking vendor, you’ll have to update the automation and the scripts around it. That’s why it’s important to assess what you need (and don’t need) to automate.

You may wish you could create an all-encompassing automation platform. However, automating reactions to production anomalies may include some major resolution tasks, like a rebuild or recovery of a database. Based on my consulting work, I’ve developed five criteria that I use when working with clients to help them decide what to automate in their IT environments.

Five Criteria for Assessing What to Automate in AIOps

1) Frequency

Will it take longer to implement the automation than to respond manually to events?

The straw that broke the camel’s back applies frequently to IT anomalies. A first step in an automation assessment is to identify how often the triggering event or anomaly has occurred, or may occur. There’s no point in automating the reaction to a one-off event. On the other hand, even though this may be the first time the anomaly has reached a crisis point, it may have occurred before.

When an issue finally comes to your attention—when something breaks—it’s often just the final straw in a series of events, like when a system overloads after coming close many times in prior weeks or months. A query language built into your performance monitor is a powerful support feature, as it allows you to quickly search for times when you came close to an anomaly in the past. Once you know what metrics lead up to the anomaly, you can query to find out how often the event occurs.
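
Query syntax varies by monitoring product, so here is the idea as a hedged Python sketch; the limit, margin and metric series are hypothetical:

```python
def count_near_misses(series, limit, margin=0.9, min_gap=10):
    """Count distinct episodes where a metric approached its limit.

    series: ordered list of (timestamp, value) pairs; limit: the level
    at which the anomaly actually fires; margin: how close counts as a
    near miss (90% of the limit by default); min_gap: samples that must
    stay below the margin before a new episode is counted.
    """
    episodes, in_episode, quiet = 0, False, 0
    for _, value in series:
        if value >= margin * limit:
            if not in_episode:
                episodes += 1
                in_episode = True
            quiet = 0
        else:
            quiet += 1
            if quiet >= min_gap:
                in_episode = False
    return episodes
```

A count well above one tells you the "one-off" crisis has been brewing for a while, which strengthens the case for automation.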

2) Impact

Are you automating the solution to a major issue? If the anomaly has an insignificant impact on your overall enterprise, incurring the technical debt of an automated response isn’t the answer. And if the problem is just a temporary slowdown and the response you would automate has high risk, then automation isn’t a go either.

So ask yourself: What’s the cost to the business?

Conversely, if you’re dealing with a dinosaur-extinction type of impact—one that, say, could cost the business millions of dollars in lost sales—you’ll definitely need to automate a response so that your customers never take the hit. In fact, the anomaly will be fixed before your customers are even aware of it. That’s where tracking business transactions will enable you to correlate the business impact with the organizational value.

3) Coverage

Coverage describes the proportion of the real-world process that can actually be automated. If the automated task requires a manual step in the middle, such as unplugging a cable or having to contact your cloud provider, automating other parts of the procedure may not improve reaction times at all.

But if you’re sure the automation will cover the entire solution—I’m thinking of simple things here like boosting network bandwidth—then obviously automation is both easy and the right way to go. Scoring this metric should be binary: either the process can be fully automated or can’t be automated at all.

4) Probability

The probability of successful automation measures the accuracy of the reactive procedure. There are two sides to this metric: the uniqueness of the trigger, and the certainty of the reaction’s outcome. The triggering anomaly must be unique enough to identify that the reactive procedure is definitely the best way to address the event. Accurate root cause analysis (RCA) is critical and one of the significant benefits of applying machine learning or AIOps. However, an accurate RCA is only part of the solution, as the automated reactive procedure must predictably generate the same results in the same way each time.

5) Latency

One of the benefits of automation is improved responsiveness, and there’s a correlation between the value of automation and latency—the time an automated reaction will take to complete. Low-impact reactions, such as those that boost network bandwidth or increase the server or container pool, are perfect for automatic reactions. With these reactions the anomaly is often resolved before a human can even type in the necessary commands, and you avoid operator errors that can occur in manual responses.

Reactions that may take multiple hours to complete require caution. Do you really want to automatically start a multi-hour database rebuild or recovery, knowing that it will impact the production environment while it runs? You can still automate the commands to avoid operation error, but when the latency is long, you may wish to put an authorisation step into the automated reaction.

If an anomaly is happening often, and the automated reaction will resolve the anomaly faster than you can type, automate it!

The AIOps Features That Matter Most

When I work with clients, we assign a score to each of the key metrics. With some clients I have applied weightings to each metric to help balance business value against opportunity cost and technical debt. Totaling these scores not only helps us decide if something should be automated, but also with prioritizing the creation of reactive procedures based on business needs.
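
A hypothetical sketch of that scoring exercise follows; the weights and scores are illustrative, not a standard:

```python
# Score each candidate automation 0-5 on the five criteria, weight the
# criteria by business priority, and rank by total. Coverage is binary,
# so it is scored 0 or 5, as suggested earlier.
WEIGHTS = {"frequency": 2.0, "impact": 3.0, "coverage": 1.5,
           "probability": 2.5, "latency": 1.0}

candidates = {
    "expand web server pool": {"frequency": 4, "impact": 3, "coverage": 5,
                               "probability": 5, "latency": 5},
    "rebuild database":       {"frequency": 1, "impact": 5, "coverage": 0,
                               "probability": 2, "latency": 1},
}

def automation_score(scores):
    return sum(WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(candidates.items(),
                           key=lambda kv: automation_score(kv[1]),
                           reverse=True):
    print(f"{name}: {automation_score(scores):.1f}")
```

Sorting candidates by total score gives a defensible priority order for building reactive procedures.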

For effective business applications, you’ll need an application performance management (APM) solution with these required AIOps features:

  • Machine learning-driven anomaly detection and root cause analysis
  • Automated responses
  • Third-party integration capability

Your APM solution should also allow you to select automation procedures with built-in query languages and business transaction awareness. The ultimate goal here is to balance your efforts between automating the most valuable metrics, and freeing up your time to move from reactive to preemptive architecture and infrastructure reviews.

Successfully Deploying AIOps, Part 3: The AIOps Apprenticeship

This is a copy of an original post on the AppDynamics blog.



The Strategic Brief:

By augmenting operations teams, AIOps enables organizations to preemptively ensure that applications, architectures and infrastructures are ready for rapid digital transformation.



Part one of our series on deploying AIOps identified how an anomaly breaks into two broad areas: problem time and solution time. Part two described the first deployment phase, which focuses on reducing problem time. With trust in the AIOps systems growing, we’re now ready for part three: taking on solution time by automating actions.

[Photo: French clock. © 2019 Marco Coulter]

Applying AIOps to Mean Time to Fix (MTTFix)

The power of AIOps comes from continuous enhancement of machine learning, powered by improved algorithms and training data combined with the decreasing cost of processing power. A measured example was Google’s project for accurately reading street address numbers from its street-imaging systems—a necessity in countries where address numbers don’t run sequentially but rather are based on the age of the buildings. Humans examining photos of street numbers have an accuracy of 98%. Back in 2011, the available algorithms and training data produced a trained model with 91% accuracy. By 2013, improvements and retraining boosted this number to 97.5%. Not bad, though humans still had the edge. In 2015, the latest ML models passed human capability at 98.1%. This potential for continuous enhancement makes AIOps a significant benefit for operational response times.

You Already Trust AI/ML with Your Life

If you’ve flown commercially in the past decade, you’ve trusted the autopilot for part of that flight. At some major airports, even the landings are automated, though taxiing is still left to pilots. Despite already trusting AI/ML to this extent, enterprises need more time to trust AI/ML in newer fields such as AIOps. Let’s discuss how to build that trust.

Apprenticeships allow new employees to learn from experienced workers and avoid making dangerous mistakes. They’ve been used for ages in multiple professions; even police departments have a new academy graduate ride along with a veteran officer. In machine learning, ML frameworks need to see meaningful quantities of data in order to train themselves and create nested neural networks that form classification models. By treating automation in AIOps like an apprenticeship, you can build trust and gradually weave AIOps into a production environment.

By this stage, you should already be reducing problem time by deploying AIOps, which delivers significant benefits before adding automation to the mix. These advantages include the ability to train the model with live data, as well as observe the outcomes of baselining. This is the first step towards building trust in AIOps.

Stage One: AIOps-Guided Operations Response

With AIOps in place, operators can address anomalies immediately. At this stage, operations teams are still reviewing anomaly alerts to ensure their validity. Operations is also parsing the root cause(s) identified by AIOps to select the correct issue to address. While remediation is manual at this stage, you should already have a method of tracking common remediations.

In stage one, your operations teams oversee the AIOps system and simultaneously collect data to help determine where auto-remediation is acceptable and necessary.

Stage Two: Automate Low Risk

Automated computer operations began around 1964, with IBM’s OS/360 operating system allowing operators to combine multiple individual commands into a single script, turning multiple manual steps into a single command. Initially, the goal was to identify specific, recurring manual tasks and figure out how to automate them. While this approach delivered a short-term benefit, building isolated, automated processes incurred technical debt, both for future updates and eventual integration across multiple domains. Ultimately it became clear that a platform approach to automation could reduce potential tech debt.

Automation in the modern enterprise should be tackled like a microservices architecture: Use a single domain’s management tool to automate small actions, and make these services available to complex, cross-domain remediations. This approach allows your investment in automation to align with the lifespan of the single domain. If your infrastructure moves VMs to containers, the automated services you created for networking or storage are still valid.

You will not automate every single task. Selecting what to automate can be tricky, so when deciding whether to fully automate an anomaly resolution, use these five questions to identify the potential value:

  • Frequency: Does the anomaly resolution occur often enough to warrant automation?
  • Impact: Are you automating the solution to a major issue?
  • Coverage: What proportion of the real-world process can be automated?
  • Probability: Does the process always produce the desired result, or can it be impacted by environmentals?
  • Latency: Will automating the task achieve a faster resolution?

Existing standard operating procedures (SOPs) are a great place to start. With SOPs, you’ve already decided how you want a task performed, have documented the process, and likely have some form of automation (scripts, etc.) in place. Another early focus is to address resource constraints by adding front-end web servers when traffic is high, or by increasing network bandwidth. Growing available resources is low risk compared to restarting applications. While bandwidth expansion may impact your budget, it’s unlikely to break your apps. And by automating resource constraint remediations, you’re adding a rapid response capability to operations.
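
Below is a hedged sketch of such a low-risk remediation; scale_web_pool and the anomaly payload are hypothetical stand-ins for your orchestration API and your APM’s alert output:

```python
def handle_anomaly(anomaly, scale_web_pool, max_instances=20):
    """React to an AIOps-identified traffic anomaly by growing resources.

    anomaly: dict from the monitoring system, e.g.
        {"kind": "traffic_spike", "tier": "front-end", "instances": 8}
    scale_web_pool: callable into your (hypothetical) orchestration API.
    Growing resources is low risk; anything riskier should stay manual
    at this stage, or sit behind an authorisation step.
    """
    if anomaly.get("kind") != "traffic_spike":
        return "escalate to operator"          # not a codified SOP yet
    current = anomaly.get("instances", 0)
    target = min(current * 2, max_instances)   # cap growth, protect budget
    if target <= current:
        return "at capacity - escalate"
    scale_web_pool(tier=anomaly["tier"], instances=target)
    return f"scaled {anomaly['tier']} to {target} instances"
```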

In stage two, you augment your operations teams with automated tasks that can be triggered in response to AIOps-identified anomalies.

Stage Three: Connect Visibility to Action (Trust!)

As you start to use automated root cause analysis (RCA), it’s critical to understand the probability concept of machine learning. Surprisingly for a classical computer technology, ML does not output a binary, 0-or-1 result, but rather produces statistical likelihoods or probabilities of the outcome. The reason this output sometimes looks definitive is that a coder or “builder” (the latter if you’re AWS’s Andy Jassy) has decided that an acceptable probability will be presented as the definitive result. But under the covers of ML, there is always a percentage likelihood. The nature of ML means that RCA sometimes will result in a selection of a few probable causes. Over time, the system will train itself on more data and probabilities and grow more accurate, leading to single outcomes where the root cause is clear.
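
A small sketch of how that builder’s decision might look; the cause labels and the cutoff value are invented:

```python
def present_root_cause(probabilities, cutoff=0.85):
    """Turn ML likelihoods into what the operator sees.

    probabilities: mapping of candidate cause -> model likelihood.
    Under the covers there is always a percentage; the 'definitive'
    answer only appears when one likelihood clears the builder's cutoff.
    """
    ranked = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    top_cause, top_p = ranked[0]
    if top_p >= cutoff:
        return f"root cause: {top_cause}"       # looks definitive
    return "probable causes: " + ", ".join(
        f"{cause} ({p:.0%})" for cause, p in ranked[:3])

print(present_root_cause({"db connection pool": 0.91, "gc pause": 0.06,
                          "network": 0.03}))
print(present_root_cause({"db connection pool": 0.55, "gc pause": 0.30,
                          "network": 0.15}))
```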

Once trust in RCA is established (stage one), and remediation actions are automated (stage two), it’s time to remove the manual operator from the middle. The low-risk remediations identified in stage two can now be connected to the specific root cause as a fully automated action.

The benefits of automated operations are often listed as cost reduction, productivity, availability, reliability and performance. While all of these apply, there’s also the significant benefit of expertise time. “The main upshot of automation is more free time to spend on improving other parts of the infrastructure,” according to Google’s SRE project. The less time your experts spend in MTTR steps, the more time they can spend on preemption rather than reaction.

Similar to DevOps, AIOps will require a new mindset. After a successful AIOps deployment, your team will be ready to transition from its existing siloed capabilities. Each team member’s current specialization(s) will need to be accompanied by broader skills in other operational silos.

AIOps augments each operations team, including ITOps, DevOps and SRE. By giving each team ample time to move into preemptive mode, AIOps ensures that applications, architectures and infrastructures are ready for the rapid transformations required by today’s business.

Successfully Deploying AIOps, Part 2: Automating Problem Time

This is a copy of an original post on the AppDynamics blog.



The Strategic Brief:

Built-in AI/ML—such as in AppDynamics APM—delivers value by activating the cognitive engine of AIOps to address anomalies.



[Photo: Asian clock. © 2017 Marco Coulter]

In part one of our Successfully Deploying AIOps series, we identified how an anomaly breaks into two broad areas: problem time and solution time. The first phase in deploying AIOps focuses on reducing problem time, with some benefit in solution time as well. This simply requires turning on machine learning within an AIOps-powered APM solution. Existing operations processes will still be defining, selecting and implementing anomaly rectifications. When you automate problem time, solution time commences much sooner, significantly reducing an anomaly’s impact.

AIOps: Not Just for Production

Anomalies in test and quality assurance (QA) environments cost the enterprise time and resources. AIOps can deliver significant benefits here. Applying the anomaly resolution processes seen in production will assist developers navigating the deployment cycle.

Test and QA environments are expected to identify problems before production deployment. Agile and DevOps approaches have introduced rapid, automated building and testing of applications. Though mean time to resolution (MTTR) is commonly not measured in test and QA environments (which aren’t as critical as those supporting customers), the benefits to time and resources still pay off.

Beginning your deployment in test and QA environments allows a lower-risk, yet still valuable, introduction to AIOps. These pre-production environments have less business impact, as they are not visited by customers. Understanding performance changes between application updates is critical to successful deployment. Remember, as the test and QA environments will not have the production workload available, it’s best to recreate simulated workloads through synthetic testing.

With trust in AIOps built from first applying AIOps to mean time to detect (MTTD), mean time to know (MTTK) and mean time to verify (MTTV) in your test and QA environments, your next step will be to apply these benefits to production. Let’s analyze where you’ll find these initial benefits.

Apply AI/ML to Detection (MTTD)

An anomaly deviates from what is expected or normal. Detecting an anomaly requires a definition of “normal” and a monitoring of live, streaming metrics to see when they become abnormal. A crashing application is clearly an anomaly, as is one that responds poorly or inconsistently after an update.

With legacy monitoring tools, defining “normal” was no easy task. Manually setting thresholds required operations or SRE professionals to guesstimate thresholds for all metrics measured by applications, frameworks, containers, databases, operating systems, virtual machines, hypervisors and underlying storage.

AIOps removes the stress of threshold-setting by letting machine learning baseline your environment. AI/ML applies mathematical algorithms to different data features seeking correlations. With AppDynamics, for example, you simply run APM for a week. AppDynamics observes your application over time and creates baselines, with ML observing existing behavioral metrics and defining a range of normal behavior with time-based and contextual correlation. Time-based correlation removes alerts related to the normal flow of business—for example, the login spike that occurs each morning as the workday begins; or the Black Friday or Guanggun Jie traffic spikes driven by cultural events. Contextual correlation pairs metrics that track together, enabling anomaly identification and alerts later when the metrics don’t track together.

AIOps will define “normal” by letting built-in ML watch the application and automatically create a baseline. So again, install APM and let it run. If you have specific KPIs, you can add these on top of the automatic baselines as health rules. With baselines defining normal, AIOps will watch metric streams in real time, with the model tuned to identify anomalies in real time, too.
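
As a toy illustration of the time-based idea (not AppDynamics’ actual algorithm), the sketch below buckets history by hour of week, so the Monday-morning login spike is “normal” for Monday mornings rather than an alert:

```python
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(history):
    """history: list of (datetime, value) pairs. Group by hour-of-week
    so recurring business rhythms count as normal behavior."""
    buckets = defaultdict(list)
    for ts, value in history:
        buckets[(ts.weekday(), ts.hour)].append(value)
    return {k: (mean(v), stdev(v)) for k, v in buckets.items() if len(v) > 1}

def is_anomalous(baseline, ts, value, sigmas=3.0):
    key = (ts.weekday(), ts.hour)
    if key not in baseline:
        return False                  # no definition of normal yet
    mu, sd = baseline[key]
    return sd > 0 and abs(value - mu) > sigmas * sd
```

Contextual correlation would layer on top of this, pairing metrics that normally track together and alerting when they diverge.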

Apply AI/ML to Root Cause Analysis (MTTK)

The first step to legacy root cause analysis (RCA) is to recreate the timeline: When did the anomaly begin, and what significant events occurred afterward? You could search manually through error logs to uncover the time of the first error. This can be misleading, however, as sometimes the first error is an outcome, not a cause (e.g., a crash caused by a memory overrun is the result of a memory leak running for a period of time before the crash).

In the midst of an anomaly, multiple signifiers often will indicate fault. Logs will show screeds of errors caused by stress introduced by the fault, but fail to identify the underlying defect. The operational challenge is unpacking the layers of resultant faults to identify root cause. By pinpointing this cause, we can move onto identifying the required fix or reconfiguration to resolve the issue.

AIOps creates this anomaly timeline automatically. It observes data streams in real time and uses historical and contextual correlation to identify the anomaly’s origin, as well as any important state changes during the anomaly. Even with a complete timeline, it’s still a challenge to reduce the overall noise level. AIOps addresses this by correlating across domains to filter out symptoms from possible causes.
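
The timeline idea can be sketched in a few lines; the domains and onset times below are invented, and a real AIOps engine weighs far more signals:

```python
from datetime import datetime

# Hypothetical onset times detected per domain by baseline monitoring.
onsets = {
    "database slow queries": datetime(2019, 7, 1, 9, 2),
    "app error rate":        datetime(2019, 7, 1, 9, 7),
    "front-end latency":     datetime(2019, 7, 1, 9, 9),
}

# Sort by onset: earlier state changes are candidate causes, later
# ones are more likely downstream symptoms of the same fault.
timeline = sorted(onsets.items(), key=lambda kv: kv[1])
origin, started = timeline[0]
print(f"probable origin: {origin} at {started:%H:%M}")
for event, ts in timeline[1:]:
    print(f"  likely symptom: {event} (+{(ts - started).seconds // 60} min)")
```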

There’s a good reason why AIOps’ RCA output may not always identify a single cause. Trained AI/ML models do not always produce a zero or one outcome, but rather work in a world of probabilities or likelihoods. The output of a self-taught ML algorithm will be a percentage likelihood that the resulting classification is accurate. As more data is fed to the algorithm, these outcome percentages may change if new data makes a specific output classification more likely. Early snapshots may indicate a priority list of probable causes that later refine down to a single cause, as more data runs through the ML models.

RCA is one area where AI/ML delivers the most value, and the time spent on RCA is the mean time to know (MTTK). While operations is working on RCA, the anomaly is still impacting customers. The pressure to conclude RCA quickly is why war rooms get filled with every possible I-shaped professional (a deep expert in a particular silo of skills) in order to eliminate the noise and get to the signal.

Apply AI/ML to Verification

Mean time to verify (MTTV) is the remaining MTTR portion automated in phase one of an AIOps rollout. An anomaly concludes when the environment returns to normal, or even to a new normal. The same ML mechanisms used for detection will minimize MTTV, as baselines already provide the definition of normal you’re seeking to regain. ML models monitoring live ETL streams of metrics from all sources provide rapid identification when the status returns to normal and the anomaly is over.
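
A minimal sketch of that verification step, with the window policy as an invented example:

```python
def verified_normal(recent_windows, required=12):
    """Declare the anomaly over only after N consecutive in-baseline
    windows (e.g. 12 five-minute windows = one quiet hour).

    recent_windows: ordered list of booleans, True if that window's
    metrics stayed within the ML baseline band.
    """
    if len(recent_windows) < required:
        return False
    return all(recent_windows[-required:])
```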

Later in your rollout when AIOps is powering fully automated responses, this rapid observation and response is critical, as anomalies are resolved without human intervention. Part three of this series will discuss connecting this visibility and insight to action.

Successfully Deploying AIOps, Part 1: Deconstructing MTTR

This is a copy of an original post on the AppDynamics blog.



The Strategic Brief:

Quantifying the value of successful AIOps deployment requires tracking subsidiary metrics within the industry default of mean time to resolution (MTTR). This post breaks out the metrics that form MTTR and divides them into two categories: problem and solution.



Somewhere between waking up today and reading this blog post, AI/ML has done something for you. Maybe Netflix suggested a show, or DuckDuckGo recommended a website. Perhaps it was your photos application asking you to confirm the tag of a specific friend in your latest photo. In short, AI/ML is already embedded into our lives.

The quantity of metrics in development, operations and infrastructure makes these disciplines a perfect partner for machine learning. With this general acceptance of AI/ML, it is surprising that organizations are lagging in implementing machine learning in operations automation, according to Gartner.

The level of responsibility you will assign to AIOps and automation comes from two factors:

  • The level of business risk in the automated action
  • The observed success of AI/ML matching real world experiences

The good news is this is not new territory; there is a tried-and-true path for automating operations that can easily be adjusted for AIOps.

It Feels Like Operations is the Last to Know

The primary goal of the operations team is to keep business applications functional for enterprise customers or users. They design, “rack and stack,” monitor performance, and support infrastructure, operating systems, cloud providers and more. But their ability to focus on this prime directive is undermined by application anomalies that consume time and resources, reducing team bandwidth for preemptive work.

An anomaly deviates from what is expected or normal. A crashing application is clearly an anomaly, yet so too is one that was updated and now responds poorly or inconsistently. Detecting an anomaly requires a definition of “normal,” accompanied with monitoring of live streaming metrics to spot when the environment exhibits abnormal behaviour.

The majority of enterprises are alerted to an anomaly by users or non-IT teams before IT detects the problem, according to a recent AppDynamics survey of 6,000 global IT leaders. This disappointing outcome can be traced to three trends:

  • Exponential growth of uncorrelated log and metric data triggered by DevOps and Continuous Integration and Continuous Delivery (CI/CD) in the process of automating the build and deployment of applications.
  • Exploding application architecture complexity with service architectures, multi-cloud, serverless, isolation of system logic and system state—all adding dynamic qualities defying static or human visualization.
  • Siloed IT operations and operational data within infrastructure teams.

Complexity and data growth overload development, operations and SRE professionals with data rather than insight, while siloed data prevents each team from seeing the full application anomaly picture.

Enterprises adopted agile development methods in the early 2000s to wash away the time and expense of waterfall approaches. This focus on speed came with technical debt and lower reliability. In the mid-2000s, manual builds and testing were identified as the impediment, leading to DevOps and later to CI/CD.

DevOps allowed development to survive agile and extreme approaches by transforming development—and particularly by automating testing and deployment—while leaving production operations basically unchanged. The operator’s role in maintaining highly available and consistent applications still consisted of waiting for someone or something to tell them a problem existed, after which they would manually push through a solution. Standard operating procedures (SOPs) were introduced to prevent the operator from accidentally making a situation worse for recurring repairs. There were pockets of successful automation (e.g., tuning the network) but mostly the entire response was still reactive. AIOps is now stepping up to allow operations to survive in this complex environment, as DevOps did for the agile transformation.

Reacting to Anomalies

DevOps automation removed a portion of production issues. But in the real world there’s always the unpredictable SQL query, API call, or even the forklift driving through the network cable. The good news is that the lean manufacturing approach that inspired DevOps can be applied to incident management.

To understand how to deploy AIOps, we need to break down the “assembly line” used to address an anomaly. The time spent reacting to an anomaly can be broken into two key areas: problem time and solution time.

Problem time: The period when the anomaly has not yet been addressed.

Anomaly management begins with time spent detecting a problem. The AppDynamics survey found that 58% of enterprises still find out about performance issues or full outages from their users. Calls arrive and service tickets get created, triggering professionals to examine whether there really is a problem or just user error. Once an anomaly is accepted as real, the next step generally is to create a war room (physical or Slack channel), enabling all the stakeholders to begin root cause analysis (RCA). This analysis requires visibility into the current and historical system to answer questions like:

  • How do we recreate the timeline?
  • When did things last work normally, and when did the anomaly begin?
  • How are the application and underlying systems currently structured?
  • What has changed since then?
  • Are all the errors in the logs the result of one or multiple problems?
  • What can we correlate?
  • Who is impacted?
  • Which change is most likely to have caused this event?

Answering these questions leads to the root cause. During this investigative work, the anomaly is still active and users are still impacted. While the war room is working tirelessly, no action to actually rectify the anomaly has begun.

Solution time: The time spent resolving the issues and verifying return-to-normal state.

With the root cause and impact identified, incident management finally crosses over to spending time on the actual solution. The questions in this phase are:

  • What will fix the issue?
  • Where are these changes to be made?
  • Who will make them?
  • How will we record them?
  • What side effects could there be?
  • When will we do this?
  • How will we know it is fixed?
  • Was it fixed?

Solution time is where we solve the incident rather than merely understanding it. Mean time to resolution (MTTR) is the key metric we use to measure the operational response to application anomalies. After deploying the fix and verifying return-to-normal state, we get to go home and sleep.

Deconstructing MTTR

MTTR originated in the hardware world as “mean time to repair”—the full time from error detection to hardware replacement and reinstatement into full service (e.g., swapping out a hard drive and rebuilding the data stored on it). In the software world, MTTR is the time from software running abnormally (an anomaly) to the time when the software has been verified as functioning normally.

Measuring the value of AIOps requires breaking MTTR into subset components. Different phases in deploying AIOps will improve different portions of MTTR. Tracking these subdivisions before and after deployment allows the value of AIOps to be justified throughout.
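
Using the subsidiary metrics defined across this series, one reasonable way to write the breakdown is:

MTTR = (MTTD + MTTK) + (MTTFix + MTTV)

Here the first bracket is problem time (detecting the anomaly, then knowing its root cause) and the second is solution time (fixing it, then verifying the return to normal).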

With this understanding and measurement of existing processes, the strategic adoption of AIOps can begin, which we discuss in part two of this series.

Is Your IT Workforce Ready For AIOps?

This is a copy of an original post on the Forbes blog.



The Strategic Brief:

AIOps will change the way organizations operate.

In the AIOps-enabled enterprise, where artificial intelligence and machine learning automate tasks to augment technology operations teams, businesses undergo a monumental shift that enables them to be more proactive, predictive and ultimately preemptive.




Along the journey to the AIOps-enabled enterprise, the skills needed in your ITOps, DevOps, and site reliability engineering (SRE) teams will also evolve, requiring skills in customization, integration, automation, and governance. Most organizations aren’t ready for this seismic shift, however. A recent survey of 6,000 IT professionals shows the vast majority of global enterprises have yet to start an AIOps strategy.

The Current State of Operations

Let’s examine how we got to today’s IT operations organization. Specifically, I mean the people monitoring and managing the production environment, whether or not they have “operations” in their title.

In the last decade, the drive to agile and DevOps solutions moved operations towards development, creating the new skill set requirement of release engineering (RelEng), which is responsible for automating application deployment and providing structure for the software development lifecycle (SDLC). This required connecting the dots across domains (server, network, database, frameworks, code dependencies, and so on), and began changing development and operations from I-shaped professionals (deeply skilled in one area) to T-shaped professionals (skilled in one area but also knowledgeable in other domains).

You may notice that RelEng focuses on SDLC tasks such as automating builds, tests and QA—essentially automating all the work for deploying an application into production. In the past, DevOps failed to pay equal attention to the operations effort needed during the production lifespan of an application. AIOps addresses this DevOps weakness by applying AI/ML to anomaly detection, root cause analysis, resolution and verification, and by driving automation of anomaly resolutions. This means the AIOps enterprise will require a different skill set from ITOps, DevOps and SRE professionals.

The New Skill Profile for AIOps

AIOps is reducing the shelf life of two operational skills: the Sherlock Holmes-esque investigative skill that is the heart of root cause analysis, and the experience-based knowledge that lives within an individual. Instead, AIOps will identify or short-list the root cause, and resolvable actions will be captured and automated where warranted. When a clear root cause is found and a matching automated resolution in place, AIOps will be able to address the issue without human interaction.

Similar to cloud services, AIOps will require skills in customization, integration, automation, and governance. While team members with specialist skills will still have value, AIOps will encourage learning and collaboration with other disciplines, and allow you to measure how IT capability and growth are helping to achieve a goal. This represents a shift from the I-shaped and T-shaped specialist to a full-fledged versatilist.

The AIOps professional is a cross-domain expert who uses domain-specific skills to control a progressively widening scope of coverage, and who is equally at ease communicating the technical and business impacts of an issue.

Capability Levels Track Transition to AIOps

To align your team with the AIOps profile, define an alternate career path for them. IT professionals may see their careers tied to a siloed technology certification, and consider time spent learning other domains as coming at the expense of their specialization. You can delineate an alternate path by assessing their current skills, setting goals for the level your enterprise requires, and then building training and incentive programs to transition them into the new skill set.

A simple, six-level scale (based loosely on Bloom’s taxonomy used in education to assess learning effectiveness) can be used for assessment and goal-setting. Each domain’s skills can be measured against the individual’s capability.

The Six Levels Of IT Capability

  1. Awareness: The most basic level; professionals are aware that the technology or practice is in use somewhere in your enterprise.
  2. Understanding: The ability to understand where the technology or practice is used in the enterprise, and which team to contact if anything needs to be done with it.
  3. Applying: Performing basic tasks to manage the technology or practice, with a standard operating procedure (SOP) providing guidance.
  4. Analyzing: Knowing how to view related measures in an application performance monitoring (APM) solution and describe the cross-domain integration present for the technology or practice.
  5. Automating: Defining, creating and deploying automated processes for the technology or practice, allowing automatic resolution of anomalies by AIOps.
  6. Architecture: Designing and enacting an architecture for new implementations of the technology or practice. There may be vendor or institutional certifications available at this level.

The above capability scale can be applied across specialized technologies and more general practice and soft-skill areas. The technologies you assess, which will depend on what is used in your enterprise, may include: AWS, Azure, containers, microservices, Kubernetes, databases, network, infrastructure hardware, embedded frameworks, cloud service providers, APM tools, management tools, and more.

In addition, you will need to add categories for non-technical areas (a sketch of the resulting assessment follows this list), including:

  • Sharing: to incent the capture of knowledge for automation
  • Security: while this may appear as a technology, security is also a process and a behaviour that overlaps with governance
  • Programming: assessing the ability to create automation scripts and actions, including knowledge of language and usage of APIs
  • Governance: understanding where the technology sits within industry regulation and best practices
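
One hypothetical way to capture the resulting assessment; the names, domains and target levels are examples only:

```python
# Capability levels 1-6 from the scale above, per person per domain.
targets = {"Kubernetes": 5, "databases": 4, "security": 3, "sharing": 4}

team = {
    "Asha":   {"Kubernetes": 3, "databases": 5, "security": 2, "sharing": 4},
    "Jordan": {"Kubernetes": 5, "databases": 2, "security": 3, "sharing": 2},
}

# Gaps show where training and incentives should focus during the
# transition from I-shaped specialist toward versatilist.
for person, levels in team.items():
    gaps = {d: targets[d] - levels.get(d, 1)
            for d in targets if targets[d] > levels.get(d, 1)}
    print(person, "->", gaps or "meets all targets")
```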

You can deploy AIOps without waiting for your skills transition to complete, as the technology provides significant benefits immediately. To realize the full value of AIOps, it’s essential to move your existing teams to a new skills profile. This transition can occur during your AIOps deployment. By using capability levels, goals and incentives, you’ll gain a clear path for growth, allowing teams to help your AIOps deployment succeed.

Augmenting operations teams via AIOps frees up time for team members. This time can be used to extend capabilities across domains and into the business, transforming professionals’ skills to fit the new AIOps profile. Just as the business organization evolved to support citizen technologists and citizen data scientists, IT must evolve to support citizen business evangelists and automation strategists.

Nine Essential Skillsets for Competitive Digital Transformation

This is a copy of an original post on the AppDynamics blog.



The Strategic Brief:

If you’re reading this, there’s a good chance you’re an Agent of Transformation ready to change the world. As your enterprise pivots towards AIOps, your team must accumulate the right skills to embrace digital transformation while innovating at scale.



[Photo: Street art in Cartagena, Colombia]

Large and midsize enterprises successful at competitive transformation have one characteristic in common: careful team-building around both soft and technical skills. Let’s examine how you should think about your digital transformation team (even though it may not be called that). Since there are many books on building agile teams, squads and dojos, this post will focus on the soft skill mix that a majority of IT executives say is the roadblock to successful competitive digital transformation.

Application creation is facing accelerating waves of change. The World Economic Forum asserts we are entering the fourth industrial revolution, even as the third chugs along. Surviving concurrent revolutions requires our digital transformation approach to be as agile as our development methodology. Your transformation must result in a digitally competitive enterprise. The skills needed can be broken into three categories, each with three sub-categories.

 

Skills to Survive

Consider the bare minimum set of skills required for DevOps projects to avoid failure. These fall into three general subcategories: organizational, business and technology.

Organizational

Organizational people line up the dominos for other participants to knock over. They ensure decisions are made and the work gets done as expected. These are skills or titles that DevOps practitioners will be well familiar with, including Scrum Master, Project Manager, Squad Leader, and Technical Architect. Without these skills, efforts tend to run over time and wander away from the original goals.

Business

Business people bring the reality check from the real world. They ensure that technical success will have business relevance, and that the business is ready for transformed business models and processes. Look for titles like Product Owner, Business Systems Expert, and Business Line Owner. As more digital natives enter your enterprise, expect a higher level of digital awareness and creativity from those bringing your business skills into the team.

Technology

Technology people build the complex clock and keep it ticking. Here you seek technology-specific skills such as TensorFlow, Kubernetes, or JavaScript that are needed by the specific architecture. On top of these siloed skills, look for general process experience as in DevOps, quality assurance, security, infrastructure, or integration.

These three groups are the essentials—the survival skills—for digital processes to exist, and thus are the minimum set needed for digital transformation. Any enterprise going through this transformation has these skillsets, in some shape or form, in its transformation teams. However, once your business transformation introduces artificial intelligence as part of the architecture, you will need to think differently about the skills needed for success.

Skills for Machine Learning

The machine learning (ML) statistical revolution is changing the world. To embrace this change, enterprises must engage ML in one of two main ways: as a black box encapsulated within a vendor’s product, or custom-built for competitive advantage.

Application Performance Management (APM) is a good example of the black box approach, where AIOps or Cognitive Services are delivered by your vendor and the skills listed under machine learning are not required.

When encapsulated, the needed skills are housed within the software vendor rather than in your organization, and the vendor will select the optimal algorithms and training frameworks for each type of data and specific use case. For targeted solutions like DevOps, the encapsulated approach is best.

However, you may be surprised by some of the skills required for your business to build out a data science team and gain competitive advantage from machine learning. Research from Accenture and MIT broke the skills surrounding artificial intelligence into three categories: trainers, explainers and sustainers. (The Jobs That Artificial Intelligence Will Create)

Trainers

Trainers are what we see commonly in AI today. They match models and frameworks to specific tasks, and identify and label training data. Trainers help models look beyond the literal into areas such as how to mimic human behavior, whether in speech or driving reactions. In London, a team is trying to teach chatbots about irony and sarcasm so they can interact with humans more effectively.

Explainers

As AI gets more advanced, the layers of neural networks creating answers will exceed simple explanations. Explainers will provide non-technical explanations of how the AI algorithms interpret inputs and how conclusions are reached. This will be essential to attain compliance, or to address legal concerns about bias in the machine. If you create AI to approve mortgages, for instance, how will you establish the AI is not inflicting bias based on gender or creed? The explainer will play a necessary role.

Sustainers

Someone needs to ensure the AI systems are operating as designed ethically, financially and effectively. The sustainers will monitor for and react to unintended outcomes from the “black box.” If the AI is selecting inventory and setting prices, a sustainer will ensure there is no resulting price-gouging on consumer necessities—thus avoiding customer revolt.

The machine learning marketplace is the opposite of the gig economy. In the gig economy, skills are a commodity, like driving a vehicle. You can swap cars and still be a skilled driver. In contrast, the needed skills for ML may change with every new type of data. When your competitive digital transformation seeks customer facial recognition as shoppers walk in the store, you will likely apply TensorFlow and hire for those skills. Next, the business may want to recommend adjacent products to a customer. The optimal algorithm will be a decision tree, and now you’ll need to hire for that skill. Later you may need email text inference, which requires skills in text tokenizing and stemming before the email data can be fed into TensorFlow. You end up using different languages and frameworks for each new use case. Even within a single use case, the optimal algorithm may change over time as particular frameworks improve for specific tasks.

For the technical hire, you should qualify on aptitude rather than skills. Find the right person, then train them. The apprenticeship approach of giving workers time to learn shows you value your people, which enhances loyalty. You either accept apprenticeship as a cost, or you will need to hire an army of individuals. With AI/ML, you will initially hire the trainers that select and code models. As you do, consider who will grow into the explainers and sustainers.

Regardless of whether your transformation includes machine learning, there are additional skills you’ll need to attain competitive business transformation.

Skills to Compete

Now we are getting into a different mind space altogether. Inclusiveness and variety are now stated goals for leading competitive companies. News headlines offer multiple examples where applications failed embarrassingly due to a lack of variety, digital awareness and experience in the transformation team. Even an automatic soap dispenser can have bias if it delivers foam to light-skinned hands but not to the hands of people of color. In this real-world example, the dispenser registered light reflected off Caucasian skin, but the Fitzpatrick scale tells us a stronger light is needed to trigger the sensor for darker skin. A broader team or testing regimen would have identified the problem before release. Similarly, Amazon immediately cancelled a machine learning project once aware of the inherent bias of its trained model. Amazon, hoping to better prioritise future applicants, trained an ML model with resumes from previously successful candidates. Unfortunately, the trained model kept selecting males, because the successful resumes of the past decade had come predominantly from men.

For competitive digital transformation, add these three new groups of skills to your requirements:

Culture

Firstly, look at your overall culture and diversity. Without considering culture, you may easily leave your reputation in tatters as in the examples above. Seek out variety in gender. Combine millennials with baby boomers and mix digital natives with digital immigrants. Even variation in birthplace and societal culture creates the variety of viewpoints needed to ward off potential bias. Hearing different voices will help identify gaps in testing criteria and in training data sets.

Dexterity

The second set of skills leads to “digital dexterity.” Remember, you want the benefits of digital transformation to be experienced by the largest number of people across your organization. This effort involves evangelizing the changes to the entire organization through training and communication. Ensure that all those using the technology feel completely comfortable and skilled with it. Identify an ambassador to the executive team, someone outside the regular reporting structure. Look for a person on the fast path to leadership—maybe recently out of college—and assign them a mentor from the executive level. This ambassador will communicate important achievements and crucial requirements when needed. Also, look for an internal VC. Sometimes the executive sponsor of the transformation is not the same person as the budgetary sponsor. Ensure someone has the skills to build a VC-like pitch for continued funding.

Experience

Today’s app-driven world makes User Experience (UX) and Customer Experience (CX) critical. These terms are not equivalent: UX is an app category focusing on human interaction with technology, while CX goes beyond the application to the full interaction a human has with your organization. Are people walking in a door, or onto a factory floor, or calling via phone to reach your digitally transformed technology? What happens after they exit the website or application? Owning these experiences is as critical to successful competitive digital transformation as understanding the experiences offered by your competitors. It’s essential to correlate user and customer experience to application performance and business impact.

The best way to understand the strengths of your team for competitive digital transformation is to create a simple table of skills mentioned above as rows, and team candidates as columns. As you build out the team, check off the skills. In essence, any skill not provided by the team will need to be provided by you as the Agent of Transformation.