If you’ve recently attended a webinar on artificial intelligence (AI) or machine learning (ML), you’ve probably heard that these technologies are sweeping the globe, and that, due to AI, we’ll be able to point software at a website, press “go,” and obtain performance test results.

A healthy dose of skepticism goes a long way with most software claims. Still, the question remains: Are there AI/ML applications that actually affect performance engineering? Are any of them being used successfully in the wild? And, if so, what are they and how might we replicate them in our own work?

Here are six ways AI and machine learning are altering performance engineering, presented as questions that AI and machine learning can answer.

Where is our performance in production headed?

It’s easy to become overwhelmed by the volume of data in production logs. Even in the aggregate, it’s hard to tell the difference between an API response and an image file or other piece of static content. As a result, the average time it takes to serve requests can be deceptive. A raw list of service delays tells an IT manager something, but not much. The median time is likely to be more instructive. And the debate over which summary statistic to trust goes on.
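To make the average-versus-median point concrete, here is a small Python sketch with made-up timings; the numbers are purely illustrative, not from any real log.

```python
# Why the mean can mislead: a few very slow requests, or a flood of
# fast static-asset hits, pull the average away from what a typical
# user experiences. Timings in milliseconds are invented for the demo.
import statistics

timings = [120, 135, 128, 140, 132, 126, 138, 3200, 2900]  # two slow outliers
print("mean:  ", round(statistics.mean(timings), 1), "ms")  # ~780 ms
print("median:", statistics.median(timings), "ms")          # 135 ms
```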

On the other hand, is a snapshot of today’s production performance as useful as knowing where we’re headed? The most basic use of machine learning is to forecast trends from data sources. Trends can help executives determine whether a situation is under control, improving, or deteriorating, and how quickly. A machine learning chart might forecast performance problems, allowing the team to intervene. An ounce of prevention is worth a pound of cure, as Benjamin Franklin put it.
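At its simplest, that kind of forecast can be a straight line fitted to a daily aggregate. The sketch below is a minimal Python illustration, assuming a made-up series of daily p95 response times; real tools use far richer models, but the idea is the same.

```python
# Fit a least-squares trend line to daily p95 latency and project it
# forward. If the projection crosses an SLO threshold, intervene now.
import numpy as np

daily_p95_ms = np.array([410, 415, 422, 430, 441, 455, 468])  # illustrative
days = np.arange(len(daily_p95_ms))

slope, intercept = np.polyfit(days, daily_p95_ms, deg=1)
forecast_day = len(daily_p95_ms) + 14            # two weeks from now
forecast = slope * forecast_day + intercept

print(f"trend: {slope:+.1f} ms/day")
print(f"projected p95 in 14 days: {forecast:.0f} ms")
```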

It may also be feasible to test the fix before releasing it to production. Being able to test a fix at all is good; being able to do it without affecting a single live user is better.

What are our users doing?

In the past, a lot of performance testing was based on guesswork. Testers make educated guesses about what users will do, often aided by a log. If the log happens to be arranged in a useful way, the tester may be able to write a Ruby script to find out which activities occur most often, how frequently, and for how long. Building enough of these tools to get a complete picture of what is happening, or, more likely, what was happening yesterday, can take a long time, and the tools go stale as soon as new features and APIs with new URLs are rolled out.
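For the sake of illustration, here is the kind of throwaway script that paragraph describes, sketched in Python rather than Ruby; the log format and file name are assumptions.

```python
# Count how often each endpoint appears in an access log whose request
# lines look like: ... "GET /cart/add?id=42 HTTP/1.1" ...
from collections import Counter
import re

PATTERN = re.compile(r'"(?:GET|POST|PUT|DELETE) (\S+)')

counts = Counter()
with open("access.log") as log:
    for line in log:
        match = PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1

for path, hits in counts.most_common(10):
    print(f"{hits:8d}  {path}")
```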

Distributed search-engine tools flip this approach by employing unsupervised machine learning. You put all of your logs on a shared network drive, the tool indexes them, and then you can search in something close to plain English. The natural-language processing (NLP) component works out what you are asking and responds.

I’ve used these kinds of technologies to track down individual customer errors in production, as well as to determine how often a particular action occurs over a given time period. Performance engineers can now put today’s questions to something like a Google-style search engine, often with just a few keystrokes.

The next stage beyond a search engine is an ML tool that combs through the logs itself, perhaps guided by a few keywords or groupings. That may well be in the cards.

How do we make realistic data?

Traditional performance tools require the tester to perform some actions through the user interface, such as logging in, searching for a product, clicking on the product, adding it to the cart, and logging out. The tool records the packets and then plays them back.

However, many small details in those packets change the next time the flow runs. For a genuine simulation, the UserID, SessionID, security tokens, ProductID, CacheIDs, and any unique or timestamp-based codes need to be updated. According to Gil Kedem, product manager for Micro Focus’ LoadRunner family of products, the latest generation of load test tools can infer meaning from changing fields and adjust the values sent back to match the ones that are changing.
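To see what that correlation actually involves, here is a hand-rolled Python sketch of the idea: capture a dynamic value from one response and substitute it into the next request instead of replaying the recorded value. The endpoints, the JSON field, and the X-Session-Id header are hypothetical.

```python
import re
import requests

BASE = "https://shop.example.com"   # hypothetical system under test

login = requests.post(f"{BASE}/login", data={"user": "test", "password": "secret"})
# Pull the fresh session ID out of the response rather than replaying
# the one captured at recording time.
session_id = re.search(r'"sessionId"\s*:\s*"([^"]+)"', login.text).group(1)

search = requests.get(
    f"{BASE}/search",
    params={"q": "headphones"},
    headers={"X-Session-Id": session_id},   # substituted, not hard-coded
)
print(search.status_code)
```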

What does our data mean?

Experienced performance testers look at percentiles rather than averages. For example, the worst 10% of response times might be caused by a defect that puts the system into a state where it always performs slowly. It might also be a dial-up customer in Alaska trying to download a large file.

One way to find out is a histogram: a picture of the distribution of response times for a feature, function, or system. Another is to determine whether the slow performance happens at peak load, or during the transition period when the system is trying to add capacity with cloud servers that are not yet online.
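As a small illustration of both techniques, the Python sketch below computes percentiles and a rough text histogram over a column of response times; the file name is an assumption.

```python
import numpy as np

response_ms = np.loadtxt("response_times_ms.csv")   # one value per line

p50, p90, p99 = np.percentile(response_ms, [50, 90, 99])
print(f"p50={p50:.0f} ms  p90={p90:.0f} ms  p99={p99:.0f} ms")

# A coarse text histogram: one hump, or a second hump hiding in the tail?
counts, edges = np.histogram(response_ms, bins=10)
for count, left, right in zip(counts, edges[:-1], edges[1:]):
    bar = "#" * int(count * 60 / counts.max())
    print(f"{left:7.0f}-{right:7.0f} ms | {bar}")
```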

While this work is the bread and butter of performance analysis, it can be time-consuming and difficult, according to Kedem. Unsupervised ML could help answer those questions by finding the connections between time, usage, frequency, and performance. More immediately, Kedem envisions tools that help us filter out the rare, slow cases so we can decide whether they are noise or worth investigating.
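One way an unsupervised model can do that filtering is outlier detection. The sketch below uses scikit-learn's IsolationForest to flag the rare, slow requests worth a human look; the CSV export and its column names are assumptions, not any particular tool's output.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

requests_df = pd.read_csv("request_metrics.csv")    # hypothetical export
features = requests_df[["response_ms", "payload_kb", "concurrent_users"]]

model = IsolationForest(contamination=0.01, random_state=42)
requests_df["outlier"] = model.fit_predict(features)   # -1 marks outliers

suspects = requests_df[requests_df["outlier"] == -1]
print(suspects.sort_values("response_ms", ascending=False).head(10))
```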

Did we really fix it?

Unsupervised machine learning can identify which factors in the production environment contributed to a failure. Capacity and performance engineers can use that information to construct test scenarios that mimic the situation. You can run the new code under the scenario and watch it pass, and, if you choose, run the old code under the same scenario and watch it fail. This two-sided check verifies that a change genuinely addresses the problem it was meant to fix.
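A bare-bones version of that two-sided check might look like the Python sketch below, which replays the same scenario against the old and new deployments and compares a tail-latency threshold; the URLs, endpoint, and threshold are stand-ins, not anyone's real setup.

```python
import statistics
import time
import requests

def run_scenario(base_url: str, iterations: int = 200) -> list[float]:
    """Replay a simplified failure scenario and return timings in ms."""
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        requests.get(f"{base_url}/checkout", params={"cart": "replay-1234"})
        timings.append((time.perf_counter() - start) * 1000)
    return timings

def p95(samples: list[float]) -> float:
    return statistics.quantiles(samples, n=20)[18]   # 95th percentile

THRESHOLD_P95_MS = 800
old_p95 = p95(run_scenario("https://old.example.com"))
new_p95 = p95(run_scenario("https://new.example.com"))

assert old_p95 > THRESHOLD_P95_MS, "old build should reproduce the slow path"
assert new_p95 <= THRESHOLD_P95_MS, "new build should stay under the SLO"
print(f"old p95: {old_p95:.0f} ms, new p95: {new_p95:.0f} ms")
```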

I was doubtful, but Andreas Grabner, a DevOps activist at Dynatrace, told me that these kinds of capabilities are being built into performance test and analysis tools. Grabner, an active member of the performance community, is best known for the “performance test roadshow” he took to a dozen locations before COVID, running performance test analyses in front of an audience.

Given an invitation and the necessary permissions, he could deliver actionable capacity and system insights in a couple of hours.

Is our system resilient when subsystems fail?

Many of us are aware of Netflix’s Chaos Monkey tool, which deliberately shuts down production instances on the premise that every system should be redundant, and that the redundancy itself should be redundant. If that holds, shutting down any single instance should have no effect on performance.

By monitoring performance while these experiments run, management can tell whether the redundancy is insufficient. The resulting outage may last only five seconds, but it gives Netflix engineers the information they need to fix the problem, so that when that subsystem really does go down, traffic can route around it.
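A toy experiment in that spirit might look like this Python sketch, which stops one container at random from a redundant pool while watching latency on a health endpoint. The container names, URL, and docker-on-the-host setup are all assumptions for illustration, and nothing like this should be pointed at production casually.

```python
import random
import subprocess
import time
import requests

POOL = ["web-1", "web-2", "web-3"]             # redundant instances
CHECK_URL = "https://shop.example.com/health"  # hypothetical endpoint

victim = random.choice(POOL)
print(f"stopping {victim} ...")
subprocess.run(["docker", "stop", victim], check=True)

# If the redundancy really is redundant, latency should barely move.
for _ in range(30):
    start = time.perf_counter()
    status = requests.get(CHECK_URL, timeout=5).status_code
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{status} in {elapsed_ms:.0f} ms")
    time.sleep(1)

subprocess.run(["docker", "start", victim], check=True)
```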

Picking systems at random and destroying them is, arguably, only a rudimentary form of AI. When most of us say “AI,” we mean more than an “if” statement; we mean the ability to learn. By noting what fails and what succeeds, a Chaos Monkey-style tool could devise more intricate tactics for breaking things, a kind of genuine AI that could improve the customer experience today.

