Like any other success story, when it comes to performance testing our story mixes people and processes. Different companies are built upon different cultures, so they implement various mixtures of people and processes – experienced teams with dedicated performance testers might use only guidelines as a process, while more agile teams, which rotate the performance tester role between team members, would probably need more detailed processes and checklists, so that the entire performance testing flow stays consistent from one sprint to another and the results offer the same level of confidence.
The next few lines will focus more on the new Agile-like workflow, but will also highlight the strong base of the entire process, which has been built over many years and involved many skilled and experienced people.
In many cases, the performance testing process would have looked like the one below: a performance testing team (perfqa team) that gave the final sign-off before a product went live:
Even though the perfqa team would have been involved in the early stages of a product, through the perfqa kick-off, we were not part of the sprints, not sitting next to the developers, and not even led by the same delivery manager that was driving the product implementation and the development team. All of the above generated a few inconsistencies with the agile workflow that was guiding the development teams:
At some point in time, the perfqa process started to change and tried to solve the above issues. As such, the performance champions concept was born. This whole new thing was not something extraordinary; it was more like a community whose members were developers or testers from different delivery teams. These performance champions were now directly involved in the perfqa workflow, thus bringing performance testing and the responsibility for it closer to the delivery teams, whilst the original perfqa team was mentoring, coaching and training the performance champions.
As a supporter – and to some extent a driver – of the performance champions concept, I will present a few of my thoughts the way Clint Eastwood would have presented them:
As development, functional testing, project management, business analysis and DevOps are all part of the team, adding performance testing to this would close the loop, making the delivery team fully responsible for the product they are delivering – which is also empowering for people.
Performance testing can now be part of sprint planning and can be managed in whatever way suits each team best.
New challenges are now available, which will help people expand their field of expertise – performance testing strategies, tools, tuning and monitoring.
It is worth considering the difference between an expert in a certain field (e.g. a performance testing expert) and a more versatile person who is responsible for different technologies (e.g. development and performance testing, or functional testing and performance testing).
As performance testing now resides within each team’s responsibility, they will eventually adapt it to their own needs and beliefs – which might not sound like an ugly thing, but at the end of the day we would all like to have a consistent understanding of the performance of a component (100 transactions per second (tps) should mean the same for all of us) and we would all want an integrated, performing product, not a bunch of great-performing components.
Performance testing environments will now need to be maintained by different parts of the organization.
The supporters of the performance champions concept/community would need to at least try to change the above – of course, we won’t try to change what is already working fine, The Good.
The expertise can consist of the following:
Performance testing environment: Environment preparations should be addressed as early as possible, especially if there is no dedicated owner. The team using the environment should allocate time for refreshing the component to be tested and its dependencies. The performance environment might be a scaled-down replica of production or a disaster recovery environment.
The testing model: Usually we suggest using the Pareto principle, which basically translates to: 20% of the application flows generate 80% of the load. This is the part of the application we should focus on from a performance testing perspective. Modeling the real-life user flows (e.g. at any given time 10% of users are logging in, 20% are loading the home page, etc.) is quite important for understanding the performance behavior of an application that supports ~60k tps at peak – a minimal sketch of such a weighted flow model follows this list.
The testing strategy: Load, capacity, scalability and soak testing are all critical, together with the never-dying question: “How much can it cope with?” This flow should naturally start with load testing and understanding the application performance under normal load, then progress to a capacity test, which should drive application tuning (like GC tuning) and reveal the application’s breaking point and, more importantly, the cause of that breaking point (is it CPU starvation, is it memory, is it a thread pool, is it a bad DB query?). Then invest in scalability testing, whether it is vertical scaling as in stock exchange systems (someone working in the stock exchange industry presented their way of scaling the infrastructure – they would always buy the biggest and most powerful servers on the market) or horizontal scaling, which again should trigger scalability optimization, as most applications will not scale 100% with the addition of an extra server (I have seen most of them scaling at ~80%, so if one server copes with ~100 tps, two would cope with ~180 tps – the arithmetic is sketched after this list). Nevertheless, an 8-hour soak test would probably reveal stability issues and possible memory leaks (even small ones).
Component testing vs integration testing: Usually time limitations would push teams to skip component-level performance testing, but stubbing all the dependencies and load testing the component in isolation might reveal performance issues in the early stages of development – a minimal dependency stub is sketched at the end of this list.
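To make the testing model more concrete, below is a minimal sketch of the kind of weighted user-flow model described above, written with Locust purely as an example tool – the tool choice, the endpoints and the exact weights are my own illustrative assumptions, not part of the original process.

    # A minimal sketch of a weighted user-flow model, using Locust as an
    # example load tool. Endpoints, credentials and weights are illustrative.
    from locust import HttpUser, task, between

    class TypicalUser(HttpUser):
        # Simulated think time between user actions
        wait_time = between(1, 3)

        @task(1)   # ~10% of the traffic: users logging in
        def login(self):
            self.client.post("/login", json={"user": "demo", "password": "demo"})

        @task(2)   # ~20% of the traffic: loading the home page
        def home(self):
            self.client.get("/")

        @task(7)   # the remaining ~70%: other browsing flows
        def browse(self):
            self.client.get("/products")

Assuming the file is saved as model.py, running "locust -f model.py --host <performance-environment-url>" would drive the load against the test environment; the weights can then be adjusted as the real traffic model evolves.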
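The ~80% horizontal-scaling observation from the testing strategy can also be turned into a quick back-of-the-envelope estimate. The snippet below only illustrates that arithmetic (assuming each extra node contributes roughly 80% of a single node’s throughput); it is not a general scaling law.

    # Back-of-the-envelope model of the ~80% horizontal-scaling figure:
    # every added server contributes roughly `efficiency` of one server's throughput.
    def expected_tps(single_node_tps: float, nodes: int, efficiency: float = 0.8) -> float:
        return single_node_tps * (1 + efficiency * (nodes - 1))

    if __name__ == "__main__":
        for n in range(1, 5):
            print(f"{n} server(s): ~{expected_tps(100, n):.0f} tps")
        # 1 server(s): ~100 tps
        # 2 server(s): ~180 tps   <- the 100 tps -> 180 tps example from the text
        # 3 server(s): ~260 tps
        # 4 server(s): ~340 tps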
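Finally, for component testing, a dependency stub does not need to be elaborate. The sketch below uses only the Python standard library to stand in for a downstream dependency; the port, response payload and simulated latency are made-up values chosen for illustration.

    # A minimal sketch of stubbing a downstream dependency for component-level
    # load testing. Port, payload and latency are illustrative assumptions.
    import json
    import time
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class DependencyStub(BaseHTTPRequestHandler):
        def do_GET(self):
            time.sleep(0.05)                      # emulate the dependency's typical latency
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, fmt, *args):        # keep the console quiet under load
            pass

    if __name__ == "__main__":
        # Point the component under test at http://localhost:8081 instead of the real dependency.
        HTTPServer(("localhost", 8081), DependencyStub).serve_forever()

Keeping the stub’s latency close to the real dependency’s typical response time makes the component-level results easier to compare with later integration tests.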