Issue 50

Hyper-productive Teams

Mădălin Ilie
Cluj Java Discipline Lead
@Endava



MANAGEMENT

A hyper-productive team is the dream of every team leader. How cool would it be to apply a 10-step algorithm and create a team that is 10x more productive? Or to tweak the team's communication a little and reach productivity nirvana? As we all know, things are not that simple. This paper details ways in which you can create a balanced and motivated team that knows its strengths and weaknesses.

Before we start, I would like to clarify a few things in order to set the right expectations. As mentioned above, you will not find a recipe for getting to a hyper-productive team in 10 steps. If I possessed such an algorithm, I would be more famous than I am now. And, most importantly, this paper targets leaders, not managers. (If you have doubts about the difference between these two concepts, please see the visual representation below.)

I do not know if you remember the TV series Lost, which was very popular some 8-10 years ago. Each episode started with the following disclaimer: "Viewer discretion is advised". This means that you are about to watch something that some people may object to, or may find disturbing or offensive. You make an informed decision about whether to continue watching the show.

The same holds for the current paper: "Reader discretion is advised". This paper contains explicit language like performance, productivity, lack of skills, metrics, etc., and some people may consider these topics to be from the "we don't talk about this" category.

When we talk about quality in software projects, we can tackle the issue from different perspectives. We can talk about process quality (are you Agile or not?), testing quality, code quality, design and architecture quality. However, we rarely speak about people quality. This is because, when speaking about individual performance, things get very personal. It is much simpler to discuss abstract and impersonal topics like the ones mentioned above, and very hard to have concrete discussions about individual skills.

One of the most revealing books I have read about people's performance in creative fields, such as software development, is Daniel H. Pink's Drive. In a nutshell, the book contrasts what Pink calls Motivation 2.0, aka carrots and sticks, the simplest and most widely used way of motivating people (you do something good, you get a reward; you do something bad, you are punished), with Motivation 3.0, which is more complex and is based on three concepts: Mastery, Autonomy and Purpose. For creative work, beyond a certain point, no matter how big the reward, you will not get results that match it; on the contrary, increasing the reward tends to have negative effects on the outcome. This is why the focus should be on creating a working environment that favors collaboration, acknowledges people's expertise, gives people space to solve tasks autonomously and offers a higher meaning to the work they do. Although the book is very resourceful in offering concrete examples of why Motivation 3.0 works and why Motivation 2.0 is deprecated, it does not say much about how to create a Motivation 3.0 work environment.

When speaking about team performance, there is also the Bus Factor: the number of people on your team who need to be "run over by a bus" before the project is in trouble. The closer the factor is to 1, the greater the risk of not delivering the project on time.
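
To make the Bus Factor concrete, here is a minimal Python sketch of mine (not a standard tool) that lists the files only ever touched by a single author, the files that push the factor toward 1. It assumes it is run from inside a Git working copy:

    import subprocess
    from collections import defaultdict

    def authors_per_file():
        # "AUTHOR:" marker lines separate each commit's file list.
        out = subprocess.run(
            ["git", "log", "--name-only", "--pretty=format:AUTHOR:%an"],
            capture_output=True, text=True, check=True,
        ).stdout
        authors = defaultdict(set)
        current = None
        for line in out.splitlines():
            if line.startswith("AUTHOR:"):
                current = line[len("AUTHOR:"):]
            elif line.strip():
                authors[line].add(current)
        return authors

    # Files known by a single person push the Bus Factor toward 1.
    for path, people in authors_per_file().items():
        if len(people) == 1:
            print(f"{path}: only ever touched by {people.pop()}")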

Next, we will discuss how to create a friendly Motivation 3.0 work environment and get to a Bus Factor far away from 1. However, instead of relying on subjective perceptions and heuristics, we will discuss objective, mechanics-based evidence on how to create it.

Last year, at QCon, I attended a very interesting talk called "Treat your code as a crime scene". The author, Adam Tornhill, says that we should treat the version control system (git, svn, tfs) as a crime scene for the code itself and get valuable information regarding the code's behavior. This information can be used to take different decisions like: what is the best area to start refactoring the code base, where is the codebase heading or how is the team influencing the code design. The entire presentation strictly targeted code behavior. However, I also think that the version control system holds valuable information related to people's behavior.
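
To give a flavor of what such mining looks like, here is a minimal Python sketch that counts commits per author, assuming it is run from inside a Git working copy; the 4-week window mirrors the examples used throughout this paper:

    import subprocess
    from collections import Counter

    def commits_per_author(since="4 weeks ago"):
        # One author name per commit, one per line.
        out = subprocess.run(
            ["git", "log", f"--since={since}", "--pretty=format:%an"],
            capture_output=True, text=True, check=True,
        ).stdout
        return Counter(line for line in out.splitlines() if line)

    for author, count in commits_per_author().most_common():
        print(f"{author}: {count} commits")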

Furthermore, we have many systems (Jira, SonarQube, Crucible, Jenkins) that produce valuable metrics. If we gather these metrics in one place, we will have even more valuable and more accurate information about people's skills and behavior.

This paper focuses on how we can correlate these metrics and identify team members' dysfunctions so that we can steer people's personal development in the right direction. I have called these dysfunctions syndromes, so that I can present them in a "clinical", easy-to-track format.

The Runaway Syndrome or How to Identify Basic Gaps

Symptoms:

Diagnosis:

Let us take an example to see how important context is. Consider unit testing and a fictional team called the Transformers.

Identifying unit tests in the version control system can easily be done through naming conventions: test classes either start or end with "Test". Mining the version control system, we discover that Bumblebee wrote 6 unit tests over the past 4 weeks. What do you say? Is that good or bad?

As already mentioned, context is essential. At first sight we might think that 6 unit tests is far too few for a 4-week period; that is roughly 1 test every 3 days. But what if the rest of the team members did not write any tests? That would make Bumblebee a unit testing champion!
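
A sketch of how such a query might look, assuming a Git repository and the Java-style "Test" naming convention mentioned above:

    import subprocess
    from collections import defaultdict

    def test_files_per_author(since="4 weeks ago"):
        out = subprocess.run(
            ["git", "log", f"--since={since}", "--name-only",
             "--pretty=format:AUTHOR:%an"],
            capture_output=True, text=True, check=True,
        ).stdout
        touched = defaultdict(set)
        author = None
        for line in out.splitlines():
            if line.startswith("AUTHOR:"):
                author = line[len("AUTHOR:"):]
            elif line.strip():
                # Test classes start or end with "Test" by convention.
                base = line.rsplit("/", 1)[-1].split(".")[0]
                if base.startswith("Test") or base.endswith("Test"):
                    touched[author].add(line)
        return {a: len(files) for a, files in touched.items()}

    print(test_files_per_author())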

The same principle applies to CSS, JavaScript, SQL, configuration scripts or any other specific activity.

Going back to the Bus Factor, if the entire SQL code is written by a single person, what do you think will happen if that person leaves the team?

The same principle applies to even more complicated interactions like component specialization or business areas.

I Love My Job Syndrome or Monitoring Productivity

Metrics monitoring does not always have to search for gaps. We can also focus on positive areas, as this syndrome does.

Symptoms:

Diagnosis:

Let us say that Bumblebee wrote 200 lines of code (LOC) over the past 4 weeks. Can he be considered a productive team member or not? We have already learned our lesson: everything must be put in context. If the rest of the team members added 700 LOC, then Bumblebee is not in such a good position anymore.

Or is he? What if the rest of the team members only added boilerplate code and Bumblebee's 200 LOC are pure business logic?

But how can we measure this? Quite simply: we can query code quality monitoring systems like SonarQube, which offer this kind of information. So we add this to the diagnosis section as well.
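
Here is a minimal sketch of counting the LOC each author added, with a pointer (in the comments) to how component-level data from SonarQube could be layered on top; the SonarQube endpoint and metric names are assumptions that may vary by version:

    import subprocess
    from collections import Counter

    def added_loc_per_author(since="4 weeks ago"):
        # --numstat prints "added<TAB>deleted<TAB>path" per file per commit.
        out = subprocess.run(
            ["git", "log", f"--since={since}", "--numstat",
             "--pretty=format:AUTHOR:%an"],
            capture_output=True, text=True, check=True,
        ).stdout
        added = Counter()
        author = None
        for line in out.splitlines():
            if line.startswith("AUTHOR:"):
                author = line[len("AUTHOR:"):]
            else:
                parts = line.split("\t")
                if len(parts) == 3 and parts[0].isdigit():
                    added[author] += int(parts[0])
        return added

    # To tell boilerplate apart from business logic, one could combine this
    # with component-level metrics from SonarQube, e.g. (names assumed):
    #   GET /api/measures/component?component=<key>&metricKeys=ncloc,complexity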

The Perfectionist Syndrome or Monitoring Refactoring

Symptoms:

Diagnosis:

If we take Bumblebee's case again, let us say he is a huge fan of refactoring. Mining the version control system, we discover that the majority of his commits over the past 4 weeks involved significant code refactoring. Do you think this is a good thing for the project? We might say yes: through these refactoring activities, code quality increased and the code became easier to extend and understand. But, again, let us look at the entire context. What if, over the past 4 weeks, the number of bugs tripled, and this is one of the main causes? Do we have enough automated tests to validate such refactoring activities? Do we have enough business knowledge in the affected areas? Context is essential!
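
A rough heuristic for spotting this pattern is to flag commits whose messages mention refactoring and relate them to each author's total; the keyword list below is an assumption:

    import subprocess
    from collections import Counter

    REFACTOR_WORDS = ("refactor", "cleanup", "restructure")  # assumed keywords

    def refactoring_ratio(since="4 weeks ago"):
        out = subprocess.run(
            ["git", "log", f"--since={since}", "--pretty=format:%an|%s"],
            capture_output=True, text=True, check=True,
        ).stdout
        total, refactoring = Counter(), Counter()
        for line in out.splitlines():
            author, _, subject = line.partition("|")
            total[author] += 1
            if any(w in subject.lower() for w in REFACTOR_WORDS):
                refactoring[author] += 1
        # Share of each author's commits that look like refactoring work.
        return {a: refactoring[a] / total[a] for a in total}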

The I Know Better Syndrome or Monitoring Code Quality

Symptoms:

Diagnosis:

Let us assume that, over the past 2 weeks, Bumblebee received 20 defects in Crucible and 40 violations in SonarQube for the code he committed, while the rest of the team members received none. Obviously, this does not look good for Bumblebee. However, considering the context, what if the reason is that everyone else is writing boilerplate code and the code written by Bumblebee is pure business logic? At least Bumblebee does more.
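
Violations per author can be pulled from SonarQube's issue search API. Below is a hedged sketch using Python's requests library; the server URL, project key and exact parameter names are assumptions and differ across SonarQube versions:

    import requests

    SONAR_URL = "https://sonar.example.com"   # assumed server URL
    PROJECT_KEY = "transformers"              # assumed project key

    def violations_for_author(author, token):
        # Parameter names (componentKeys, author) may vary by version.
        resp = requests.get(
            f"{SONAR_URL}/api/issues/search",
            params={"componentKeys": PROJECT_KEY, "author": author, "ps": 1},
            auth=(token, ""),
        )
        resp.raise_for_status()
        return resp.json()["total"]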

I Don't Do That Syndrome or Reducing Complacency

Good is the biggest enemy of great. The comfort zone is nice and warm, and it is very hard for many people to get out of it and try new things.

Symptoms:

Diagnosis:

If Bumblebee does all the nasty merges inside the team and an evil Decepticon eliminates Bumblebee, the project will be in big trouble. The same holds for similar activities: deployment, performance testing, using specific tools, etc.
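
Merge commits are easy to attribute from Git history, so this particular gap is cheap to detect:

    import subprocess
    from collections import Counter

    def merges_per_author(since="4 weeks ago"):
        # --merges keeps only merge commits; %an prints the author name.
        out = subprocess.run(
            ["git", "log", "--merges", f"--since={since}",
             "--pretty=format:%an"],
            capture_output=True, text=True, check=True,
        ).stdout
        return Counter(out.splitlines())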

I'm Too Good to Be True Syndrome or Monitoring Over-Confidence

Symptoms:

Diagnosis:

Some examples of over-confidence:

This type of behavior is not very beneficial for the team. You should identify it and steer the team toward eliminating it.

The We Are Agile Syndrome or Monitoring Commitment

Symptoms:

Diagnosis:

Let us assume that Transformers delivered 6 story points in the current sprint. How does this sound?

A manager for whom 6 is just a (small) number might think that the team delivered too little. After all, the other team delivered 25 story points in their last sprint. We all know that the whole point of story points is that they cannot be compared across teams. But the manager might also be right. What if the Transformers constantly deliver 20 story points, and in this sprint they delivered just 6? There must be a root cause: impediments, lack of knowledge, etc. Again, context is key.
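
The useful signal is therefore the deviation from the team's own history, not the absolute number. A trivial sketch (pulling per-sprint velocities from Jira's REST API is also possible, but the story-point field id varies per installation, so the values are passed in here as plain numbers):

    def velocity_alert(history, current, drop_threshold=0.5):
        # history: story points delivered in past sprints, e.g. [20, 21, 19]
        if not history:
            return False
        average = sum(history) / len(history)
        # Flag sprints that deliver less than half the usual velocity.
        return current < average * drop_threshold

    print(velocity_alert([20, 21, 19], 6))  # True: worth a root-cause talk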

The We Are Agile Syndrome or Monitoring Estimation Accuracy

Symptoms:

Diagnosis:

Few teams look retrospectively at the accuracy of their estimates, especially since story points are an abstract metric. Maybe it is not always worth detailing why one user story got 3 story points and another 5, but I think it is important to aim for a certain predictability.

Let us assume we have 2 user stories, both estimated at 5 story points. For the first user story we modified 2 DB scripts, added 3 CSS classes and 1 Java class, while for the second we just added an IF statement. Can they be compared? Maybe that IF statement required a very thorough investigation, comparable to the effort of implementing the first story, but even that hides something: the team does not have enough business or technical knowledge, or the team member who implemented the task did not get timely help from the other team members. There is always an explanation, and I think it is important to have this conversation from time to time.
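
One way to surface such mismatches is to relate each story's points to the code churn of the commits that reference it. A sketch, assuming Jira-style issue keys appear in commit messages and using hypothetical story estimates:

    import re
    import subprocess
    from collections import Counter

    ISSUE_KEY = re.compile(r"[A-Z]+-\d+")  # assumed key format in messages

    def churn_per_story(since="4 weeks ago"):
        out = subprocess.run(
            ["git", "log", f"--since={since}", "--numstat",
             "--pretty=format:MSG:%s"],
            capture_output=True, text=True, check=True,
        ).stdout
        churn, keys = Counter(), []
        for line in out.splitlines():
            if line.startswith("MSG:"):
                keys = ISSUE_KEY.findall(line)
            else:
                parts = line.split("\t")
                if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
                    for key in keys:
                        churn[key] += int(parts[0]) + int(parts[1])
        return churn

    # Hypothetical estimates for two stories, both 5 points:
    points = {"TRF-1": 5, "TRF-2": 5}
    churn = churn_per_story()
    for key, sp in points.items():
        print(f"{key}: {sp} points, {churn.get(key, 0)} lines churned")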

The Full Stack Developer Syndrome or Knowledge is Power

Symptoms:

Diagnosis:

This syndrome is in the positive area; this type of behavior should be recognized and encouraged. If Bumblebee modified all types of files in the current sprint (SQL, UI, deployment, etc.) and committed changes to the majority of the components, it is very clear that he is a valuable person who must be encouraged and who can, at the same time, serve as a role model for the rest of the team.
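
Breadth of contribution can be approximated by the set of file extensions each author touched, again mined from Git history:

    import os
    import subprocess
    from collections import defaultdict

    def extensions_per_author(since="2 weeks ago"):
        out = subprocess.run(
            ["git", "log", f"--since={since}", "--name-only",
             "--pretty=format:AUTHOR:%an"],
            capture_output=True, text=True, check=True,
        ).stdout
        exts = defaultdict(set)
        author = None
        for line in out.splitlines():
            if line.startswith("AUTHOR:"):
                author = line[len("AUTHOR:"):]
            elif line.strip():
                # Collect the extension of every touched file, e.g. ".sql".
                ext = os.path.splitext(line)[1]
                if ext:
                    exts[author].add(ext)
        return exts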

The Quantum Syndrome or Monitoring Fragility

Symptoms:

Diagnosis:

Once the fragile application zones have been identified, concrete remediation actions can be taken. Those classes can be marked as high priority during code review, or decisions can be made to increase unit test coverage or functional coverage (for the business areas that use those classes). The decision should focus on maximizing the benefits for the project and the team.
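
As for identifying those zones in the first place, one rough proxy is the set of files most often touched by bug-fix commits; a heuristic sketch (the "fix" keyword is an assumption, matching bug-issue keys would work as well):

    import subprocess
    from collections import Counter

    def bugfix_hotspots(since="6 months ago", keyword="fix"):
        out = subprocess.run(
            ["git", "log", f"--since={since}", "--name-only",
             "--pretty=format:MSG:%s"],
            capture_output=True, text=True, check=True,
        ).stdout
        hotspots = Counter()
        fixing = False
        for line in out.splitlines():
            if line.startswith("MSG:"):
                fixing = keyword in line.lower()
            elif line.strip() and fixing:
                hotspots[line] += 1
        return hotspots

    for path, n in bugfix_hotspots().most_common(10):
        print(f"{path}: touched by {n} bug-fix commits")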

These are the 10 pathological syndromes I thought of, which are relatively easy to diagnose using the information already present in the tools we use. Maybe you are wondering how these aspects relate to Mastery, Autonomy and Purpose. I definitely do not recommend using these metrics for measuring people's individual performance; you can end up having endless discussions about the average number of unit tests per developer, individual development plans and other managerial matters. These metrics should serve leaders in steering the team in the right direction.

Using these objective indicators, team leaders can create a work environment where team members can work autonomously, one that offers them the opportunity to excel in the areas they master. Let us take the example of unit tests. We notice that 2 team members are not very comfortable writing unit tests, but 1 team member has mastered the practice. Instead of having not-very-pleasant discussions explaining to those 2 that they are not doing very well and how they should improve, we can ask the more knowledgeable member to create a practical training for the team. In this way, we recognize the person holding the training as an expert (Mastery) and increase the others' autonomy (Autonomy). Everyone wins.

For the Purpose part, things are a little more complicated. For outsourcing companies, it is very difficult to connect the team to the end goal; there are many levels in between. Even if I do not think they can reach the same level of connection to the end goal as a start-up, I think there are concrete actions that can be taken to reduce this gap. It is very important for the team to understand the project's vision and be connected to it. It is also very important for the team to understand what happens after the project goes live: they should get periodic updates about how the functionality is being used. This will boost their confidence, and the team will feel that their effort meant something.

Of course, in practice, things are not that easy, but this is a solid base you can build on, one that offers objective indicators you can use to steer the team in the right direction.

In the end it is all about the team you want to create.
