Drivers of Change in an Agile Environment


I am going through the material in Chapter 6 of the Agile Practice Guide, which focuses on organizational support of an agile project.

Here are the changes particularly associated with agile projects, and why they have to be considered in any change management approach.

  • Accelerated delivery–Agile is an approach that focuses on delivering project outputs to the customer early and often (the “incremental” part of agile) and feeding customer feedback back to the team on a regular basis (the “iterative” part of agile).   On the delivering organization’s side, it is important to discover and deliver the features of the product to the customer.   But as the pace of delivery increases, it becomes equally important on the customer’s side to be able to accept each delivery and validate that it aligns with the project objectives, or to point out as clearly as possible where it doesn’t.
  • Learning curve–when an organization is just beginning to use agile approaches, there is a high degree of change as it learns to adopt an agile culture.  Agile will also require more frequent hand-offs between teams, departments, and even vendors.   Change management techniques can help address the hurdles of transitioning to agile approaches.

So it is not just a learning curve to learn the culture of agile, but an acceleration of already existing communications.

The problem in many organizations is that agile is seen as something that replaces traditional project management.   A better way to see it is that agile transcends and includes a lot of the traditional structures in the organization, and can end up transforming them.   An example is the “lessons learned” process, which in traditional project management was typically done at the end of a project (at least according to the 5th Edition of the PMBOK Guide).   When agile came along, this lessons learned process was still done, not at the end of the project, but rather at the end of each iteration.   Any lessons learned were applied directly to the project at hand, and not to some hypothetical project in the future that may or may not take place.   This accelerated improvement not only made the project better, but the “best practices” that developed during the project could be used by the organization as a whole or even by other project teams.

This is an example of how agile did not replace a structure from traditional project management, but instead transcended and included it (by having it incorporated into each iteration).   It proved so useful that in the 6th Edition of the PMBOK Guide, lessons learned are no longer part of the “Close Project or Phase” process in the Closing process group, but belong to a separate process, 4.4 Manage Project Knowledge, that is done during the Executing process group.

So with this more positive way of thinking about agile approaches, what are some of the characteristics of organizations that make supporting agile principles easier?   And conversely, what are some of the characteristics that may be roadblocks to achieving agile principles?   This will be the subject of the next post.

Change Management in an Agile Environment–A Moment of Zen


The subject of the 6th chapter of the Agile Practice Guide is the ways in which the organization at large can support an agile project.   The previous chapter covered the subject of implementing an agile project by creating an agile environment within the team.    The theme of this chapter comes from the statement on p. 71, the first page of the 6th chapter:  “Project agility is more effective and sustained as the organization adjusts to support it.”

The first topic in this chapter is change management in an agile environment.    In my opinion, the biggest difference between change management in an agile environment and a traditional project management environment is a psychological one:   the goal is that change should be seen as a positive good rather than as a necessary evil.

A graphic way to understand this comes from a book I am reading now called The Zen Leader by Ginny Whitelaw.   She talks about transforming your consciousness so that you can go from passively coping with problems or changes to actively using those same issues to transform your team and your organization.

A clue to how to do this transformation comes from her explanation of what you have probably seen demonstrated by a karate instructor:   focusing your energy on breaking a board.   She explains that the trick is that the person who wants to break the board does not focus on the board.   If you are trying to break the board, you focus your physical and mental energy on a point behind the board.   You are trying to reach that point, and the board is just something you go through in order to get there.   She says that seeing a problem and focusing on it is a way of coping with the situation.

If there is a problem that the project team encounters, the ways of passively coping with the situation are as follows, from the worst to the, well, least bad:

  • Denial (what problem?)
  • Anger/Rage (hey, what gives–this messes up our comfortable status quo!)
  • Resistance (do we HAVE to do this?  Is there any way we can mitigate the change, i.e., sabotage it?)
  • Rationalization (we don’t want to do this, but we’re being forced to by forces outside of our control)
  • Tolerance (it’s a necessary evil, but it has to be done if we want to go forward)

If you focus your team not on the problem itself, but on what the solution could possibly mean for the project, then you are flipping your mind from merely coping with the problem to transforming it.

Here are the stages of transforming as opposed to coping with a problem (from the good to the best):

  • Acceptance (well, we don’t have a choice about the problem, but we do have a choice about how we react to it)
  • Joy (if we were to make the change, it would make our customers happier, and benefit our organization)
  • Enthusiasm (let’s all roll up our sleeves and think of how to get there!)

Saying this is one thing; doing it is, of course, another.    One mental exercise she offers to help you shift your mind into positively accepting the above three stages is the following.

  • Relax–raise your shoulders up to your ears, the way people do when they are tense.   Drop that tension, and exhale.   Sense that your awareness is going out of your head, and into the center of your body.    Start breathing from the belly or lower abdomen–it will continue your relaxation.
  • Enter–Picture a circle, like the eye of a hurricane, that you can enter.   Although all around you are the swirling patterns of reaction to the problem (the “coping” described above), inside the circle you see the situation as a puzzle or a game.   This will allow you to enter a flow state where you start to engage your creativity.
  • Add value–From the creativity you engaged in the previous step, project this energy outwards to the other members of your team and lead them into your circle.   This energy will lead your team to help you add value by reaching towards a solution.

This book has been tremendously helpful to me in dealing with problems or changes on a project, and I thought it was important to begin the discussion of change management in Chapter 6 by showing how important a psychological shift is when going from traditional project management to agile.

In the next post, I will discuss the particular changes associated with agile approaches.


Agile Measurements: Lead Time, Cycle Time, Response Time


This is a continuation of the discussion of section 5.4 Measurements in Agile Projects of the Agile Practice Guide.

Measurement of progress on a project is different between traditional and agile projects.   Traditional projects use Earned Value Management, which takes as its unit of measurement the number of dollars (or whatever currency the organization is using) budgeted for the completion of a work package.

There are basic building blocks to the measurements in Earned Value Management.

  • Planned value (PV)–this is the authorized budget assigned to the work that is scheduled to be done.   (The key word there is “scheduled”.)
  • Earned value (EV)–this is the measure of work actually performed.   (The key words there are “actually performed”–it is a measure of how much scope was accomplished.)  This is expressed in terms of the budget authorized for that work.
  • Actual cost (AC)–this is the measure of the actual cost for work actually performed.  (The key word is obviously “cost”.)

To get the Cost Performance Index, you take EV/AC, and to get the Schedule Performance Index, you take EV/PV.

In agile, the unit of measurement is a story point, which is an estimation of how “big” the user story (i.e., a feature of the product being created) is.   It is a relative estimation, so the size of the smallest feature may be designated arbitrarily as 1, and the other features given a number of story points in comparison.

When I was initially studying agile, I thought that Earned Value Management wouldn’t be used, but an analogy to it can be made in an agile environment.   For example, an analogue of the Schedule Performance Index can be used.  SPI in a traditional project environment would equal

SPI = EV/PV = (value of work actually performed)/ (value of work planned)

Since PV and EV are both in units of dollars, you get a simple ratio as a result.   An SPI of 0.80 tells you that your team only got 80% of the planned work done.   Analogously in an agile environment, if you take SPI now to mean

SPI = (number of story points actually completed)/(number of story points planned)

then you can compute the SPI for a given iteration.   If you planned on completing 20 points in an iteration, and the team actually only completed 15, then the SPI is 15/20 or 0.75.
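
A minimal sketch of this calculation in code (the function name is mine; the numbers are the ones from the example above):

```python
# Hypothetical sketch of an agile analogue of the Schedule Performance Index.
def agile_spi(points_completed: int, points_planned: int) -> float:
    """Ratio of story points actually completed to story points planned."""
    return points_completed / points_planned

# The example from the text: 15 of 20 planned points completed.
print(agile_spi(15, 20))  # 0.75
```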

However, one of the main differences between EVM in a traditional vs. agile environment is the following.   In traditional EVM, earned value or EV refers to the work completed by the team.   In an agile environment, however, EV refers to work that is not only completed by the team, but shown to the customer and validated as conforming to their understanding of the requirements.   If it does not conform, then the item or feature has to be reworked.   It is not just work that is complete, but work that is also correct, that counts in agile EVM.

Now there are two ways of setting up iterations in agile.   One is to take a fixed amount of time (usually referred to as a timebox) and then measure your progress every two weeks, or however long the iteration is.   The team following this approach is referred to as an iteration-based agile team.

There is another way of marking progress, and that is having the iteration not be a specific length, but rather the length of time it takes to do the next feature in the product feature backlog.   The team following this approach is referred to as a flow-based agile team.

The measurements used with flow-based agile teams are listed below in relation to a Kanban board.  The “ready” column on a Kanban board, usually the left-most column, is the list of features from the product feature backlog that are ready to be worked on.  When work on an item begins, it is moved from the “ready” column to the development column on its right.

  • response time (the time that an item waits in the “ready” column until the work starts)
  • cycle time (the time that it takes to process an item once the work starts)
  • lead time (the total amount of time it takes to deliver an item, from the time it is added to the board in the “ready” column to the moment it is completed)

As you can see from the above definitions, the lead time is equal to the response time plus the cycle time.   The cycle time measures the time an item spends as work in progress.
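
The relationship between the three metrics can be sketched for a single Kanban item (the timestamps below are invented for illustration):

```python
from datetime import datetime

# One item's history on the board (hypothetical timestamps).
added_to_ready = datetime(2024, 3, 1, 9, 0)   # placed in the "ready" column
work_started   = datetime(2024, 3, 3, 9, 0)   # pulled into development
work_finished  = datetime(2024, 3, 8, 9, 0)   # completed

response_time = work_started - added_to_ready   # waiting in "ready"
cycle_time    = work_finished - work_started    # time as work in progress
lead_time     = work_finished - added_to_ready  # total time on the board

# Lead time is the sum of response time and cycle time.
assert lead_time == response_time + cycle_time
print(response_time.days, cycle_time.days, lead_time.days)  # 2 5 7
```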

According to the Agile Practice Guide, the cycle time can be used by a flow-based agile team in order to see bottlenecks or delays, whether they are inside the team or external (based on interactions with the customer or sponsor, or perhaps caused by delay of delivery of resources from a vendor).  The next post will talk about a cumulative flow diagram, a way of representing the measurements listed above, which can give clues as to the source of those delays.


Agile Measurements: Burndown and Burnup Charts


This is a continuation of a discussion based on Chapter 5 (Implementing Agile) from the Agile Practice Guide, a publication jointly produced by the Project Management Institute and the Agile Alliance.

In the previous posts, I compared measurement of a project’s progress in a traditional vs. an agile environment.    A traditional measurement system such as Earned Value Management measures work completed; an agile measurement system measures work completed and delivered to a customer.    So it includes what in a traditional project management scheme consists of the following processes:

  • Process 8.3–Control Quality (internally verifying that the work conforms to the quality standards for the project)
  • Process 5.5–Validate Scope (externally validating with the customer that the work conforms to the requirements for the project)

In other words, you measure whether the work is complete in a traditional measurement system; you measure whether the work is correct in an agile measurement system.

Two of the common systems for measuring progress on a project in an agile environment are a burndown chart and a burnup chart.   The raw data for both types of charts is the same: the number of story points.   The number of story points is an estimate of the effort required to complete a particular user story (feature) from the feature backlog.   The difference between the two types of charts is simple:

  • a burndown chart starts with the total amount of story points expected to be completed in the course of a project, and as each user story is completed, the chart shows the remaining story points
  • a burnup chart starts at zero, and as each user story is completed, the chart shows the completed story points

At the end of the project, a burndown chart will show zero remaining story points, and the burnup chart will show the number of story points completed for the whole project.  For examples of these graphs, see Figure 5.1 on p. 62 and Figure 5.2 on p. 63 of the Agile Practice Guide.
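
The two series can be derived from the same raw data; here is a short sketch with invented numbers:

```python
# Hypothetical project: 100 total story points, five iterations.
total_points = 100
completed_per_iteration = [20, 25, 15, 25, 15]

burnup, burndown, done = [], [], 0
for points in completed_per_iteration:
    done += points
    burnup.append(done)                   # completed so far (climbs to the total)
    burndown.append(total_points - done)  # remaining (falls to zero)

print(burnup)    # [20, 45, 60, 85, 100]
print(burndown)  # [80, 55, 40, 15, 0]
```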

In a traditional project environment, the baseline may change if there are any changes to the scope during the course of the project.   Similarly, in an agile environment, if the scope changes during the iteration, the burnup or burndown charts will also change to accommodate those changes.

In agile, the velocity is the sum of the story point sizes for the features actually completed in the current iteration.    It is useful because, if the project continues at the current velocity, you can take the number of remaining story points and divide by the velocity (the number of story points done per iteration) to get how many iterations it will take to complete the project.

It may take four to eight iterations to achieve a stable velocity.   After that, if the velocity changes, it will have a direct effect on the length of time it may take to complete the project, so it is important to measure velocity and compare it to the velocity of previous iterations so you can see whether there are any changes (up or down).
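
A sketch of this forecast (the velocities are invented for illustration):

```python
# Hypothetical recent iterations: story points completed in each.
recent_velocities = [48, 52, 50, 50]
remaining_points = 500

velocity = sum(recent_velocities) / len(recent_velocities)  # 50.0 points/iteration
iterations_left = remaining_points / velocity

print(iterations_left)  # 10.0 iterations (20 weeks at two weeks per iteration)
```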

Now, the description above applies to iteration-based agile measurement.   Flow-based agile measurement, by contrast, does not use the regular cadence of an iteration; it is based instead on the completion of a particular user story or feature.   The next post will go into further detail on the flow-based agile measurements of cycle time and lead time.


Measurements in Agile vs. Predictive Projects (3)–Agile Measurement and Estimation


This post will conclude my comparison of measurement of progress on projects in a traditional environment (using earned value analysis–contained in the first post) with measurement on projects in an agile environment (covered in the second post and this one).

Summing up the last two posts, as opposed to a predictive measurement system like earned value analysis that focuses on the completeness of the work, an agile measurement system will have the following characteristics:

  • It will focus on customer value added.
  • It will focus on quality (the correctness of the work), so that a feature is considered finished not when the team has done the work, but after the team has tested it and the customer approves.

Here are some additional features of agile measurement of progress on a project.

  • The chunks of work being measured are made smaller, so that the team is more likely to deliver on them.
  • Product development involves a learning curve as well as delivery of value to a customer.   By keeping the work increments small, this allows for more feedback from a customer, which loops back to the team and causes them to improve on the next work increment.
  • Rather than trying for a heroic pace to get done as quickly as possible, a steady pace is preferred that allows enough time to get the work done correctly.

The “steady pace” referred to in the last point above is important for the purpose of estimation.   A sponsor who wants to know when a project will be completed will be best served by a steady pace of work, because this will allow a simple calculation of

the number of remaining story points divided by the average number of story points done per iteration.   With 500 story points remaining and 50 done on average per iteration, you can tell the sponsor with confidence that you will be able to get the project done in 10 iterations.   If each iteration is two weeks, let’s say, then you can see it will be done in 20 weeks.

This post reviewed the characteristics of agile measurement and estimation.   The next posts will go into the details of this type of measurement based on the material on pages 62-70 of the Agile Practice Guide.

Measurements in Agile vs. Predictive Projects (2)–Agile Measurement


I’m going through the Agile Practice Guide in order to understand its contents by adding context to the material in the guide.

Section 5.4 of chapter 5 deals with measurements in agile projects.   To give context to the discussion on p. 60 of the Guide, I did the previous post comparing agile measurement to measurement in a traditional project management setting.

There the measurements are typically done with a tool called earned value analysis.   This takes the following three building blocks of measurement …

  • Planned value (PV)–this is the authorized budget assigned to the work that is scheduled to be done.   (The key word there is “scheduled”.)
  • Earned value (EV)–this is the measure of work actually performed.   (The key words there are “actually performed”–it is a measure of how much scope was accomplished.)  This is expressed in terms of the budget authorized for that work.
  • Actual cost (AC)–this is the measure of the actual cost for work actually performed.  (The key word is obviously “cost”.)

… and combines them into formulas that give a snapshot of how the project is actually doing compared to the plan (both schedule and budget):   these are status measurements.  There are other formulas which predict whether the project will end up being on schedule and within the budget if the current trends continue.    These are predictive measurements.

The problem with earned value analysis is that it is precise, but can be inaccurate if the assumptions underlying the data turn out not to be true.   For example, let’s say you have a work package that was supposed to be completed by a certain date, and it actually is completed by that time and for the budgeted amount of money.    The schedule performance index and cost performance index for that work package should turn out to be 1.0.

But this focuses on the completeness of the work, which is the domain of scope; what about the correctness of the work, which is the domain of quality?   If internal testing is done on the unit of work done, and followed up by integration testing of how the system works with the newly-installed unit, it might turn out that it doesn’t work as expected according to the definition of “done” (the acceptance criteria).   Then the work has to be re-done, and the testing as well.   That’s one problem.

Another problem is when you hand the finished module or unit over to the customer for their inspection.   They say, “that’s not what we ordered.”   The acceptance criteria were vague enough that there is a difference between what the customer intended and what the team thought it was doing to meet that requirement.

So, as opposed to a predictive measurement system like earned value analysis that focuses on the completeness of the work, an agile measurement system will have the following characteristics:

  • It will focus on customer value added.
  • It will focus on quality (the correctness of the work), so that a feature is considered finished not when the team has done the work, but after the team has tested it and the customer approves.

There are additional features of agile measurement discussed on p. 61; these will be the subject of the next post.

Measurements in Agile vs. Predictive Projects (1)–Earned Value Analysis


I am going through the Agile Practice Guide chapter by chapter, section by section, page by page, in order to understand its contents.   I am blogging about what I’ve read in order to add value for those who may be coming from a traditional project management background.

So, for example, I am starting section 5.4 on p. 60 which deals with measurements in agile projects.   How can you tell whether your project is on track for success or not?

Before I discuss the introduction to this section on p. 60, however, I want in this post to discuss how measurement is done on a traditional project, which usually has a predictive life cycle.   In this way, the contrast to how measurement is done on an agile project can be better appreciated in context.

Let’s review what the characteristics of a predictive life cycle are as compared to the other three types of project life cycle (iterative, incremental, and agile):

  • Requirements–these are fixed at the beginning of the project and change is managed as if it were a necessary evil (the other three life cycles have dynamic requirements and change is managed as if it were a positive good)
  • Activities–these are performed once for the entire project (in incremental, they are performed once for a given increment, and in iterative and agile they are repeated until correct)
  • Delivery–there is a single delivery of the final product (this is true in iterative as well, but in incremental and agile there are frequent smaller deliveries during the course of the project)
  • Main goal–to manage cost (in iterative the main goal is the correctness of the solution, in incremental it is speed, and in agile, it is customer value)

The main characteristic to focus on in our discussion of measurement is the first one, that of fixed requirements.   This allows you to break down the work with a tool called the Work Breakdown Structure.   This is then used to create both the schedule and the budget.   These in turn are used as inputs to a data analysis technique called Earned Value Analysis or EVA.   Remember, the goal of measurement is to

  1. compare the actual work done vs. what was supposed to be done and see if there is a variance (I call this the “snapshot” function of measurement)
  2. predict what resources will be required by the end of the project to complete it, given the trends of those variances discovered (I call this the “crystal ball” function of measurement).

Okay, let’s go back to how EVA works.   The three building blocks of formulas dealing with variance are the following, which are based on the triple constraints of schedule, scope and cost, respectively.

  • Planned value (PV)–this is the authorized budget assigned to the work that is scheduled to be done.   (The key word there is “scheduled”.)
  • Earned value (EV)–this is the measure of work actually performed.   (The key words there are “actually performed”–it is a a measure of how much scope was accomplished.)  This is expressed in terms of the budget authorized for that work.
  • Actual cost (AC)–this is the measure of the actual cost for work actually performed.  (The key word is obviously “cost”.)

Let’s take a simple example to show how these are used.   Let’s say your project is to have a room painted, assuming that all of the preparation work has already been completed.  Each of the four walls of the room takes one day and costs $1,000 to paint, so the plan is one wall per day.

Scenario one:   At the end of day 2, you only have one wall painted.   Your gut feeling is that the project is behind.   What does EVA tell you?

The schedule performance index (SPI) is EV/PV.   What is EV?   It is the end of day 2, and you actually did only one wall, which costs $1,000 to do.  So your EV is $1,000.   What is the planned value?   Although you did one wall, you were supposed to (according to the schedule) do two walls, which would cost you $2,000 to do.  So your PV is $2,000.   Plugging these values in the formula, you get 0.5 as the result.   If your result was 1.0, you would be on schedule.   If your result is greater than 1.0, you are ahead of schedule, and if your result is less than 1.0, you are behind schedule.

The cost performance index (CPI) is EV/AC.   Let’s say that it is the end of day 2, and you were able to complete two walls, but on day 2 you realized you needed to add an extra painter (you planned for two painters costing $500 each for labor and materials, but had to add a third painter in order to get the work done on time).   The SPI is going to be 1.0, because you are on schedule.   But what about the CPI?   The EV is $2,000, because two walls were done, and the budget authorized for doing two walls is $2,000.   The AC, however, doesn’t care about the budget authorized for the work; it cares about what the actual costs of the work turned out to be.   This is $1,000 for day 1, but $1,500 for day 2 because of the extra painter, so the cumulative AC is $2,500.   Now using the CPI formula you get $2,000/$2,500 or 0.80.   Anything less than 1.0 with either the SPI or CPI is not good.   In this case, it means you are over budget.
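
The two painting scenarios can be worked through in code (the helper function names are mine; the dollar figures are from the example above):

```python
# Earned value formulas applied to the room-painting example.
def spi(ev: float, pv: float) -> float:
    """Schedule Performance Index: earned value over planned value."""
    return ev / pv

def cpi(ev: float, ac: float) -> float:
    """Cost Performance Index: earned value over actual cost."""
    return ev / ac

# Scenario one: end of day 2, one wall done instead of the planned two.
print(spi(ev=1000, pv=2000))  # 0.5 -- behind schedule

# Scenario two: two walls done on time, but a third painter on day 2
# raised actual cost to $1,000 + $1,500 = $2,500.
print(spi(ev=2000, pv=2000))  # 1.0 -- on schedule
print(cpi(ev=2000, ac=2500))  # 0.8 -- over budget
```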

So you can see how this works.   Because the schedule and budget are, at any one time, fixed, you can use EVA to create a measurement which is precise.   But is it accurate?  Let’s review what these two words imply.   A measurement is precise if repeated measurements are close to one another (using smaller units).   A measurement is accurate if the measurements are close to the actual value that is being measured.

If I am at a pub playing darts, my precision will decrease as the amount of beer I have increases.   The fine motor control needed to throw the dart exactly where I want is affected by the alcohol, and I end up throwing more wildly as time goes on.   Now the accuracy increases the more times I play in the pub, because my brain learns and causes me to get a bulls-eye more often.

The fact that EVA measurements are precise can fool you into thinking they are accurate.  If someone says they are 90% completed with their work, you can use this data to say that the SPI is 90%.   But what if their own self-assessment is wrong?  Or what if they don’t want to admit that they aren’t as far along as they should be?   If someone tells me that the completion of their work is “just over the horizon”, that might make me feel better–until I look up the word “horizon” in the dictionary and see that one of the definitions is “an imaginary line that gets farther away from you the closer you approach to it.”   Well, that’s not comforting at all, is it?

And it gets worse.   Let’s say that the person IS actually telling the truth and that the work is 90% done.   They turn in the work and then the testing is done, first of the module itself and then of the entire system with the module added (an integration test).   It doesn’t work as it is supposed to, and it has to be reworked.

Okay now it is re-tested and it works fine.   It is delivered to the customer and the customer says “that’s not what I ordered.”   The user story was not clear or objective enough, and what the customer thought was being ordered isn’t what the team thought.  This is why you don’t use subjective acceptance criteria like “I want the product to look nice” because there’s a LOT of room for disagreement about what “looks nice” means.

So whether or not from your standpoint you are within the budget and/or schedule, in the end it doesn’t matter if the work that was done does not add value from the customer’s standpoint.

So, as opposed to earned value analysis, which is a more precise measurement tool but may have problems with accuracy, measurement in agile projects uses tools that may not seem precise at first glance from the standpoint of traditional project management, because they deal with units (user story points) that seem more subjective than dollars and cents.   But those tools are more accurate in that they are focused on actual customer value.

With that background of measurement in traditional project management in mind, let’s turn in the next post to characteristics of measurements in agile projects.

Troubleshooting Agile Project Challenges (5)–Testing Implementation


On pages 58 and 59 of the Agile Practice Guide, there are twenty-one challenges or “pain points” described together with the suggested solution(s) to the problem.   However, they are listed in a random, laundry-list fashion without much rhyme or reason to the order.  So what I have done is reviewed all the suggested solutions and grouped those challenges that require the same type of solution.   These five types of solution are:

  1. Production of agile charter
  2. Product backlog/user story definition
  3. Kanban boards
  4. Focus on team roles/responsibilities
  5. Testing implementation

I have already covered the first three solutions, production of an agile charter, production and maintenance of a product backlog, and the use of kanban boards, in the past three posts.

The production of the agile charter was essential for challenges dealing with the context of the project (what is the product vision?, what is the organization’s mission for doing the project?, etc.).

The production, refinement, and maintenance of the product backlog helped with challenges dealing with the requirements of the product and how these are reflected in the user stories that make up the product backlog.   It also showed how to adjust the user stories that comprise that backlog in order that the team not bite off more than it can chew during any one iteration.

And the challenges dealing with the process of the project, i.e., doing the work and then reviewing it during retrospectives, were met by the use of kanban boards to visualize those processes, which then in turn makes it easier to pinpoint bottlenecks or barriers so they can be focused on and removed.

The fourth series of challenges dealt with problems that could be solved by clarifying the roles of individual team members (having them work on cross-functional vs. siloed teams, for example), as well as the critical roles of product owner and servant leader.

Finally I’m covering the last two challenges (out of the total 21 challenges presented on pages 58 and 59).   These have the common solution of paying attention to the implementation of testing after the design of a particular feature is complete.    Previous solutions have focused on the completeness of the work, which in traditional project management terms would be the Scope Management domain; this solution set is focused on the correctness of the work, which in traditional project management terms would be the Quality Management domain.   Specifically, it deals with quality control, the correctness of the product itself (does it meet the expectations of the customer?).   The other aspect of quality, quality assurance, deals with the correctness of the process, and this is generally taken care of during the retrospectives at the end of each iteration (the use of kanban boards to visualize this process is helpful in this regard).

Now, here are the two remaining challenges or “pain points” that the Agile Practice Guide discusses in its chart.

  1. Defects–focus on technical processes using techniques such as:
    • Working in pairs or groups
    • Collective product ownership
    • Pervasive testing (including test-driven and automated testing approaches)
    • A robust definition of “done”, i.e., acceptance criteria
  2. Technical debt (degraded code quality)–like the response to the challenge listed above, focus on technical processes using techniques such as:
    • Code refactoring–the process of clarifying and simplifying the design of existing code, without changing its behavior.   This is needed because agile teams often maintain and extend their code from iteration to iteration, and without continuous refactoring, this is difficult to do without adding unnecessary complexity to the code (which in itself can increase risk of defects).
    • Agile modeling–a methodology for modeling and documenting software systems based on best practice.
    • Pervasive testing–involves the cross-functional participation of all quality and testing stakeholders, both developers and testers.
    • Automated code quality analysis–checks source code for compliance with a predefined set of rules or best practices.
    • Definition of done–acceptance criteria that a software product must satisfy in order to be accepted by the user or customer.   This prevents features that don’t meet the definition from being delivered to the customer or user.
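To make the refactoring and pervasive-testing ideas above concrete, here is a minimal sketch in Python.   The discount function and its names are my own hypothetical example, not taken from the guide: the point is a behavior-preserving cleanup guarded by tests that act as a simple definition of “done” for the refactoring.

```python
# Before: duplicated, hard-to-read discount logic accumulated over iterations.
def total_before(prices, member):
    total = 0
    for p in prices:
        if member:
            total = total + p - p * 0.1  # 10% member discount, inlined
        else:
            total = total + p
    return total

# After: the same behavior, expressed once with a named constant.
MEMBER_DISCOUNT = 0.1

def total_after(prices, member):
    rate = 1 - MEMBER_DISCOUNT if member else 1
    return sum(p * rate for p in prices)

# Pervasive tests guard the refactoring: behavior must not change.
assert abs(total_before([100, 50], True) - total_after([100, 50], True)) < 1e-9
assert total_before([100, 50], False) == total_after([100, 50], False)
```

The tests are what make continuous refactoring safe: as long as they pass, the team knows the simplification did not change what the code does, only how clearly it says it.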

This concludes this series of posts on the various challenges that may be encountered on an agile project and the various solutions that can be referred to as troubleshooting possibilities for those challenges.

The next section of this chapter, section 5.4, deals with measurements in agile projects.  This replaces the earned value analysis used in traditional project management, and will be the subject of the next series of posts …

Troubleshooting Agile Project Challenges (4)–Clarifying Team Roles and Responsibilities


On pages 58 and 59 of the Agile Practice Guide, there are twenty-one challenges or “pain points” described together with the suggested solution(s) to the problem.   However, they are listed in a random, laundry-list fashion without much rhyme or reason to the order.  So what I have done is reviewed all the suggested solutions and grouped those challenges that require the same type of solution.   These five types of solution are:

  1. Production of agile charter
  2. Product backlog/user story definition
  3. Kanban boards
  4. Focus on team roles/responsibilities
  5. Testing implementation

I have already covered the first three solutions, production of an agile charter, production and maintenance of a product backlog, and the use of kanban boards, in the past three posts.

The production of the agile charter was essential for challenges dealing with the context of the project (what is the product vision?, what is the organization’s mission for doing the project?, etc.).

The production, refinement, and maintenance of the product backlog helped with challenges dealing with the requirements of the product and how these are reflected in the user stories that make up the product backlog.   It also showed how to adjust the user stories that comprise that backlog in order that the team not bite off more than it can chew during any one iteration.

And the challenges dealing with the process of the project, i.e., doing the work and then reviewing it during retrospectives, were met by the use of kanban boards to visualize those processes, which then in turn makes it easier to pinpoint bottlenecks or barriers so they can be focused on and removed.

Today I’m covering the fourth series of challenges, those that can be met by using a common solution, namely, focusing on and clarifying the team’s roles and responsibilities, especially those of the product owner and the servant leader (whatever title that person goes by, such as scrum master, project manager, etc.).

Here are the five challenges that can be met by focusing on the team’s roles and responsibilities.

  1. Team struggles with obstacles–The servant leader should be the one focusing on and clearing those obstacles.   The servant leader should create options for the team to choose among.   If the servant leader is unable for whatever reason (such as lack of experience) to remove the obstacles, consider escalating the problem by consulting with an agile coach.
  2. False starts, wasted efforts–This is usually caused by the team’s insufficient understanding of exactly what the project mission is (what exactly is it we are trying to produce).   The product owner needs to be an integral part of these team discussions so that he or she can communicate with the customer and clarify exactly what the requirements are.
  3. “Hurry up and wait”, i.e., an uneven flow of work–This is where the clarification of the roles and responsibilities of the individual team members is important.  Plan work up to the team’s work-in-progress (WIP) capacity and no more; even consider reducing that WIP capacity if necessary.   Team members should stop multitasking (i.e., working on other projects) and be dedicated to one team.   Have the team members consider working in pairs or even in groups to even out the capabilities across the entire team, and to increase communication between team members.
  4. Impossible stakeholder demands–The servant leader needs to work together with the product owner and stakeholders to clarify the obstacles in meeting the current demands.
  5. Siloed teams, instead of cross-functional teams–If you are getting team members from managers in a specific department who are comfortable working with each other but not necessarily with those from other departments, then the servant leader needs to educate the managers on why cross-functional teams are essential to the success of an agile project.    Ask the team members on the project to work together in pairs or even in groups with team members from other departments in order to create cross-functional teams.
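To illustrate the WIP idea in challenge 3, here is a hedged sketch in Python (the story names, sizes, and capacity number are my own hypothetical values, not from the guide) of planning an iteration up to, but not beyond, the team’s WIP capacity:

```python
def plan_iteration(backlog, wip_capacity):
    """Take stories (name, size) from the top of the prioritized backlog
    until the team's work-in-progress capacity would be exceeded."""
    planned, load = [], 0
    for name, size in backlog:
        if load + size > wip_capacity:
            break  # stop here rather than overcommit the team
        planned.append(name)
        load += size
    return planned

backlog = [("login", 3), ("search", 5), ("reports", 8), ("export", 2)]
print(plan_iteration(backlog, wip_capacity=10))  # → ['login', 'search']
```

Note that the sketch stops at the first story that would exceed capacity rather than skipping ahead to smaller ones; that respects the backlog’s priority order, which is the product owner’s call, not the planner’s.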

You can see that this set of solutions involves both the strengthening of the roles and responsibilities of the leadership on an agile team (the servant leader and the product owner), and the clarification of the roles and responsibilities of team members (they need to stop multitasking and working alone and move towards inclusion of other team members from other departments on their team).

This will help the team create the work product, which addresses the scope of the project, i.e., the completeness of the work.   What about the correctness of that completed work, which is where quality control comes in?   For issues in this area, see the next and final post in this series, on the clarification of testing.

Troubleshooting Agile Project Challenges (3)–Using a Kanban Board


I am going over the Agile Practice Guide, a publication put out by the Agile Alliance in conjunction with the Project Management Institute.    I am currently reviewing chapter 5 on the implementation of agile projects, and am now on section 5.3, Troubleshooting Agile Project Challenges.

On pages 58 and 59 of the Agile Practice Guide, there are twenty-one challenges or “pain points” described together with the suggested solution(s) to the problem.   However, they are listed in a random, laundry-list fashion without much rhyme or reason to the order.  So what I have done is reviewed all the suggested solutions and grouped those challenges that require the same type of solution.   These five types of solution are:

  1. Production of agile charter
  2. Product backlog/user story definition
  3. Kanban boards
  4. Focus on team roles/responsibilities
  5. Testing implementation

I have covered the first two solutions, production of an agile charter and production and maintenance of a product backlog, in the past two posts.   The post about the product backlog dealt with challenges in creating the product; in this post about kanban boards, we will discuss those challenges that exist within the process itself.   It turns out that kanban boards are especially useful in dealing with this type of process challenge.

Here are the three (out of a total of 21) challenges that can be met by using a kanban board.

  1. Unclear work assignments or work progress–Use kanban boards to visualize the flow of work.   Consider using the kanban board during daily stand-ups, where team members walk the board and see where the work they are doing stands.   This will clarify both their assignments and the next step they need to take to move the item they are working on to the next column.
  2. Unexpected or unforeseen delays–Ask the team to check the kanban boards more often.   Have them see the flow of work and WIP (work in progress) limits to understand how these impact the demands on the team and on the product itself.  Add a track to the kanban board for listing impediments and monitor impediment removal on a regular basis.
  3. Slow or no improvement in the teamwork process–Capture no more than three items to be improved at each retrospective.   Have the servant leader use the kanban board to track these three items, and then make sure that the improvements are integrated into the overall process.

The kanban board captures a snapshot of the project’s dynamic processes at a single point in time, so that the team members can clarify the nature of the work assignments, the impediments that prevent those work assignments from going forward, and any improvements in the process of getting them completed.
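As a rough illustration of that snapshot idea, here is a minimal kanban board model in Python.   The column names, cards, and WIP limits are my own illustrative assumptions, not prescribed by the guide; the point is that a full column surfaces a bottleneck instead of letting work pile up invisibly.

```python
class KanbanBoard:
    """Minimal sketch of a kanban board: named columns in flow order,
    each with an optional work-in-progress (WIP) limit."""

    def __init__(self, columns):
        # columns: list of (name, wip_limit-or-None) pairs, in flow order
        self.columns = {name: [] for name, _ in columns}
        self.limits = dict(columns)

    def _check_limit(self, column):
        limit = self.limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            # A full column is a visible bottleneck -- stop and clear it.
            raise ValueError(f"WIP limit reached in '{column}'")

    def add(self, column, card):
        self._check_limit(column)
        self.columns[column].append(card)

    def move(self, card, src, dst):
        self._check_limit(dst)  # check before removing so no card is lost
        self.columns[src].remove(card)
        self.columns[dst].append(card)

    def snapshot(self):
        # A point-in-time picture of where every work item stands,
        # e.g., for walking the board at the daily stand-up.
        return {name: list(cards) for name, cards in self.columns.items()}

board = KanbanBoard([("To Do", None), ("Doing", 2), ("Done", None)])
for story in ["story A", "story B", "story C"]:
    board.add("To Do", story)
board.move("story A", "To Do", "Doing")
print(board.snapshot())
# → {'To Do': ['story B', 'story C'], 'Doing': ['story A'], 'Done': []}
```

The design choice worth noting is that `move` refuses to overfill a column rather than quietly accepting the card; on a physical board the same effect comes from there simply being no room left in the column.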

The process of setting up a kanban board will be discussed in a later post.

Now let’s go on to the next post, covering the last five challenges, which can be resolved by clarifying team roles and responsibilities, including those of the product owner and servant leader.