Measurements in Agile vs. Predictive Projects (2)–Agile Measurement


I’m going through the Agile Practice Guide in order to understand its contents by adding context to the material in the guide.

Section 5.4 of chapter 5 deals with measurements in agile projects.   To give context to the discussion on p. 60 of the Guide, I did the previous post comparing agile measurement to measurement in a traditional project management setting.

There, the measurements are typically done with a tool called earned value analysis.   This takes the following three building blocks of measurement …

  • Planned value (PV)–this is the authorized budget assigned to the work that is scheduled to be done.   (The key word there is “scheduled”.)
  • Earned value (EV)–this is the measure of work actually performed.   (The key words there are “actually performed”–it is a measure of how much scope was accomplished.)  This is expressed in terms of the budget authorized for that work.
  • Actual cost (AC)–this is the measure of the actual cost for work actually performed.  (The key word is obviously “cost”.)

… and combines them into formulas of two kinds.   Some give a snapshot of how the project is actually doing compared to the plan (both schedule and budget):   these are status measurements.   Other formulas predict whether the project will end up being on schedule and within the budget if the current trends continue.   These are predictive measurements.
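These formulas can be made concrete with a short sketch in Python.   (The figures are my own illustration, not from the Guide; SPI and CPI are the two status formulas, and the estimate at completion, EAC, is one common predictive formula.)

```python
# Hypothetical earned value figures for a project with a total budget
# (budget at completion, BAC) of $10,000, partway through the work.
pv = 4000.0   # planned value: budget for the work scheduled so far
ev = 3000.0   # earned value: budget for the work actually performed
ac = 3500.0   # actual cost of that work
bac = 10000.0

# Status ("snapshot") measurements: how are we doing right now?
spi = ev / pv   # schedule performance index; < 1.0 means behind schedule
cpi = ev / ac   # cost performance index; < 1.0 means over budget

# Predictive ("crystal ball") measurement: estimate at completion,
# assuming the current cost trend continues.
eac = bac / cpi

print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}, EAC = ${eac:,.0f}")
```

Here an SPI of 0.75 says the project is behind schedule, and a CPI below 1.0 inflates the forecasted final cost above the original $10,000 budget.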

The problem with earned value analysis is that it is precise, but can be inaccurate if the assumptions underlying the data turn out not to be true.   For example, let’s say you have a work package that was supposed to be completed by a certain date, and it actually is completed by that time and for the budgeted amount of money.    The schedule performance index and cost performance index for that work package should turn out to be 1.0.

But this focuses on the completeness of the work, which is the domain of scope; what about the correctness of the work, which is the domain of quality?   If internal testing is done on the completed unit of work, followed up by integration testing of how the system works with the newly-installed unit, it might turn out that it doesn’t work as expected according to the definition of “done” (the acceptance criteria).   Then the work has to be re-done, and the testing as well.   That’s one problem.

Another problem is where you hand the finished module or unit over to the customer for their inspection.   They say “that’s not what we ordered.”   The acceptance criteria are vague enough that there is a difference between what the customer intended and what the team thought it was doing to meet that requirement.

So, as opposed to a predictive measurement system like earned value analysis that focuses on the completeness of the work, an agile measurement system will have the following characteristics:

  • It will focus on customer value added.
  • It will focus on quality (the correctness of the work), so that a feature is considered finished not when the team has done the work, but after the team has tested it and the customer has approved it.
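As a minimal sketch of this idea (my own illustration, not from the Guide), an iteration’s measurement would count only the value of stories that have been built, tested, and accepted by the customer:

```python
# Hypothetical iteration: only stories that are built, tested, AND
# customer-accepted count toward value delivered.
stories = [
    {"name": "login page",    "value": 5, "built": True, "tested": True,  "accepted": True},
    {"name": "report export", "value": 8, "built": True, "tested": True,  "accepted": False},
    {"name": "audit trail",   "value": 3, "built": True, "tested": False, "accepted": False},
]

def is_done(story):
    """A story is 'done' only when built, tested, and customer-approved."""
    return story["built"] and story["tested"] and story["accepted"]

value_delivered = sum(s["value"] for s in stories if is_done(s))
print(value_delivered)  # 5: finished-but-unaccepted work adds no customer value
```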

There are additional features of agile measurement discussed on p. 61; these will be the subject of the next post.

Measurements in Agile vs. Predictive Projects (1)–Earned Value Analysis


I am going through the Agile Practice Guide chapter by chapter, section by section, page by page, in order to understand its contents.   I am blogging about what I’ve read in order to add value for those who may be coming from a traditional project management background.

So, for example, I am starting section 5.4 on p. 60 which deals with measurements in agile projects.   How can you tell whether your project is on track for success or not?

Before I discuss the introduction to this section on p. 60, however, I want in this post to discuss how measurement is done on a traditional project, which usually has a predictive life cycle.   In this way, the contrast to how measurement is done on an agile project can be better appreciated in context.

Let’s review what the characteristics of a predictive life cycle are as compared to the other three types of project life cycle (iterative, incremental, and agile):

  • Requirements–these are fixed at the beginning of the project and change is managed as if it were a necessary evil (the other three life cycles have dynamic requirements and change is managed as if it were a positive good)
  • Activities–these are performed once for the entire project (in incremental, they are performed once for a given increment, and in iterative and agile they are repeated until correct)
  • Delivery–there is a single delivery of the final product (this is true in iterative as well, but in incremental and agile there are frequent smaller deliveries during the course of the project)
  • Main goal–to manage cost (in iterative the main goal is the correctness of the solution, in incremental it is speed, and in agile, it is customer value)

The main characteristic to focus on in our discussion of measurement is the first one, that of fixed requirements.   This allows you to break down the work with a tool called the Work Breakdown Structure.   This is then used to create both the schedule and the budget.   These in turn are used as inputs to a data analysis technique called Earned Value Analysis or EVA.   Remember, the goal of measurement is to

  1. compare the actual work done vs. what was supposed to be done and see if there is a variance (I call this the “snapshot” function of measurement)
  2. predict what resources will be required by the end of the project to complete it given the trends of those variances discovered (I call this the “crystal ball” function of measurement).

Okay, let’s go back to how EVA works.   The three building blocks of formulas dealing with variance are the following, which are based on the triple constraints of schedule, scope and cost, respectively.

  • Planned value (PV)–this is the authorized budget assigned to the work that is scheduled to be done.   (The key word there is “scheduled”.)
  • Earned value (EV)–this is the measure of work actually performed.   (The key words there are “actually performed”–it is a measure of how much scope was accomplished.)  This is expressed in terms of the budget authorized for that work.
  • Actual cost (AC)–this is the measure of the actual cost for work actually performed.  (The key word is obviously “cost”.)

Let’s take a simple example to show how these are used.   Let’s say your project is to have a room painted, assuming that all of the preparation work has already been completed.  It will cost you $1,000 to paint each of the four walls of the room, at a rate of one wall per day.

Scenario one:   At the end of day 2, you only have one wall painted.   Your gut feeling is that the project is behind.   What does EVA tell you?

The schedule performance index (SPI) is EV/PV.   What is EV?   It is the end of day 2, and you actually did only one wall, which costs $1,000 to do.  So your EV is $1,000.   What is the planned value?   Although you did one wall, you were supposed to (according to the schedule) do two walls, which would cost you $2,000 to do.  So your PV is $2,000.   Plugging these values in the formula, you get 0.5 as the result.   If your result was 1.0, you would be on schedule.   If your result is greater than 1.0, you are ahead of schedule, and if your result is less than 1.0, you are behind schedule.

The cost performance index (CPI) is EV/AC.   Let’s say that it is the end of day 2, and you were able to complete two walls, but on day 2 you realized you needed to add an extra painter (you planned for two painters costing $500 each for labor and materials, but had to add a third painter in order to get the work done on time).   The SPI is going to be 1.0, because you are on schedule.   But what about the CPI?   The EV is $2,000, because two walls were done and the budget authorized for doing two walls is $2,000.   The AC, however, doesn’t care about the budget authorized for the work; it cares about what the actual costs of the work turned out to be.   This is $1,000 for day 1, but $1,500 for day 2 because of the extra painter, so the cumulative AC is $2,500.   Now using the CPI formula you get $2,000/$2,500, or 0.80.   Anything less than 1.0 with either the SPI or CPI is not good; in this case, it means you are over budget.
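The two scenarios above can be verified in a few lines of Python, using the same figures:

```python
COST_PER_WALL = 1000.0  # budgeted cost per wall, one wall scheduled per day

# Scenario one: end of day 2, only one wall painted.
ev = 1 * COST_PER_WALL   # one wall actually done
pv = 2 * COST_PER_WALL   # two walls were scheduled by now
spi = ev / pv
print(f"Scenario 1: SPI = {spi:.2f}")  # 0.50 -> behind schedule

# Scenario two: end of day 2, two walls done, but an extra painter was
# needed on day 2 ($1,000 actual on day 1, $1,500 on day 2).
ev = 2 * COST_PER_WALL
ac = 1000.0 + 1500.0
cpi = ev / ac
print(f"Scenario 2: CPI = {cpi:.2f}")  # 0.80 -> over budget
```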

So you can see how this works.   Because the schedule and budget are, at any one time, fixed, you can use EVA to create a measurement which is precise.   But is it accurate?   Let’s review what these two words imply.   A measurement is precise if repeated measurements are close together (using smaller units).   A measurement is accurate if the measurements are close to the actual value that is being measured.

If I am at a pub playing darts, my precision will decrease as the amount of beer I have increases.   The fine motor control needed to throw the dart exactly where I want is affected by the alcohol, and I end up throwing more wildly as time goes on.   Now the accuracy increases the more times I play in the pub, because my brain learns and causes me to get a bulls-eye more often.
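A small numeric sketch (with illustrative readings of my own) makes the distinction concrete: one set of measurements is tightly clustered but biased away from the true value (precise but inaccurate), while the other is scattered but centered on it (accurate but imprecise):

```python
import statistics

true_value = 10.0

# Precise but inaccurate: readings cluster tightly around the wrong value.
precise_biased = [12.1, 12.0, 11.9, 12.0, 12.1]
# Accurate but imprecise: readings scatter widely around the true value.
accurate_noisy = [8.0, 12.5, 9.0, 11.0, 9.5]

for label, readings in [("precise/biased", precise_biased),
                        ("accurate/noisy", accurate_noisy)]:
    spread = statistics.stdev(readings)                 # low spread = high precision
    bias = abs(statistics.mean(readings) - true_value)  # low bias = high accuracy
    print(f"{label}: spread = {spread:.2f}, bias = {bias:.2f}")
```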

The fact that EVA measurements are precise can fool you into thinking they are accurate.  If someone says they are 90% completed with their work, you can use this data to say that the SPI is 90%.   But what if their own self-assessment is wrong?  Or what if they don’t want to admit that they aren’t as far along as they should be?   If someone tells me that the completion of their work is “just over the horizon”, that might make me feel better–until I look up the word “horizon” in the dictionary and see that one of the definitions is “an imaginary line that gets farther away from you the closer you approach to it.”   Well, that’s not comforting at all, is it?

And it gets worse.   Let’s say that the person IS actually telling the truth and that the work is 90% done.   They turn in the work and then the testing is done, first of the module itself and then of the entire system with the module added (an integration test).   It doesn’t work as it is supposed to, and it has to be reworked.

Okay now it is re-tested and it works fine.   It is delivered to the customer and the customer says “that’s not what I ordered.”   The user story was not clear or objective enough, and what the customer thought was being ordered isn’t what the team thought.  This is why you don’t use subjective acceptance criteria like “I want the product to look nice” because there’s a LOT of room for disagreement about what “looks nice” means.

So even if, from your standpoint, you are within the budget and/or schedule, in the end it doesn’t matter if the work that was done does not add value from the customer’s standpoint.

So, as opposed to earned value analysis, which is a precise measurement tool but may have problems with accuracy, measurement in agile projects uses tools that may not at first glance seem precise from the standpoint of traditional project management, because they deal with units (user story points) that seem more subjective than dollars and cents.   But these tools are more accurate in that they are focused on actual customer value.

With that background of measurement in traditional project management in mind, let’s turn in the next post to characteristics of measurements in agile projects.

Troubleshooting Agile Project Challenges (5)–Testing Implementation


On pages 58 and 59 of the Agile Practice Guide, there are twenty-one challenges or “pain points” described together with the suggested solution(s) to the problem.   However, they are listed in a random, laundry-list fashion without much rhyme or reason to the order.  So what I have done is reviewed all the suggested solutions and grouped those challenges that require the same type of solution.   These five types of solution are:

  1. Production of agile charter
  2. Product backlog/user story definition
  3. Kanban boards
  4. Focus on team roles/responsibilities
  5. Testing implementation

I have already covered the first three solutions, production of an agile charter, production and maintenance of a product backlog, and the use of kanban boards, in the past three posts.

The production of the agile charter was essential for challenges dealing with the context of the project (what is the product vision?, what is the organization’s mission for doing the project?, etc.).

The production, refinement, and maintenance of the product backlog helped with challenges dealing with the requirements of the product and how these are reflected in the user stories that make up the product backlog.   It also showed how to adjust the user stories that comprise that backlog in order that the team not bite off more than it can chew during any one iteration.

And the challenges dealing with the process of the project, i.e., doing the work and then reviewing it during retrospectives, were met by the use of kanban boards to visualize those processes, which then in turn makes it easier to pinpoint bottlenecks or barriers so they can be focused on and removed.

The fourth series of challenges dealt with problems that could be solved by clarifying the roles of individual team members (having them work on cross-functional vs. siloed teams, for example), as well as the critical roles of product owner and servant leader.

Finally, I’m covering the last two challenges (out of the total 21 challenges presented on pages 58 and 59).   These have the common solution of paying attention to the implementation of testing after the design of a particular feature is complete.    Previous solutions have focused on the completeness of the work, which in traditional project management terms would be the Scope Management domain; this solution set is focused on the correctness of the work, which in traditional project management terms would be the Quality Management domain.   Specifically, it deals with quality control, the correctness of the product itself (does it meet the expectations of the customer?).   The other aspect of quality, quality assurance, deals with the correctness of the process, and this is generally taken care of during the retrospectives at the end of each iteration (the use of kanban boards to visualize this process is helpful in this regard).

Now, here are the two remaining challenges or “pain points” that the Agile Practice Guide discusses in its chart.

  1. Defects–focus on technical processes using techniques such as:
    • Working in pairs or groups
    • Collective product ownership
    • Pervasive testing (including test-driven and automated testing approaches)
    • A robust definition of “done”, i.e., acceptance criteria
  2. Technical debt (degraded code quality)–like the response to the challenge listed above, focus on technical processes using techniques such as:
    • Code refactoring–the process of clarifying and simplifying the design of existing code, without changing its behavior.   This is needed because agile teams often maintain and extend their code from iteration to iteration, and without continuous refactoring, this is difficult to do without adding unnecessary complexity to the code (which in itself can increase risk of defects).
    • Agile modeling–a methodology for modeling and documenting software systems based on best practice.
    • Pervasive testing–involves the cross-functional participation of all quality and testing stakeholders, both developers and testers
    • Automated code quality analysis–checks source code for compliance with a predefined set of rules or best practices.
    • Definition of done–acceptance criteria that a software product must satisfy in order to be accepted by the user or customer.   This prevents features that don’t meet the definition from being delivered to the customer or user.

This concludes this series of posts on the various challenges that may be encountered on an agile project and the various solutions that can be referred to as troubleshooting possibilities for those challenges.

The next section of this chapter, section 5.4, deals with measurements in agile projects.  This replaces the earned value analysis used on traditional project management projects, and will be the subject of the next series of posts …

Troubleshooting Agile Project Challenges (4)–Clarifying Team Roles and Responsibilities


On pages 58 and 59 of the Agile Practice Guide, there are twenty-one challenges or “pain points” described together with the suggested solution(s) to the problem.   However, they are listed in a random, laundry-list fashion without much rhyme or reason to the order.  So what I have done is reviewed all the suggested solutions and grouped those challenges that require the same type of solution.   These five types of solution are:

  1. Production of agile charter
  2. Product backlog/user story definition
  3. Kanban boards
  4. Focus on team roles/responsibilities
  5. Testing implementation

I have already covered the first three solutions, production of an agile charter, production and maintenance of a product backlog, and the use of kanban boards, in the past three posts.

The production of the agile charter was essential for challenges dealing with the context of the project (what is the product vision?, what is the organization’s mission for doing the project?, etc.).

The production, refinement, and maintenance of the product backlog helped with challenges dealing with the requirements of the product and how these are reflected in the user stories that make up the product backlog.   It also showed how to adjust the user stories that comprise that backlog in order that the team not bite off more than it can chew during any one iteration.

And the challenges dealing with the process of the project, i.e., doing the work and then reviewing it during retrospectives, were met by the use of kanban boards to visualize those processes, which then in turn makes it easier to pinpoint bottlenecks or barriers so they can be focused on and removed.

Today I’m covering the fourth group of challenges, which can be faced by using a common solution, namely, focusing on and clarifying the team’s roles and responsibilities, especially those of the product owner and the servant leader (whatever title that person goes by, such as scrum master, project manager, etc.).

Here are the five challenges that can be faced by focusing on the team’s roles and responsibilities.

  1. Team struggles with obstacles–The servant leader should be the one focusing on and clearing those obstacles.   The servant leader should create options for the team to choose among.   If the servant leader is unable for whatever reason (such as lack of experience) to remove the obstacles, consider escalating the problem by consulting with an agile coach.
  2. False starts, wasted efforts–This is usually caused by the team’s insufficient understanding of exactly what the project mission is (what exactly is it we are trying to produce).   The product owner needs to be an integral part of these team discussions so that he or she can communicate with the customer and clarify exactly what the requirements are.
  3. “Hurry up and wait”, i.e., an uneven flow of work–This is where the clarification of the roles and responsibilities of the individual team members is important.  Plan to the team’s work in progress (WIP) capacity and no more; even consider reducing that WIP capacity if necessary.   Team members should stop multitasking (i.e., working on other projects) and be dedicated to one team.   Have the team members consider working in pairs or even in groups to even out the capabilities across the entire team, and to increase communication between team members.
  4. Impossible stakeholder demands–The servant leader needs to work together with the product owner and stakeholders to clarify the obstacles in meeting the current demands.
  5. Siloed teams, instead of cross-functional teams–If you are getting team members from managers in a specific department who are comfortable working with each other but not necessarily with those from other departments, then the servant leader needs to educate the managers on why cross-functional teams are essential to the success of an agile project.    Ask the team members on the project to work together in pairs or even in groups with team members from other departments in order to create cross-functional teams.

You can see that this set of solutions involves both the strengthening of the roles and responsibilities of the leadership on an agile team (the servant leader and the product owner), and the clarification of the roles and responsibilities of team members (they need to stop multitasking and working alone and move towards inclusion of other team members from other departments on their team).

This will help the team create the work product, which addresses the scope of the project, i.e., the completeness of the work.   What about the correctness of that completed work, which is where quality control comes in?   For issues regarding this area, see the next and final post in this series on clarification of testing.

Troubleshooting Agile Project Challenges (3)–Using a Kanban Board


I am going over the Agile Practice Guide, a publication put out by the Agile Alliance in conjunction with the Project Management Institute.    I am currently reviewing chapter 5 on the implementation of agile projects, and am now on section 5.3, Troubleshooting Agile Project Challenges.

On pages 58 and 59 of the Agile Practice Guide, there are twenty-one challenges or “pain points” described together with the suggested solution(s) to the problem.   However, they are listed in a random, laundry-list fashion without much rhyme or reason to the order.  So what I have done is reviewed all the suggested solutions and grouped those challenges that require the same type of solution.   These five types of solution are:

  1. Production of agile charter
  2. Product backlog/user story definition
  3. Kanban boards
  4. Focus on team roles/responsibilities
  5. Testing implementation

I have covered the first two solutions, production of an agile charter and production and maintenance of a product backlog, in the past two posts.   The post about the product backlog dealt with challenges to creating the product; in this post about kanban boards, we will discuss those challenges that exist within the process itself.   It turns out that kanban boards are especially useful in dealing with this type of process challenge.

Here are the three (out of a total 21) challenges that can be met by using a kanban board.

  1. Unclear work assignments or work progress–Use kanban boards to visualize the flow of work.   Consider using the kanban board during daily stand-ups, where team members walk the board and see where the work they are doing sits on it.   This will clarify both their assignments and the next step they need to take in order to move the item they are working on to the next column.
  2. Unexpected or unforeseen delays–Ask the team to check the kanban boards more often.   Have them see the flow of work and WIP (work in progress) limits to understand how these impact the demands on the team and on the product itself.  Add a track to the kanban board for listing impediments and monitor impediment removal on a regular basis.
  3. Slow or no improvement in the teamwork process–Capture no more than three items to be improved at each retrospective.   Have the servant leader use the kanban board to track these three items, and then have the servant leader make sure that the improvements are integrated into the overall process.
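A kanban board with WIP limits can be sketched in a few lines (the columns, limits, and story names are my own illustration): a card moves forward only when the next column has capacity, which is exactly what makes bottlenecks visible:

```python
# Minimal kanban board: columns with WIP limits. A blocked move is a
# visible signal of a bottleneck in the flow of work.
board = {
    "To Do":       {"limit": None, "cards": ["story A", "story B", "story C"]},
    "In Progress": {"limit": 2,    "cards": ["story D", "story E"]},
    "Testing":     {"limit": 1,    "cards": ["story F"]},
    "Done":        {"limit": None, "cards": []},
}

def move(board, card, src, dst):
    """Move a card forward only if the destination column is under its WIP limit."""
    col = board[dst]
    if col["limit"] is not None and len(col["cards"]) >= col["limit"]:
        return False  # blocked: the bottleneck at `dst` is now visible
    board[src]["cards"].remove(card)
    col["cards"].append(card)
    return True

print(move(board, "story D", "In Progress", "Testing"))  # False: Testing is full
print(move(board, "story F", "Testing", "Done"))         # True: Done has no limit
print(move(board, "story D", "In Progress", "Testing"))  # True: capacity freed
```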

The kanban board takes the dynamic processes of the project and gives a snapshot of where they all stand, so that the team members can clarify the nature of the work assignments, the impediments that prevent these work assignments from going forward, and any improvements in the process of getting those work assignments completed.

The process of setting up a kanban board will be discussed in a later post.

Now let’s go on to the next post covering the last five challenges where clarification of team roles and responsibilities, including those of the product owner and servant leader, can help them get resolved.

Troubleshooting Agile Project Challenges (2)–Production of a Product Backlog


I am going over the Agile Practice Guide, a publication put out by the Agile Alliance in conjunction with the Project Management Institute.    I am currently reviewing chapter 5 on the implementation of agile projects, and am now on section 5.3, Troubleshooting Agile Project Challenges.

On pages 58 and 59 of the Agile Practice Guide, there are twenty-one challenges or “pain points” described together with the suggested solution(s) to the problem.   However, they are listed in a random, laundry-list fashion without much rhyme or reason to the order.  So what I have done is reviewed all the suggested solutions and grouped those challenges that require the same type of solution.   These five types of solution are:

  1. Production of agile charter
  2. Product backlog/user story definition
  3. Kanban boards
  4. Focus on team roles/responsibilities
  5. Testing implementation

This second group of solutions, which centers on the creation of a product roadmap culminating in a product backlog with user stories, covers one third of all the challenges or “pain points” listed by the Agile Practice Guide.

Let’s pull back for a minute and discuss what is involved with a product roadmap.   In an agile project, you start with the product vision, which is how the customer envisions the product will look and perform.   This vision is of the finished product.   That is like saying you are going on a journey and stating what is at the final destination.   You then have to figure out how you are going to get from here, the starting point, to that final destination.   That is the product roadmap, where you start creating requirements or features that the final product should have.   For each requirement, the team and the product owner should clarify the expectations and value that the requirement represents for the customer.

The result should be a series of user stories which describe how the feature or requirement will fit into the overall final product.   The stories must strive to be as objective as possible, so the acceptance criteria for when the user story is “done” can be easily agreed upon by the customer and the team.

Once the user stories are clarified, and put into the product backlog (the equivalent of the “scope statement” in a traditional project), you can then move on to estimation of how long each story will take.   This takes the laundry list of user stories and ranks or prioritizes them according to their size (how long the completion of the user story will take) and their impact on the overall final product.   Once these factors are weighed, you can then figure out which user stories to work on first–hence the term product roadmap.

Okay, so that’s how the process is supposed to go.   If any of these steps listed above are not done carefully, problems can arise.   The list below shows the challenge or pain point an agile team can experience and its recommended remedy that has to do with the product backlog and the user stories it comprises.

  1. Poor user experience–the user needs to be involved in the process of creating the user stories that make up the product backlog
  2. Inaccurate estimation–reduce the story size by splitting up the user stories.  Use relative estimation involving the whole team in order to estimate the size of user stories.  Consider spiking (doing a focused study) to understand any story that is unclear.
  3. Work delays/cost overruns due to insufficiently refined product backlog items–the product owner and the team should work on user stories together.  Create a definition of “ready” (as in “ready to start work on”) for the user stories.  Consider splitting user stories up so that they are smaller.
  4. Work is not complete–Work on the definition of done (i.e., acceptance criteria) for user stories and for the project as a whole.
  5. Too much complexity–For each user story, encourage the team to focus on the question, “What is the simplest thing that would work?”   This uses the agile principle of simplicity, which has been described as “the art of maximizing the amount of work NOT done.”
  6. Too much rework–measure the work in progress (WIP) at the beginning of the project.  Consider team spikes (doing a focused study) to learn and focus on adding value rather than perfecting the design.    Create a robust definition of done for user stories and shorten iterations if necessary.
  7. Inefficiently ordered product backlog items–Rank the user stories not just by their duration, but also by the value that they will contribute to the final product.
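The last point can be sketched in a few lines of Python (the stories and numbers are my own illustration): rank each user story by the value it contributes per story point, rather than by size alone:

```python
# Hypothetical backlog: order stories by value delivered per unit of size
# (story points), so high-value, low-effort stories come first.
backlog = [
    {"story": "password reset", "points": 3, "value": 9},
    {"story": "export to PDF",  "points": 8, "value": 8},
    {"story": "dark mode",      "points": 5, "value": 2},
]

ordered = sorted(backlog, key=lambda s: s["value"] / s["points"], reverse=True)
for s in ordered:
    print(f'{s["story"]}: {s["value"] / s["points"]:.2f} value per point')
```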

You can see that a common theme of these solutions is to make sure that the entire team, including the product owner, works on the creation of user stories, to reduce the complexity and/or size of the user stories if possible (especially at the beginning of the project), and to create acceptance criteria (i.e., the definition of “done”) for these stories.  If you can make sure to incorporate these practices in the creation of the product backlog and the management of the user stories that it comprises, many of the issues or challenges listed above can be avoided or at least minimized.

The next set of solutions deals with the clarification not of the product (which is what the product backlog is for), but of the process, and Kanban boards are an excellent tool for dealing with challenges in this area.   These challenges addressed by Kanban boards will be the subject of the next post.

Troubleshooting Agile Project Challenges (1)–Production of an Agile Charter


I am going over the Agile Practice Guide, a publication put out by the Agile Alliance in conjunction with the Project Management Institute.    I am currently reviewing chapter 5 on the implementation of agile projects, and am now on section 5.3, Troubleshooting Agile Project Challenges.

On pages 58 and 59 of the Agile Practice Guide, there are twenty-one challenges or “pain points” described together with the suggested solution(s) to the problem.   However, they are listed in a random, laundry-list fashion without much rhyme or reason to the order.  So what I have done is reviewed all the suggested solutions and grouped those challenges that require the same type of solution.   These five types of solution are:

  1. Production of agile charter
  2. Product backlog/user story definition
  3. Kanban boards
  4. Focus on team roles/responsibilities
  5. Testing implementation

This is the first post, so I will cover the four challenges (out of the total 21) that require the production of an agile charter.

Here are the pain points and their suggested solutions.

  1. Unclear purpose or mission for the team–create an agile charter which includes the vision (why the product of the project is being produced and what need it fulfills) and the mission (what benefits your organization stands to gain by doing the project)
  2. Unclear working agreements for the team–create an agile charter that includes the values of the team, the principles, and the working agreements (how will meetings be conducted, how will people communicate with each other, etc.)
  3. Unclear team context–create an agile charter that explains the boundaries or constraints (e.g., is there a hard deadline for the project), and the committed assets (both physical resources and human resources).
  4. Unclear requirements–create an agile charter that crafts a product vision (as mentioned above, why the product of the project is being produced).   The team and the product owner (who looks out for the interest of the customers and stakeholders) should clarify the expectations for the product, and decompose this into tracking a list of smaller, concrete requirements.

As you can see, going through the agile chartering process solves a number of problems down the line.    The biggest mistake people make when doing a project is rushing to the planning (design) phase before spending sufficient time in the initiating phase, which includes the production of a project charter.

In the next post, the next type of solution will be discussed:   paying attention to the product backlog and the definition of the user stories it contains.   This process addresses the largest group of challenges or pain points of any of the five listed above:   seven, to be exact.   These seven challenges will be discussed in the next post.