Random Acts of IT Project Management

Project Management for Information Technology

Posts Tagged ‘Waterfall’

Waterfall vs Agile

Posted by iammarchhare on 21 July 2009

I frequently like to compare waterfall and Agile methodologies, or perhaps "mindsets" would be a better term.  Yet I realize that my descriptions are colored a lot by my own experiences.  So, from time to time, I like to point you to other sources that take a different approach or describe the differences in other terms.  I hope that gives the reader a more well-rounded look at things.

On 18 February 2009, Robert Merrill posted "A Tale of two processes".  He starts out describing "How to create software" by writing:

Let’s create some software value. It’s very simple.

  • You tell the programmers what you want the software to do
  • They create it
  • You verify that it does what it’s supposed to
  • You let people start using it, and out pours the value.

Sounds simple, right?  Well, in a nutshell, he has summed up how waterfall is supposed to work.

I think the contrast he goes on to draw is interesting and a worthwhile read.  You can read his article here.

Posted in Agile, SDLC, Software, Waterfall | Comments Off on Waterfall vs Agile

Better Estimating Through Software Sizing

Posted by iammarchhare on 18 May 2009

How do you know how long it will take?  Gut feel is how many people do their estimating.

Late last year, I attended a webinar that intrigued me.  I had heard of function point analysis (FPA) before, but I didn’t know much about it.  I decided to look into it more.

Software sizing is the software engineering term for estimating the size of a piece of software (whether a component or an entire application).  These estimates can then be used in project management activities.  Software sizing processes are a requirement for CMMI level 2.

Lines of Code

One of the original measurements for coding projects was Lines of Code (LOC).  When procedural languages were the norm, it gave a rough estimate of effort based upon the developer’s output.  With OO software, though, it is a less useful measure, and so it has fallen out of favor in recent times.
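
To make the LOC-to-effort connection concrete, here is a minimal sketch of Basic COCOMO, a classic LOC-driven effort model from the same era.  The post doesn’t name COCOMO, so treat this as an illustrative aside; the coefficients are the published Basic COCOMO values, and the 32 KLOC input is invented.

```python
# A minimal sketch of Basic COCOMO, a classic LOC-driven effort model.
# The (a, b) coefficients are the published Basic COCOMO values;
# the 32 KLOC input below is invented for illustration.

COEFFICIENTS = {
    # mode: (a, b) in effort = a * KLOC ** b  (person-months)
    "organic": (2.4, 1.05),        # small teams, familiar problems
    "semi-detached": (3.0, 1.12),  # in between
    "embedded": (3.6, 1.20),       # tight constraints
}

def cocomo_effort(kloc: float, mode: str = "organic") -> float:
    """Estimate effort in person-months from size in thousands of LOC."""
    a, b = COEFFICIENTS[mode]
    return a * kloc ** b

print(f"{cocomo_effort(32):.0f} person-months")  # ~91 for a 32 KLOC organic project
```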

In the 1970s, IBM tapped Allan Albrecht to come up with a better tool for estimation, and the result was published in 1979.  His method measures software based upon 5 areas: internal logical files, external interface files, external inputs, external outputs and external queries.  The Code Project has a two-part posting that goes into more detail on function point analysis.  Unfortunately, it appears additional installments were planned but never appeared.
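
As a rough illustration of how those 5 areas combine into a size number, here is a minimal sketch of an unadjusted function point count.  The low/average/high weights are the standard IFPUG values; the sample counts and the value adjustment factor total are invented for illustration.

```python
# A minimal sketch of counting unadjusted function points over Albrecht's
# five component types. The weights are the standard IFPUG low/average/high
# values; the sample counts below are made up for illustration.

WEIGHTS = {
    "external_inputs":          {"low": 3, "avg": 4,  "high": 6},
    "external_outputs":         {"low": 4, "avg": 5,  "high": 7},
    "external_queries":         {"low": 3, "avg": 4,  "high": 6},
    "internal_logical_files":   {"low": 7, "avg": 10, "high": 15},
    "external_interface_files": {"low": 5, "avg": 7,  "high": 10},
}

def unadjusted_fp(counts: dict) -> int:
    """counts maps component type -> {complexity: number of components}."""
    return sum(
        WEIGHTS[ctype][cx] * n
        for ctype, by_cx in counts.items()
        for cx, n in by_cx.items()
    )

# Hypothetical counts for a small application:
counts = {
    "external_inputs":          {"low": 6, "avg": 2},
    "external_outputs":         {"avg": 4},
    "external_queries":         {"low": 3},
    "internal_logical_files":   {"avg": 2},
    "external_interface_files": {"low": 1},
}
ufp = unadjusted_fp(counts)  # 6*3 + 2*4 + 4*5 + 3*3 + 2*10 + 1*5 = 80
vaf = 0.65 + 0.01 * 30       # 14 general system characteristics, rated 0-5 each
print(ufp * vaf)             # adjusted function points = 80 * 0.95 = 76.0
```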

One of the complaints leveled against such measurement is the amount of time it requires.  However, an experienced person can document one person-year’s worth of effort in about one day.  While some criticisms of function point analysis may be valid, “others are excuses for avoiding the hard work of measurement” (Total Metrics).  Far too many organizations would skip procedures in the estimation process even if they took only an hour, because they would “take too long”.

To me, the biggest disadvantages are the need for historical measurements and for specialized training.  Industry standards can substitute for historical measurements, though you will lose the influence of your own organization’s maturity.  Training and experience increase the accuracy of the estimator’s estimates.

You also have a catch-22: functional requirements need to be detailed enough to make an accurate estimate.  No matter the method of estimating, though, you’ll have this problem.  Estimates are improved through the progressive elaboration of requirements.

Both of these disadvantages are quite likely to discourage, rather than encourage, a more systematic approach to estimation.  In addition, FPA is not without its critics for other reasons.  For one thing, best practices in software and the way software is developed are pretty far removed from the 1970s, when FPA was created.

In addition, project management itself has changed a lot since then.  The concept of FPA might have worked fine with monolithic waterfall projects.  However, with the adoption of Agile by many organizations, such detailed upfront analysis inhibits change rather than encouraging it.

Use Case Points

One alternative to pure FPA is estimation built upon the number and complexity of use cases.  There are tools that can make this much easier, and anyone who already understands use cases can put together an estimate with little additional training.
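
As a rough sketch of how this works, here is the Karner-style use case point calculation that the Cohn article in the sources describes.  The weights and formulas are the standard published ones; the sample counts and factor totals are invented.

```python
# A minimal sketch of Karner-style use case points. Weights and formulas
# are the standard published ones; the sample inputs are invented.

ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tfactor, efactor, hours_per_ucp=20):
    """actors/use_cases map complexity -> count; tfactor/efactor are the
    weighted sums of the 13 technical and 8 environmental factors."""
    uaw = sum(ACTOR_WEIGHTS[c] * n for c, n in actors.items())
    uucw = sum(USE_CASE_WEIGHTS[c] * n for c, n in use_cases.items())
    tcf = 0.6 + 0.01 * tfactor       # technical complexity factor
    ecf = 1.4 - 0.03 * efactor       # environmental complexity factor
    ucp = (uaw + uucw) * tcf * ecf
    return ucp, ucp * hours_per_ucp  # Karner suggested ~20 hours per UCP

ucp, hours = use_case_points(
    actors={"simple": 2, "average": 2, "complex": 1},
    use_cases={"simple": 4, "average": 6, "complex": 2},
    tfactor=40, efactor=15,
)
print(f"{ucp:.1f} UCP, ~{hours:.0f} person-hours")  # 113.1 UCP, ~2261 person-hours
```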

There is a Windows tool for estimating size with use cases called EZ Estimation at http://ezestimation.googlepages.com/.  I downloaded it, and it looks like a pretty decent estimator for use during the requirements gathering phase.

Conclusions

A good plan today is better than a perfect plan tomorrow.

~ George S. Patton

One thing to keep in mind is that any initial estimate is going to be wrong.  That is why the PMBOK points out progressive elaboration.  One place I worked realized this and broke all but the smallest of projects into an analysis phase and a construction phase.  The gate into the construction phase was how the estimate coming out of analysis stacked up against the original one, and whether or not the project was still worth doing.

The beauty of Agile, of course, is that estimates are adjusted as more is learned, so they grow steadily more accurate over the life of the project.
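
A minimal sketch of that kind of adjustment, using observed velocity to re-forecast a backlog sprint by sprint (the numbers are hypothetical, not from the post):

```python
# A minimal sketch of re-forecasting as a team learns: each completed sprint
# updates observed velocity, which refines the projected finish.
# Numbers are hypothetical.

backlog_points = 240
completed = [18, 22, 25, 24]  # story points finished in sprints so far

remaining = backlog_points - sum(completed)
velocity = sum(completed) / len(completed)  # observed average velocity
sprints_left = remaining / velocity

print(f"{remaining} points left, ~{sprints_left:.1f} sprints to go")
```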

If time allows, however, it would seem prudent to do enough analysis upfront to hit that middle ground of estimation, so that less is left out in the end.  Writing use cases and using estimating tools based upon them seems to me the most reasonable approach.  The larger the project, the more this approach might make sense.  In an Agile environment, this would be done once to get the best possible overall estimate, but user stories, backlogs and adjustments after a sprint would still be carried out as normal.  The key would be “appropriate detail” in use cases.

I would love to hear anyone’s experience with these.  I have a feeling it depends a lot upon the type of project, the type of customer and the overall project size.


Sources:

  1. Buglione, Luigi.  (25 July 2008).  Functional Size Measurement.  Retrieved 12 May 2009 from http://www.geocities.com/lbu_measure/fpa/fpa.htm.
  2. Cohn, Mike.  (2005).  Estimating With Use Case Points.  Retrieved 12 May 2009 from http://www.methodsandtools.com/archive/archive.php?id=25.
  3. Function point.  (n.d.).  Retrieved 12 May 2009 from http://en.wikipedia.org/wiki/Function_points.
  4. s.kushal.  (11 Mar 2007).  Function Point and Function Point Analysis.  Message posted to http://www.codeproject.com/KB/architecture/Function_Point.aspx.
  5. Software Composition Technologies, Inc.  (June 2003).  Function Point FAQ.  Retrieved 12 May 2009 from http://www.royceedwards.com/floating_function_point_faq/about_function_point_analysis.htm.
  6. Software Sizing.  (n.d.).  Retrieved 12 May 2009 from http://en.wikipedia.org/wiki/Software_Size.
  7. Total Metrics.  (June 2007).  Methods for Software Sizing: How to Decide which Method to Use.  Retrieved 12 May 2009 from http://www.totalmetrics.com/function-point-resources/downloads/Why-use-Function-Points.pdf.

Posted in Estimating, PM Basics | Comments Off on Better Estimating Through Software Sizing

The Beast Called “Waterfall”

Posted by iammarchhare on 1 May 2009

There seems to be an unofficial theme this week of pointing out faulty lines of thinking.  I honestly didn’t plan it to come out this way, but sometimes patterns emerge before we become cognizant of them.

This week started off talking about “What is Agile Project Management?”  Basically, describing Agile as a “methodology” doesn’t make much sense.  Rather, it is an umbrella, or better a philosophy, under which sit several methodologies (Scrum, XP, etc.).

On Tuesday, in “Where Is the Time?”, I pointed out some faulty beliefs that reveal a lack of proper priorities.

Wednesday’s post was on “Web 3.0, Anyone?”  While the technology is emerging to do some really cool stuff, a lot of it just isn’t here yet.  Somehow, though, I saw job postings for people “experienced in Web 3.0”.

Thursday’s post was “Don’t Make It Hard!”  It was mainly about how people can make things harder than they need to be.  It also scratched the surface of how a team’s reaction can make the problems worse.  Finally, I used an example about implementation: it isn’t the implementation itself that is hard, but rather the sequence of steps and missteps leading up to it.

It seems appropriate today to come full circle and examine the waterfall method.  Just like the other posts this week, this one was prompted by something I read that made me say “No!” out loud.

In case you are new around here, let me summarize from my own Associated Content article on “Agile Project Management” just how I feel about the waterfall:

Like a dinosaur, the waterfall methodology is large, cumbersome and slow. If it trips and falls, it makes a very large noise. Unfortunately, the waterfall is not like a dinosaur, as the latter is extinct.

So, what do you think went through my mind when I read that Steve McConnell, of all people, wrote about a company that “embraced Extreme Programming (XP) as the development approach”?  Yet:

Development went on for about two years. While the team was being highly responsive to customer input, that wasn’t good enough. The cumulative total of its work was not converging to anything resembling a saleable product. Eventually the company concluded that the team was never going to produce a product, at which point most of the 200 people were laid off and the company reported a $50 million loss on the project.

~ McConnell, Steve.  (29 July 2008).  Agile Software: Business Impact and Business Benefits.  Retrieved 29 April 2009, from http://forums.construx.com/blogs/stevemcc/archive/2008/07/29/agile-software-business-impact-and-business-benefits.aspx.

The real tip-off here is that nothing was “converging” to make a “saleable product”.  Say what?

Sometimes life is a lot like Alice in Wonderland (or at least the Disney version).

Alice: Would you tell me, please, which way I ought to go from here?

Cheshire Cat: That depends a good deal on where you want to get to.

Alice: I don’t much care where–

Cheshire Cat: Then it doesn’t matter which way you go.

Alice: –so long as I get somewhere.

Cheshire Cat: Oh, you’re sure to do that, if you only walk long enough.

George Dinwiddie posted an article on his blog, “Agility and Predictability”, where he addresses the predictability factor.  However, I want to focus on the fact that the product owner in the above example “didn’t have a vision for a salable product.”

I hope many of you got the chance to attend yesterday’s webinar, “Agile Requirements (Not an Oxymoron)” by Ellen Gottesdiener of EBG Consulting.  She pointed out you need a “now-view” at the iteration level, a “pre-view” at the release level and a “big-view” at the product level.  It is debatable whether the team in McConnell’s example even had a release-level view, but it is certain that they did not have a “big-view” of the end product.

Agile is no excuse for not knowing where you are going.  Agile won’t help you get there if you don’t know the destination.  The methodology cannot help you if you are simply wandering around Wonderland.

In fact, Agile is not a cure-all by any means.  If anything, Agile will tend to uncover organizational and procedural deficiencies much sooner than the waterfall method.  However, regardless of the methodology, if an organization buries its head in the sand, it won’t work out.  One of the main features of Agile is evaluating performance at the end of a sprint.  Like any evaluation, though, individuals and teams can gloss over and politic, or they can admit mistakes and make adjustments.

Waterfall will cover up mistakes until the very end, when suddenly, everyone realizes the project is behind schedule.  It’s almost inevitable on any project of any complexity or of any significant size.  So, yeah, waterfall is more “predictable”.  It is more predictable that it will fail!

When does waterfall work?  Waterfall works when the results and the processes to get there are well known.  If you are setting up servers just like the 20 you set up last quarter, then the waterfall approach will most likely be suitable.  When there is very little randomness, it is much easier to predict the end.

Waterfall also works when the duration is shorter than 4 weeks (not including project management paperwork).

In other words, waterfall sometimes works when its one iteration is about the same size as recommended for one or two Agile sprints!

When you think about that, about how there is no real feedback until the end of the cycle, how little testing is done until the end, and about “progressive elaboration”, why would you tie your cart to the waterfall horse?

Posted in Agile, PMBOK, SDLC | Comments Off on The Beast Called “Waterfall”

What is Agile Project Management?

Posted by iammarchhare on 27 April 2009

When is a methodology not a methodology?

The term “Agile” has become such a buzzword that it has lost its meaning in the current development environment.  To paraphrase an old joke, if you ask 4 IT managers what Agile is, you’ll get 5 different answers.  Some use “Scrum” and “Agile” interchangeably, but is that the correct usage of these terms?  What is the difference between “Scrum”, “XP” and “Agile”?

Read more about Agile Project Management at Associated Content.

Posted in Agile, PM Basics, SDLC | 1 Comment »

Webinar: Agile Cuts Costs

Posted by iammarchhare on 13 April 2009

I watched part 1 of “Agile Cuts Costs”, a webinar by Rally Software and GlobalLogic.  It is a pretty good high-level Agile presentation that is free to view.  If you find yourself needing to justify Agile project/program management, or wanting a high-level view of how to move into Agile, then I suggest you check it out.  The presentation is given by Jean Tabaka of Rally and Johnny Scarborough of GlobalLogic.  It is just over an hour long, so I picked out the points that hit home with my own experience.

Signing up and getting the presentation going was more annoying than it needed to be.  I had to fill out a form, which wasn’t totally unexpected.  However, I have no desire to give you my phone number.  Don’t call me.  It’s too early yet to know if I will get inundated with spam, but at least an email address is a normal thing to ask for.  The player has a clunky interface (I didn’t know I could minimize the silly audio control window, which pops up right in the way and cannot be moved entirely off the screen).  Once I got beyond this, though, things got dramatically better.

Jean and Johnny go into the importance of metrics, particularly the productivity index (PI).  “Productivity is often the most difficult measure to improve in regards to software development.”  PI for large, distributed teams can be lower, but it does not have to be.

The presentation shows that defect count can be kept under control even though schedules are shortened more than 50% by using Agile.  This would depend a lot upon the maturity of the organization, though.  I should point out that I have seen other presentations that show it doesn’t always mean faster delivery or reduced cost, but even those presentations show that it raises quality.  Let’s face it, you do save time and money when you aren’t fixing numerous defects.  However, as a team matures, there is no reason to believe the schedule would not be shortened as efficiencies become habit.

They only briefly mention the importance of time boxing.  From surveying the literature and from my own experience, I have found that this is at the heart of Agile.  Getting that “flow” is very important.  However, flow is just the first level.

Teams often think that meeting a 2-week iteration is next to impossible and that nightly builds and testing are too expensive.  However, the presentation points out that it is actually too expensive to not do nightly builds.  I have found that there are pros and cons to nightly builds, but if you want to expose bad habits and team weaknesses early on so you can close those gaps, there are few better ways to find them than nightly builds and tests.
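
For what that looks like in practice, here is a minimal sketch of a nightly build-and-test job.  The commands are hypothetical, not from the webinar; a real team would use its own build tool or CI server.

```python
# A minimal sketch of a nightly build-and-test job (hypothetical commands;
# real projects would use their build tool or CI server of choice).
# Schedule it with cron, e.g.:  0 2 * * *  /usr/bin/python nightly.py

import subprocess
import sys

STEPS = [
    ["git", "pull", "--ff-only"],  # sync to the latest source
    ["make", "build"],             # compile / package
    ["make", "test"],              # run the automated test suite
]

for step in STEPS:
    result = subprocess.run(step)
    if result.returncode != 0:
        # Fail loudly so the team sees the gap the next morning.
        print(f"NIGHTLY FAILED at: {' '.join(step)}", file=sys.stderr)
        sys.exit(result.returncode)

print("Nightly build and tests passed.")
```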

I agree 100% with the importance of a “step-by-step approach” to Agile adoption.  This cuts to the heart of my personal experience moving teams and processes to Agile practices.  Culture doesn’t change overnight, and Agile is usually as much a culture change as anything else.  It takes about 3 months, or 6 iterations, to go from the amateur Agile level (“flow”) and mature to the intermediate level (“pull”).  It takes 12 – 24 months to attain the expert level (“innovate”), in part because that requires maturity across the organization.  Agile cannot “sit in engineering” and be at the expert level.  I have to believe that larger organizations could take even longer to adjust, but I could be wrong about that.

The risk inherent in the waterfall method is something that is often overlooked.  I really appreciated this slide.  It also shows how the risk of failure drops rapidly using Scrum.  This agrees with other presentations I have seen, where quality is higher with Agile than with waterfall methods.  Quality and risk of failure often have an inverse relationship.

One thing that cannot be stressed enough is buy-in.  This was mentioned quite a few times in different ways, but it is vital to have executive interest in Agile.  The team has to buy in on Agile.  Old habits cannot be allowed to resurface, and the retrospective meetings (“lessons learned” for us old-timers) must result in the implementation of needed changes.

Having tools that can provide measurements automatically will free people up to concentrate on the work itself.  Johnny says, “Don’t skrimp [sic] on the tools.  You have to invest in the tools, especially when you are going into distributed development.  Post-It notes don’t scale out across time zones.”

There is a lot more to the presentation.  I only highlighted the parts I found interesting or have suffered past pain with.  I suspect part 2, which covers scalability, will be just as good.

Posted in Agile | 2 Comments »