Retrospective Results

Below are the results of the post-conference retrospective. Some of these are self-evident, but a few prompted deeper discussion.

Slide 1

We discussed how recording the event as a podcast may impede conversation and open dialog. People may be less willing to speak up if their view runs contrary to, say, their employer's position.

We also discussed the idea of inviting additional experts; while we did reach out to people, we want open dialog, and if one or two attendees are seen as the anointed authorities in the area, that may again stifle open conversation.

Lastly, we discussed having a statement of resolution. Agile Dialogs isn't about trying to find one concluding answer, but simply about allowing each side to discover more about the other. Having a conclusion would make it more like a debate than a dialog.


The Next Agile Dialogs

Towards the end of the day, we explored other possible topics and gathered input on good candidates for the next Agile Dialogs unconference. The call for participation may not use exactly this wording. The number after each topic is the number of votes it received; topics with no number received no votes.

Imposed vs Invitational Agile (Transformation) – 5

Pairing/Sharing Work/Swarming – 4

Leader as Servant, Facilitator, Controller, or Overseer? – 3

Does Colocation Matter? Is it feasible? – 3

Agile Assessments and Maturity – 3

Kill the Performance Review – 2

Approaches to Scaling Agile – 2

Hierarchies vs Other Organizational Structures (or are Hierarchies of Humans Natural?) – 2

What is “After” Agile? – 1

Games? Gamification? Good or Bad for Work? – 1

Dangers of Having Experts – 1

Titles Matter: “Master”, “Owner”, “Resources” – 1

Promise & Peril of the Various “DDs” – TDD, ATDD, BDD

Certification vs Learning; Which Certifications (if any) Are Ideal?

Introverts/Extroverts – Does it Matter?

Organizational Culture/Structures That Enable Dysfunction to Thrive

Change Agents vs Change Resistants

Agile: Strategy or End-Game?

Assume Agile or Understand Agile?

Managers: Hindrance or Enablers?

Release Planning – Necessary?

 

Learnings

Since we didn’t have anyone at the extremes, we elected to explore the learnings a bit differently as a group. We formulated two similar questions about what one side could learn from the other. Originally we planned to break into the two camps people indicated they leaned towards, but since we didn’t have that division, we adapted the questions to explore each side.

What would you tell a “Proestimates” person they can learn from the “No Estimates” crowd?

Remember, these are learnings that may be helpful; we’re not asking you to give up your need to estimate for any length of time, though there is a request to try dropping estimates at least once and see what it is like with your team.

  • Estimates have a cost. Make sure the value is greater than the cost.
  • Have you assessed if your estimations actually provide value?
  • Treat estimates (of value in particular) as a hypothesis
  • A team gets more self-organized gradually, so estimates will increase in both accuracy and precision over time and yet also become less necessary
  • Estimation does not encourage sharing of risk between parties (it provides a scapegoat mechanism)
  • Estimation creates longer feedback cycles
  • The very way estimation is done matters, and some ways can turn people off to it.
  • Not estimating does not mean not planning/No estimates ≠ No planning (you change the focus of what you talk about)
  • Question assumptions
  • I don’t care if the project ends tomorrow (invokes a head in the sand attitude – the estimate produces false confidence of project necessity)
  • Effective measurement systems can improve estimates (and reduce the need for them); estimation can reduce the desire to improve the measurement system
  • Keep estimating, but ensure it is really the right level of fidelity and not overkill
  • Stay focused on working software
  • Delivering working software is more fun than doing so with estimates – try it before judging
  • It’s OK to acknowledge you don’t have perfect knowledge
  • Know alternatives to answering questions other than just estimates
  • Ask WHY you are estimating/Question WHY you are asking for or providing estimates
  • Reducing the time to estimate can give you more time towards delivery
  • Devolution is real, meaning estimation is not a necessity under all circumstances, don’t stay stuck in the past

What would you tell a “No Estimates” person they can learn from the “Proestimates” crowd?

Remember, these are learnings that may be helpful, we’re not asking you to take up doing estimates, though as you’ll see estimation gets a bad rap due to other issues in the system.

  • The conversations that take place during estimating can help a team arrive at a shared understanding of the work
  • Don’t discard value that can come from estimates and the estimating process
  • Estimates can drive learning of a shared understanding of a task
  • Estimation makes teams better prepared for clarity and visibility
  • You need to do something to set goals/expectations
  • Estimates are scapegoats for other organizational dysfunctions (such as their misuse)/don’t throw out estimates and keep broader organizational dysfunction
  • Estimates can show many different things that could be amiss in a team/organization
  • Points, time, & $ are not the only things we estimate
  • The right estimation enables better release planning
  • Lots of implied estimations occur
  • Many of our assumptions (e.g. what is most valuable) are intuitive estimates
  • Unless you measure, how will the team manage?
  • Measure more
  • Focus more on testable hypotheses if not estimating

What Assumptions Do We Make When We Don’t Estimate?

Next up, in the 4th session: what assumptions are we making when we decide to skip estimation? This produced some interesting results, and we realized that some assumptions are shared between the two sides, though there were subtle differences in how they get worded. As before, we explored the first set, then discussed those we felt differed significantly.

  • That we know enough to do the right thing
  • The team gets more value from starting work sooner
  • That our work is emergent in nature
  • We can commit to a high level objective versus a detailed plan
  • We know the value of what we’re building or have a testable/measurable hypothesis of value
  • That what we are doing is actually not some form of estimating
  • Underestimating that is just another easy one, but not
  • Heroics are a given (to get the work done)
  • The estimates are wrong or don’t provide value

  • We can deliver working software frequently
  • Estimates are wrong so why bother
  • We have a collective sharing of the risk (with the business or person hiring us)
  • We can properly measure progress or ROI
  • That dependent items can be identified just in time (timed w/our release)
  • We don’t need to help someone else understand why they should spend their $|£|¥|€|rubles
  • There are limits to our knowledge (and we don’t always know those limits)
  • That we don’t need them
  • We’ve defined the system into a predictable state of behavior
  • We need to discover what we need to do
  • The team understands what the highest priority work is and can make a plan without estimation

What Assumptions Do We Make When We Estimate?

The 3rd session began our exploration of the assumptions underneath making estimates. We explored the first 12 in detail, then hit the ones that significantly differed from those 12 in the list below the line. (People voted on their individual top assumption by placing it above the line.)

  • Stable (long-standing) teams that have everyone and everything they need to start/continue working
  • The estimate is more valuable than the time spent generating it
  • That the estimation in and of itself adds value
  • The customer knows what they need
  • That we know all there is to know
  • The stories are defined and the team understands the relative complexity of the work involved (including dependencies)
  • That we understand what we are estimating
  • That the Definition of Ready is correct
  • That estimated cost = value (as reflected by EVM)
  • If estimating “when”, that we know how many working hours there are (assuming fixed feature set)
  • We have a model (for estimating) that is “useful”
  • Team’s competency and past experiences for dealing with similar stories

  • We understand “it” well enough to estimate
  • Confidence in estimate
  • That we’re somewhere close to right
  • That we’re probably wrong/they are wrong
  • That further analysis will improve on our initial intuition
  • We know enough about the work to make relative estimates
  • We will actually use the estimate to make decisions
  • That we know our throughput
  • Dependencies of our gives and gets from other groups
  • Unknowns as risks
  • That past experience actually informs future returns
  • We have perfect knowledge
  • That people consider time impacts (i.e. people think too optimistically about their availability)
  • Risks
  • That it is an estimate and not treated as some super-duper precise time set in stone
  • Things will not change/evolve as we begin

Objectives and Techniques

With knowledge of the successes and failures we had encountered, and the start of an inventory of items we estimate (see Background), we began to explore what objectives we are pursuing when we take one approach or the other.

 

Objectives

For Estimating / When Not Estimating
  • Build trust
  • Build confidence/feel more comfortable about being able to deliver
  • Data for retrospectives
  • Plan and coordinate with other functions/groups
  • Transfer risk (and find a scapegoat)
  • See complexity over time (in points) that a team can sustainably deliver on in a sprint (ballpark)
  • Confidence for team and business
  • Plan accurately
  • Forecast to coordinate
  • Justify project
  • Make budgets
  • Make better prioritization decisions
  • Forecast to make commitment/investment decisions
  • Get funded
  • Calculate ROI
  • Satisfy Management
  • Improve release planning
  • Gain clarity & visibility
  • Produce more value
  • Focus on value (or even just questions to answer)
  • Eliminate waste
  • Reducing waste in the work system
  • Get started faster knowing where we are going
  • Deliver more value over time
  • Stop giving super specific (and false) estimates to minor detailed tasks to the business
  • Free our creativity (don’t box us in)
  • Depends on work if points even make any sense
  • Potentially allow for a different trade-off
  • Routine work e.g. monthly release support
  • Focus on retrospective action items for improvement

 

Then we turned our attention to what techniques we use when we either estimate or don’t.

Techniques

Used For Estimating / Used When Not Estimating
  • Only talk about estimation  during backlog refinement (t-shirt sizing)
  • Burndown chart
  • Understand the confidence interval
  • Affinity estimation
  • Discuss, ..1-2-3… Show count
  • Monte Carlo simulation
  • Planning Poker
  • Rank Ordering
  • Weighted Shortest Job First
  • Modeling
  • Use/Compare to Actuals
  • Estimation by Analogy
  • Specific criteria & acceptance
  • Affinity mapping
  • In depth grooming for debated story complexity
  • Sustainable commitments
  • Mob programming
  • Implicit estimation (ballpark)
  • Impact – Effort Matrix
  • Kanban
  • Pay only what it’s worth
  • Priority Pyramid for a backlog
  • Cycle-time analysis of work items in the retrospective
  • Team’s confidence & vote to commit
  • Flow efficiency and run rate
  • Update progress on tasking frequently
  • Measure story completion rates (flow)
  • “Just do it”
  • See investment, flow, et cetera for entire line of funding
  • Monte Carlo on live data
  • Use economic/finance metrics (run rate)
  • LeSS, no Lean
  • Establish a time or cost box to discover if value is there (think like a spike)
  • Revisit Definition of Ready stories (much as you do for Definition of Done)
  • Tabletop card sorting (can sort by relative effort without writing down size)
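The Monte Carlo techniques listed above can be illustrated with a short sketch. This is a minimal bootstrap forecast, assuming you have a record of weekly story throughput; the function name and sample data are hypothetical:

```python
import random

def monte_carlo_forecast(weekly_throughput, backlog_size, trials=10000):
    """Estimate how many weeks it takes to finish `backlog_size` stories
    by resampling historical weekly throughput (a simple bootstrap)."""
    results = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < backlog_size:
            done += random.choice(weekly_throughput)  # draw a random past week
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report the median and a more conservative 85th-percentile outcome.
    return results[trials // 2], results[int(trials * 0.85)]

# Hypothetical data: stories completed in each of the last 8 weeks.
history = [3, 5, 4, 2, 6, 4, 3, 5]
p50, p85 = monte_carlo_forecast(history, backlog_size=40)
print(f"50% confident by week {p50}, 85% confident by week {p85}")
```

Run against live cycle-time or throughput data, this kind of simulation answers "when might we finish?" without per-story estimates, which is why it appears in both columns above.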

 

Successes and Failures

We explored the successes behind our approaches, both when we estimated and when we decided not to. We posted where we saw these work (and not) and told stories about them, pulling out key characteristics for future thought.

Successes

With Estimates / Without Estimates
  • Incremental change to existing functionality
  • Clear visibility to team on what and how they do things
  • Teams are normally good at point estimates of complexity
  • Provided useful tool to the business to negotiate priority
  • Estimated branch’s annual budget with a low fidelity estimate
  • Discussion of size can lead to a better understanding of complexity
  • Predicted lead-time and cost based on actual data
  • Embedded team:
    • project plan
    • estimates
    • hitting dates regularly (team built in a good amount of slack)
  • Stronger focus on business priority
  • Faster turn-around of issues
  • Faster planning sessions
  • Kanban team:
    • established rate (stories/week)
    • decomposed release backlog into stories
    • determined release date by applying rate
    • hit date within reason
  • Delivered on time and under budget
  • Team raising more questions and uncovering issues
  • Saved time and effort not using estimates of time/effort in maintenance (using Kanban)
  • Keeps customer engaged with team; holds customer accountable
  • Teams establish a mental model faster
  • Shorter release planning sessions without any true loss in fidelity of the work
  • More time to code & test; fewer meetings (#nomeetings)
  • Release planning; impact identified earlier (before it came to the team)
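The Kanban team's success above (establish a rate, decompose the backlog, apply the rate to get a date) is simple arithmetic. Here is a sketch with hypothetical numbers; the buffer mirrors the slack the embedded team built in:

```python
from datetime import date, timedelta

def release_date(stories_remaining, stories_per_week, start, buffer_weeks=1):
    """Project a release date from an established delivery rate,
    padding with a small buffer of slack."""
    weeks = stories_remaining / stories_per_week + buffer_weeks
    return start + timedelta(weeks=weeks)

# Hypothetical: 30 stories left at an established rate of 5 stories/week.
print(release_date(30, 5, start=date(2016, 1, 4)))  # → 2016-02-22
```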

 

Likewise, we explored failures on both sides. We had fewer failures on the #noestimates side, which we jokingly said proved it was better; joking aside, this most likely stemmed from the fact that most of our audience had not gone that route.

Failures

With Estimates / Without Estimates
  • Longer than necessary debates about size
  • Estimating line items in an Excel sheet of requirements w/LOE in hours
  • Difficult to Forecast
  • Very bad at estimating time for tasks
  • Estimates happen but are not responded to, i.e. work only gets added, never removed
  • Estimates get a life of their own – used by business to obligate team, contracts, etc.
  • Disconnect between business value, effort, and time tracking
  • Large teams (16+ members) struggle with estimation
  • Not understanding the value of what is being delivered (and only looking at cost)
  • Underestimating LOE
  • Ambiguity/unclear scope delivery
  • Impact on customer relations when very wrong
  • Asked of team: “when can you have a prototype?” Answer: 2 weeks that turned into a 2 week deadline for a working system for a customer
  • People (mgmt.) wanting to make demands for a schedule rather than accepting a schedule produced by the team
  • Forecasting when a feature set can be finished
  • People don’t understand what’s going on so they jump to (usually worst case) conclusions
  • Low effect on teamwork and lack of transparency
  • Focus on learning can be lost
  • People ask: “what’s a story point?” “What if X is doing the work? Is the story points different?”
  • Acquisition mandates estimates
  • Got bogged down in a feature whose complexity wasn’t well understood
  • Unanticipated work distracting team, preventing effective use of velocity for predictions/planning

Background

By now, you have hopefully read through the site’s background on what Agile Dialogs itself is about.

If you haven’t looked over it, our Agenda is posted below the schedule on that page. The proceedings will follow that structure.

Our theme as you may have noted was:

“Agile Predictions: Exploring the tools for making sound business decisions with & without estimates”

Who Attended

We had a good mix of folks from various sectors and roles, having reached out to a variety of people on both sides of the debate. We didn’t really wind up with anyone at either end of the spectrum (i.e. no one proestimates who was unwilling to have their assumptions challenged, and no one who tried to avoid estimates at all costs). We did have people who had never skipped estimating, and some who had dropped at least certain kinds of estimates. Here’s a quick breakdown of some of the attendees:

  • We had private and public sector (government) representation, both as contractors and at least one former GS type.
  • We had folks who had worked on projects/product development activities of all different sizes.
  • We had people that served as coaches both internally and externally (i.e. hired coaching consultants).
  • We had Scrum Masters/agile team leaders.
  • Our private sector representation came from finance, insurance, embedded systems, broadcast media, and large-scale website development.
  • Our farthest attendee came from Saint Louis.

One interesting thing to note about participants: all those that had experimented with #noestimates had come from a background of using estimates.

Why We Structured It Like We Did

When you look at our agenda, you may ask why we didn’t take an Open Space approach. When Trent and I discussed whether that would work, we concluded that the self-organization aspect of OST would most likely get us there, but it might take longer than the single day we had set apart. Not knowing the attendees ahead of time, if we had people who had been arguing over Twitter or via their blogs, people who had never before suspended their assumptions about the other side, then it might take too long to reach fruitful discussion.

Additionally, while some polling of both sides indicated some wanted data to argue with, we knew that data could be used to prove either side. We felt that would take away from the story-telling we wanted to use at the start. Experiences are different from data. I could be totally on target with accurate estimates, yet the experience could yield something we called a failure if the business wanted IT to get started and the estimation process was frustrating them. The same could happen in reverse (estimates aren’t needed, but the experience produces what we would consider a failure).

So we settled on a set of structured questions; again, you can see these on the Agenda/Schedule page.

Resources

At the beginning we provided references (people, blogs, and books) on both sides that we felt were thoughtful in how they approached their side (and the other). We did not try to be exhaustive, so here is what was put on our lists:

Want to get better at estimation? Follow these people!

Other resources:

  • Agile for Humans podcast @AgileforHumans by @RyanRipley (covered both sides)
  • Agile Estimating and Planning by Mike Cohn

Want to learn more about #noestimates? Follow these people!

Other resources:

  • Agile for Humans podcast @AgileforHumans by @RyanRipley
  • Engineering Culture at Spotify (2 part video)
  • This Agile Life podcast @ThisAgileLife

What We Estimate

Throughout the day, we asked people to post what they estimate. Here’s what made the list –

  • SWAG of time from looking at a UX mock-up and understanding the tech stack
  • Throughput
  • Value to Customer
  • Impact to mission
  • Trips to Home Depot
  • Complexity of Story
  • Cost
  • Time to Complete Release
  • Level of Effort (LOE) to implement/complete a change request
  • Revenue