Agile Functional Requirements Model

Your team is humming along, producing sprint after sprint of working code, continually expanding your software application with great features that your clients love. Then one day during a backlog refinement session someone asks, “What’s the maximum length for a user-defined report name?”

You all look at one another and shrug. Then someone says,

“I don’t remember, was it in the user story?”

“Maybe. When did we do it? What sprint?”

“Two sprints back? No, three. I remember it was hot out.”

“Can’t be, we haven’t touched user reports in months. You’re thinking of dashboards.”

“Oh, right.”

“Was the maximum even in the acceptance criteria?”

“No, it wasn’t. I remember we had to ask the product owner during the sprint.”

“So the story won’t have it, even if we do find it.”

“Somebody log in, create a report, and try really long names until you find the limit …”

You’ve just run up against one of the drawbacks to agile methodologies: valuing conversation over documentation means you sometimes don’t write down things that you’ll need to remember much later.

There’s no question that the focus on business value delivery, the elimination of big requirements up front, and decomposition into small demonstrable chunks are all advantages of agile requirements that can improve a team’s success. But when you’re building a big system over a long period of time, you need more than memory and a pile of completed user story cards as documentation.

A continually maintained, functionally organized requirements model bridges the gap between business-oriented user stories and technical design documentation (assuming you’re creating and maintaining those). It can be used by your requirements team for impact analysis, and it can be a crucial part of your trace matrix. With proper linking you can generate metrics around functional test coverage, development by persona, and other measures of your product.

Maintaining a requirements model iteratively does not add a great deal of work to each sprint. And that’s key – requirements are not added to the model until the work is committed to a sprint. During the sprint, the team (usually a business analyst, but on a truly cross-functional team any member should be able to do it) adds the requirements for each item in the sprint to the model. The user story becomes one or more business requirements; the acceptance criteria become functional requirements, non-functional requirements, and business rules. The team identifies and documents any additional requirements that weren’t part of the conditions of acceptance – like that maximum field size for report names.

When creating test cases, the team creates links between the tests and the requirements. Even if you work with an IDE (Integrated Development Environment) that automatically links test cases to user stories, you will derive value from linking the tests directly to the more granular requirements. The team will be able to tell immediately if their test plan is not covering part of the functionality. They will also see other requirements related to the functionality that have already been done, and the test cases linked to those. This supports test case maintenance and reuse.
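As a sketch of what that linking buys you, here is a minimal model of requirements with test links and a coverage-gap query. All of the IDs, fields, and the in-memory representation are invented for illustration, not taken from any particular requirements tool:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One granular requirement in the model (IDs are hypothetical)."""
    req_id: str
    text: str
    linked_tests: set = field(default_factory=set)  # linked test case IDs

def coverage_gaps(requirements):
    """Return the IDs of requirements that no test case is linked to."""
    return [r.req_id for r in requirements if not r.linked_tests]

reqs = [
    Requirement("FR-1042", "Report name is at most 80 characters", {"TC-301"}),
    Requirement("FR-1043", "Report name is unique per user"),  # not yet tested
]
print(coverage_gaps(reqs))  # ['FR-1043']
```

A real team would run the equivalent query inside their requirements tool, but the idea is the same: an unlinked requirement is an immediate signal that the test plan has a hole.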

Organizing your requirements functionally makes it easier to find requirements, and allows your requirements team to see new requirements in context. You will be far less likely to specify a new feature that contradicts one you built six months ago if, as you’re analyzing the new requirement, you have a look at the model and see the earlier requirements.

Breaking out requirements by type – functional, non-functional, business rule, business requirement, and perhaps others – allows you to trace from the most technical and granular up to the least. You can look at a top-level business requirement and see all of the functionality linked to it that a change might impact.

Isolating non-functional requirements is important because, at least in a Scrum process, many of them are not stated as part of the user story and conditions of acceptance. They are applied to all work by the team as part of standards and definition of done. Likewise, the non-functional requirements in the requirements model should be derived from standards (e.g., your user interface standard, your performance standards) and will change less frequently – unless you’re redoing your user interface, improving performance, or focused on some other area related to quality. What must happen each sprint is linking of new functional requirements to related non-functional requirements.

And finally, business rules should be isolated because they often can change without impacting functionality. Changing the allowed length of that user-defined report name should not mean changing all the code having to do with defining and naming the report – just the validation on its length.
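One way to keep a business rule that loosely coupled is to hold it in data rather than scattering it through the feature code. The rule names and the 80-character limit below are made up for the sake of the example:

```python
# Hypothetical business-rule table, kept separate from the feature code so a
# rule change (say, a new maximum length) touches only this one place.
BUSINESS_RULES = {
    "report_name.max_length": 80,  # assumed value, for illustration only
}

def validate_report_name(name):
    """Validate a user-defined report name against the business rules."""
    errors = []
    max_len = BUSINESS_RULES["report_name.max_length"]
    if len(name) > max_len:
        errors.append(f"Report name exceeds {max_len} characters")
    if not name.strip():
        errors.append("Report name must not be blank")
    return errors

print(validate_report_name("x" * 100))  # ['Report name exceeds 80 characters']
```

Change the entry in the rule table and the validation – and nothing else – changes with it.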

If you’ve scaled your agile process and have more than one team working on the same software, the shared requirements model is a critical tool to help them keep track of the state of the system as they go. Imagine if the team doing the grooming at the start of this blog entry hadn’t even been the one that implemented the user story that created the report name function.


Software is for People

When you empower your software development team to make decisions about the details – and in case you hadn’t noticed, that’s one of the core principles of agile practices – you must also supply them with an arsenal of tools that help them make the right decisions. Understanding who will use the software they’re building is one such tool.

But it’s not good enough to just say, “we’re adding this feature so that people can draw on their photos.”

One engineer might post a lot to sites like Reddit and Quora, so she thinks “redact license plates and faces. Got it!”

Another is a visual artist and thinks, “Select areas and add colors, blend, maybe some stroke filters…”

A third is all about Instagram and thinks, “funny filters and stickers! Let’s find some libraries to offer.”

Certainly by the time the item has gone through refinement you’ve set this team on course for the right kind of drawing. But why not save time and establish the intended use more clearly from the start by telling them what kind of user the feature is for?

A persona is an aggregated biography of a certain type of user of your software. Each persona should be based on real people in the role it represents. Personal details like age, gender, home life, marital status, hobbies, and geographical location should all be derived from market demographic data about the slice of the population that holds that job title or role. Even more importantly, the persona’s software usage details must match the role it represents.

Does all of that sound more like marketing than software requirements? It should. Personas are a standard marketing device. It should for another reason, too: you’ve most likely got to sell your software to that same market. If you aren’t building your software for your market, why are you building it?

So let’s write a user story for that photo drawing requirement:

“As Raul the frequent social media poster, I want to obscure parts of my photos so I can post them on public forums without revealing any personal information that I don’t have the right to share, and earn ‘likes’ or ‘karma.’”

You just eliminated a half hour of debate over the scope of the item. Redacting license plates it is!

Giving the persona a name and a biography puts real people in the minds of the team members. You know your team is maturing when you hear them referring to your most common personas by name.

Identifying the persona that each requirement is for helps you keep your product focused. If you can’t figure out which persona wants the requirement, perhaps you shouldn’t be doing it at all. If you have requirements for a legion of personas, you’re probably spreading yourself too thin. Most of your requirements should be for a single, primary persona. Try to satisfy more than one and you’ll end up satisfying none.

At Streamline we have a sizeable inventory of personas because we’re actively developing four software products (which have sub-products with different user types). The personas are a common resource – some of them use more than one of our products. A great example is Dorothea the user administrator. She’s a busy lady, handling user accounts for all of our products!

Dorothea is one of our “secondary” personas – others include technical support analysts, implementation engineers, and report writers. These are not the main roles that we build software for, but they do interact with our software in very specific use cases.

Roles we do not have personas for are the Product Owner, the Scrum Master, the chief architect, and other engineering management type roles. Why not? Well, are we selling our software to those people? No.

I once had an inexperienced business analyst argue that the scrum team’s work was going to be judged by the product owner, so the user stories should be written from the product owner’s perspective. Yikes! The product owner is just a conduit for other stakeholders’ requirements. This attitude smelled of the command-and-control culture that our organization was trying to overcome at the time (and largely has). Watch out for this kind of perturbation of the intent of agile practices — they can slip in when you aren’t looking!

We recently gathered data for requirements by primary and secondary persona for each of our products. One of them has been doing as much for secondary personas as primary personas, and one has done more for secondary personas! When we drilled deeper we found out why: we’re commercializing one, and fixing data infrastructure for the other. But if we couldn’t put our finger on these reasons, we’d need to take a good long look at those backlogs as they relate to our company goals. (You’ll note that I’ve used that redaction feature to obscure the product names.)

How would this metric look for your products?

Backlog Management

Agile’s Dirty Secret

Agile purists sometimes seem very proud of eliminating “requirements” from the software development process. The thing is, they haven’t. They’ve just rebranded requirements as the “product backlog.”

A team can’t successfully build anything without a plan, and that’s what the requ — uh, I mean backlog — is. At Streamline we use three tools to manage the backlog through its entire lifecycle:

  • Aha! — a product management tool for capturing ideas and product strategy and refining them into features.
  • Microsoft Team Foundation Server/Visual Studio — for the granular management of user stories. Because we’re a Microsoft shop, the MS development environment is the best tool for the team to manage their daily work, from user stories to tasks to code change sets.
  • IBM Rational DOORS — for the functional requirements model and more granular trace matrix. DOORS is a powerful, traditional requirements management tool that might not seem to fit in an agile world, but in fact serves as the backbone of knowledge about our products.

INVEST in Good Requirements

Scrum teams should not accept into their sprint package any backlog item that does not meet INVEST: Independent, Negotiable, Valuable, Estimable, Small, and Testable. Your process must give the team the opportunity to determine whether each candidate item meets INVEST, and to estimate the effort, before the sprint is packaged. At Streamline we hold a couple of backlog grooming sessions and a couple of estimation meetings during each sprint to work on the items for the next sprint. While this draws the team away from the current work, it is critical preparation for the next iteration.

Just-in-Time and Trailing Requirements

When you’re building a complex software application that will continue to evolve over several years, it is important to keep track of decisions about functionality that are typically made during each agile iteration. If you don’t, you’ll be struggling to remember the intended sequence of steps in some process, or the business rule governing choices in a context-sensitive menu months later when you need to make changes.

Before agile, you would have turned to the massive requirements specification. But that’s the thing those agilists are so proud of eliminating. Instead, you’ve got all those user stories and conditions of acceptance on hundreds, maybe thousands, of physical or virtual cards to look through. What sprint was the original story packaged in? Have we made any other changes since then? Did the COAs actually cover this specific business rule that’s not making any sense now, or did we come up with the rule during the sprint?

Even if you maintain good metadata on your digital backlog (like, what area of the application the user story is for), and you are able to search for key words, the completed product backlog is still more like the pieces of a jigsaw puzzle — and if you’re under the gun to package an enhancement you’re not in the mood for games.

At Streamline, we believe in just-in-time requirements and maintaining a living functional requirements model. We manage the product backlog in the traditional agile way — high level features decomposed into actionable user stories with conditions of acceptance. Teams groom and decompose and estimate, and stories are packaged. We only specify enough for the team to know what to do. But here’s where we diverge from the agile purists: during the sprint, the team also captures the complete requirements in a functionally organized requirements model. The conditions of acceptance are converted into traditional requirements (“The system shall…”), and additional requirements that the team identifies are also documented. Even where functionality is easy enough to understand by using the software that granular requirements aren’t necessary, business rules can be opaque and are especially important to document.

During each sprint, the team also links their test cases to these requirements so that we can collect metrics on functional test coverage. The model supports a trace matrix that allows us to track from a bug through a test case to a functional requirement, over to a related business rule, and up to the original user story.
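A trace matrix like that is, at bottom, a graph of links between artifacts. This sketch walks such a graph; the artifact IDs and the link table are invented for illustration, not taken from our actual tooling:

```python
# A minimal trace matrix as an adjacency map between artifact IDs.
# All IDs and links here are hypothetical.
TRACE_LINKS = {
    "BUG-77":  ["TC-301"],            # bug was caught by this test case
    "TC-301":  ["FR-1042"],           # test verifies this functional requirement
    "FR-1042": ["BR-12", "US-584"],   # requirement links to a business rule
                                      # and back to the original user story
}

def trace_from(artifact):
    """Walk the trace links and return every artifact reachable from the start."""
    seen, stack = set(), [artifact]
    while stack:
        current = stack.pop()
        for linked in TRACE_LINKS.get(current, []):
            if linked not in seen:
                seen.add(linked)
                stack.append(linked)
    return seen
```

Starting from `BUG-77`, the walk reaches the test case, the functional requirement, the business rule, and the original user story – the same path the trace matrix supports.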

This functional requirements model serves as an analysis tool for future requirements. We can see all of the related requirements to the one we’re considering changing, as well as the scope of the test cases, and whether there are any existing bugs. The investment the team makes in the requirements model pays off for the product owner and business analyst preparing features and stories for future sprints.

Calibrating Business Value

When you’ve got five software products managed by five product managers, how do you know that they’re all using the same scale when they assign business value?

You don’t. At least, not if you don’t pay some attention to calibrating across the organization.

The other day we had a lively discussion as we looked at completed user stories from our various backlogs and the business values that they’d been assigned. Participants included not only the folks in backlog management roles (directors, managers, and business analysts), but also scrum team members and folks from sales, IT, and client services. It was a fantastic opportunity to show these non-development team members that engineers do want to deliver value!

Nobody in the room knew all about all of the products. But once we’d presented the fundamentals of how business value is used in scrum and a way to evaluate a story, most everyone reached similar values on stories that they understood. We explained our existing scale (zero to three hundred points in ten-point increments), and then we suggested some criteria:

  • What percentage of existing clients want it?
  • Is it for the primary user persona?
  • Has it been validated through customer development?
  • Is it sizzle or steak?
    • Is it primarily to help sales close deals?
    • Is it a technical change that will retain existing clients?
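To show how criteria like these might be turned into a number on a zero-to-three-hundred scale, here is a toy rubric. The weights are entirely invented for illustration – our actual session relied on judgment, not a formula:

```python
def business_value(pct_clients, primary_persona, validated,
                   closes_deals, retains_clients):
    """Toy rubric: map the criteria onto 0-300 points in ten-point
    increments. All weights are hypothetical."""
    score = 150 * pct_clients              # client demand dominates
    score += 60 if primary_persona else 0  # primary persona bonus
    score += 40 if validated else 0        # validated via customer development
    score += 30 if closes_deals else 0     # "sizzle"
    score += 20 if retains_clients else 0  # "steak"
    return min(300, round(score / 10) * 10)

print(business_value(0.6, True, True, False, True))  # → 210
```

Even a crude formula like this makes the conversation concrete: everyone can see which criterion moved the number and argue about the weights instead of the arithmetic.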

Then we presented a series of examples and asked the group to assign value (see the image above). For some, consideration of the number of impacted users was a revelation. For others, any measure of business value not tied directly to revenue seemed pointless. And for a few, accepting that solving our own problems – like improving deployment or fixing the logo on the home page – has no business value was really tough to swallow. Nonetheless, everyone left with a clear understanding of what those numbers mean on our work items, and the product management team is going to do better at calibrating their value assignments with their colleagues’. That’s a win.

The Client Feedback Loop

If any organization has discovered the secret to producing error-free software, I’d like to hear about it. For most of us, if your teams took the time to make their working code absolutely flawless, by the time they were done the market window would have closed and the users would have bought a different app. You almost always have to release software that still has some flaws, so you have to include a way to manage prioritizing and fixing them in your lifecycle.

Software development does not happen in a vacuum. Agile practices effectively bring business stakeholders into the development process to better steer teams toward creating software that solves real world problems. But what about the other end of the process? All of the agile process diagrams show the feedback loop where input from users is fed back into the agile backlog. But how does an organization realize that loop?

The tools that manage support cases are not designed to manage development work and vice versa. While plenty of these tools offer integrations to the other kind, it’s not just about being able to create an item over there when your daily working tool is over here. Support staff and other client-facing roles — who may be the best proxies you have to actual hands-on users — need to know what information is critical in an item in the development backlog and also how the development backlog is managed.

When is priority evaluated and assigned? By whom? How can I check up on where my item is in the list? How can I influence the prioritization decision? Have I provided enough information for someone to analyze the problem?

Our product teams conduct change management meetings that include these stakeholders. Everyone sees what’s in line to be done next and can weigh in on which item goes first. But talking about priority is not the same as making the creation and tracking of an effective work item a part of daily workflow rather than a disruptive special task.

At Streamline Health we do not have a full integration between our development management system, Microsoft Team Foundation Server (TFS), and our support management tool, Salesforce. Client-facing folks have to find the TFS web portal to create a work item when they need help from development to solve a client issue.

The other day we had a lunch and learn session called “TFS for Non Developers.” You can believe that the support and other client facing folks were there, even though this one was a “bring your own lunch” session.

Although we’d presented this material before, regular refreshers, even for those who’ve been here for a while, are a great help in keeping the lifecycle process moving like a well-oiled machine. We showed the audience how to get to the TFS web portal and where to find the backlogs that they care about (structured by product). We reviewed the required fields and why they’re necessary. We showed them how to create and save their own queries, and how to create their own alerts. And probably most importantly, we gave them links to both external resources like the Scaled Agile Framework website, and internal guidance documentation.

An effective agile implementation must include cultural acceptance of principles that the framework depends on, like:

  • Transparency
  • Information Radiation
  • Accountability and Responsibility

Helping our client-facing team members understand that they can look at the backlogs at any time, and that they are responsible for communicating their needs to the development teams in a usable form, is our responsibility. Only then can everyone be held accountable for delivering and maintaining excellent software.