A Measure of Inclusion

By Erik Schwartz

As digital services evolve, marginalized residents stand to lose the most. The UK’s Government Digital Service helps ensure that doesn’t happen.

At the 2013 Code for America Summit, Mike Bracken, director of the UK’s national office of Government Digital Service (GDS), provided a plain-language explanation of how his organization would approach user-centered design: “governments must organize around the needs of the user rather than the needs of the government itself.” He later clarified “our target user is everyone.”

GDS is redesigning a diverse set of high-volume public services to work for a broader swath of the UK. Practically speaking, this means that GDS must not only identify marginalized citizens but also measure for inclusivity alongside traditional metrics such as cost per transaction.

How do they plan to do this? In this article, we’ll take a closer look at the tools GDS has put into place to effect the change it seeks, including its digital standard and inclusivity metrics. Before we jump in, though, it’s useful to explain why inclusivity matters in the first place.

Why inclusivity matters

As Emily Shaw notes in “Civic Wants, Civic Needs, Civic Tech,” each new piece of civic technology can either reinforce or redistribute existing lines of power. Efficiency, in this case, is not an unqualified good, as it can amplify policies both good and bad.

It’s therefore crucial that civic technologists define who we seek to serve as well as the policies we seek to amplify. GDS does this by laying out a qualitative set of 10 design principles that guide their efforts1. Their sixth principle, in particular, identifies those most in need:

Design for Inclusiveness

We’re designing for the whole country—not just the ones who are used to using the web. In fact, the people who most need our services are often the people who find them hardest to use. If we think about those people at the beginning we should make a better site for everyone.

The GDS toolkit

In order to practice what it preaches, GDS has produced both an inclusivity roadmap that explains how the government will help get people online and an inclusion scale that rates the level of skill necessary to use a digital service unassisted. The organization has also hired a team of support stewards for those in need of assistance.

What’s more, all GDS services must adhere to a 26-point service standard before they can go live. Some of their standards regarding inclusivity stipulate:

  • Point 1: Understand user needs. Research to develop a deep knowledge of who the service users are and what that means for digital and assisted digital service design.
  • Point 10: Put appropriate assisted digital support in place that’s aimed towards those who genuinely need it.
  • Point 21: Establish a benchmark for user satisfaction across the digital and assisted digital service. Report performance data on the Performance Platform.
  • Point 23: Make a plan (with supporting evidence) to achieve a high digital take-up and assisted digital support for users who really need it.

With regard to measurement, inclusivity is embedded into the agency’s key performance indicators (KPIs). They are:

  • Cost per transaction - Quarterly calculations of service cost divided by the number of completed transactions.
  • User satisfaction - Scores based on surveys taken after transaction completion.
  • Completion rate - The proportion of users who complete a transaction once it has been started.
  • Digital take-up - Monthly calculations of the number of transactions completed digitally, divided by the total number of completed transactions. Note that assisted digital transactions do not count toward a service’s digital take-up.
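To make the arithmetic behind these KPIs concrete, here is a minimal sketch (not GDS code; the function names and the sample figures are hypothetical) showing how three of the four metrics could be calculated from a service’s transaction counts:

```python
def cost_per_transaction(total_cost, completed_transactions):
    """Quarterly service cost divided by completed transactions."""
    return total_cost / completed_transactions

def completion_rate(completed, started):
    """Share of started transactions that reach completion."""
    return completed / started

def digital_take_up(digital_completed, total_completed):
    """Digital completions over all completions. Assisted digital
    transactions are excluded from the numerator, per the KPI's
    definition above."""
    return digital_completed / total_completed

# Hypothetical figures for one service over a reporting period:
print(cost_per_transaction(1_200_000, 400_000))  # 3.0 (cost units per transaction)
print(completion_rate(380_000, 400_000))         # 0.95
print(digital_take_up(300_000, 400_000))         # 0.75
```

Because assisted digital transactions sit in the denominator of digital take-up but not its numerator, every user who migrates from assisted to mainline digital service raises the metric, which is exactly the incentive the next paragraph describes.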

KPIs such as digital take-up incentivize teams to both simplify their service and encourage digital literacy. Over time, GDS hopes that fewer users will require assisted digital services (i.e., more people will transition to the mainline service). KPIs such as completion rate and user satisfaction encourage agencies to meet user needs as quickly as possible with the minimum amount of frustration.

GDS measures its services in a way that promotes cost efficiency and inclusiveness. But how do they determine which programs to tackle in the first place?

Selecting projects

As a new organization, GDS took inventory of the 650+ government programs and selected a high-value subset to transform in their first couple of years2. In their words, “[reforming all of them] is something we can’t do in one go, so we helped eight of the biggest transactional departments to select 25 ‘exemplar’ services.”

To understand the formative days of GDS I spoke with Frances Berriman, a front-end developer who joined when the team was at a mere headcount of 20 (they’re now 600). She reiterated that transactional volume was a big factor, especially in light of a high-level mandate to demonstrably reduce cost per transaction for each service. Berriman explained that GDS did not choose its projects; rather, they guided each service department through the process of selecting its own transformation.

Berriman was also sure to note the qualitative, human elements of GDS’s work. The Carer’s Allowance benefit, for example, is a “regular payment for carers to help look after someone with substantial caring needs…” It could potentially help 2 million carers, but the usage was actually much lower—in large part due to bureaucratic application hurdles. GDS made the quantitative argument for transforming the Carer’s Allowance benefit, but also appealed to its human elements. “These people are already really stretched and the government could do more to help them out.”

Berriman argues for both types of measurement in civic design:

Civil servants are ultimately there to provide service for people, so having the human side is totally legit… most of these things would have been better if a human side was there to begin with. Fewer bureaucratic nightmares.

It’s easy to think that [the quantitative and qualitative] are mutually exclusive worldviews but they can totally work together. You need the objective, lean side to make it measurable so that you have accountability. But to actually provide the services and make them human you have to have that other side, the soft touch. Otherwise you end up with a robotic site that doesn’t care that you’re a person at the end of the day. Or you end up with the “it seems nice but we don’t know if it works.” You have to be able to hold both.

Broader applications

GDS’s approach to inclusiveness offers several takeaways: it’s crucial to know who the design is meant to reach; audience is everything; and, when focusing on efficiency, project selection is policy amplification3.

Civic designers should work to select projects that include and support the power-distant; create assistance mechanisms that ensure no one is left behind; and measure their team’s ability to gradually simplify the service and broaden digital access so that assistance is needed less over time. These ideas are an excellent starting point for ensuring that we intentionally bring inclusivity to our work.

Footnotes

  1. These principles have inspired similar proclamations in the U.S. such as the U.S. Digital Service Playbook.
  2. Much to their credit, GDS has documented nearly every aspect of their work. Unfortunately they didn’t document how they went about narrowing the initial 650 down to 25.
  3. It’s therefore incumbent upon civic designers to identify and empathize with whomever will be affected by the technologies we create.

Erik Schwartz is a digital services expert with the city of Lexington, Kentucky, and a former Code for America fellow for the very same city. It went well! His goal is to be a skillful inventor and consummate teammate in the service of socially impactful work. erik@erikschwartz.net / @eeeschwartz