Filling up the Civic Tech Toolbox

Mark Headd
Nov 8, 2020
We need to put more tools in our civic tech toolbox

It’s sort of a cliche to say it these days, but digital transformation in government is largely not about technology.

It seems like this is an idea that has broad consensus, but much of the work that gets done in and around government technology modernization still views problems solely through the lens of technology. What if we approached this work differently? What if we viewed the problems of government technology through the lens of other disciplines?

We don’t need to pretend we’re always experts in these other disciplines — we’re not going to convince our partners that we’re doctors or economists, because we are not. But it can be useful to look at the problems facing governments in implementing and managing digital solutions using the analytic tools and frameworks of non-technology disciplines.

The following is not an exhaustive list, and I may not be saying anything all that revolutionary here. But it’s a summary of a few of the ideas I’ve been thinking about a lot lately. There are no doubt lots more, and I’d love to hear about them.

Symptoms vs. cause

It’s easy to spot bad technology — we often just point our browsers at a government website and our minds immediately begin to coalesce around solutions. We think we understand the problems and we move quickly — often too quickly — to find the answers.

It’s helpful sometimes to think of a government technology problem as a sick patient — one we must diagnose in order to prescribe a cure. The analytic frameworks of the world of medicine accommodate the idea that what we see with our eyes or touch with our hands may not be the actual problem. Physicians work to identify the underlying cause of a condition and treat it, rather than simply addressing the symptoms we can see.

And so it is often with government technology problems. Distinguishing between the symptoms and the cause is important. A broken website or a failed system rollout is often a symptom of larger, more fundamental problems. If we simply treat the symptoms, the underlying malady may linger. There are plenty of approaches in IT consulting that can help with root cause analysis, but I find it really helpful to think about the approach of differential diagnosis.

What are all of the possible things that could be causing the symptoms that we are seeing with our eyes? Which of them makes the most sense in explaining what we are seeing, and which do not? This approach not only focuses on underlying causes, it inherently implies that there may be more than one cause for a problem — something that is often true when we work on government technology projects that have run into challenges.
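To make the exercise a bit more concrete, here is a minimal sketch in Python. Every cause, symptom, and ranking below is hypothetical; the point is the shape of the exercise: enumerate every plausible cause, then ask how much of the observed picture each one actually explains.

```python
# A minimal, hypothetical sketch of a differential-diagnosis style exercise
# for a struggling government service. The causes and symptoms are invented;
# the structure is what matters: list every plausible cause, then keep the
# ones that best explain what you are actually observing.

observed_symptoms = {"slow page loads", "frequent form errors", "low completion rate"}

# Each candidate cause maps to the symptoms it would plausibly produce.
candidate_causes = {
    "under-provisioned legacy infrastructure": {"slow page loads"},
    "confusing form design": {"frequent form errors", "low completion rate"},
    "no automated testing before releases": {"frequent form errors"},
    "unsuitable platform locked in by procurement": {
        "slow page loads", "frequent form errors", "low completion rate"
    },
}

# Rank causes by how much of the observed picture each one explains.
for cause, explains in sorted(
    candidate_causes.items(),
    key=lambda item: len(item[1] & observed_symptoms),
    reverse=True,
):
    coverage = len(explains & observed_symptoms)
    print(f"{cause}: explains {coverage} of {len(observed_symptoms)} observed symptoms")
```

Notice that more than one cause can score well. That is exactly the multi-cause reality this way of working is designed to surface.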

Assuming rationality

So much of the work we do in government technology modernization involves developing what essentially boils down to advice — lists of best practices, playbooks, and guides to help agencies adopt technology more successfully. “Here is how agile works, here is why DevSecOps is so important, talk to users, scope projects smaller — now go do these things.” *

And while all of this guidance and content is valuable, this approach assumes that the sole problem facing development and service delivery teams is a lack of information. But what if we proceeded from the assumption that an information deficit was not the primary cause of technology dysfunction in a government agency — that delivery teams had access to information on how to properly run their projects but chose not to follow it? What would explain these decisions if a lack of information was not the issue?

It’s useful in these cases, I think, to look at a framework from the world of economics — rational choice theory. This theory holds that individual actors in a market will (usually) act rationally and in their own self interest based on available information. I’m not suggesting that this theory explains everything about how markets work (it doesn’t), but it is a useful construct in helping us understand why service delivery teams — and government agencies more broadly — sometimes make the choices that they make. If we approach a technology problem using the assumption that the individuals making decisions will act rationally, then it helps us to focus on the forces behind how these choices get made.

Consider the common civic tech precept to scope technology projects more modestly, and to iterate more frequently. There is plenty of information and accumulated experience to suggest that larger technology projects fail at a higher rate than smaller ones. If we assume that information deficit is not the root cause of decisions to make projects larger — i.e., project teams know that larger projects have increased risk of failure — what then explains the reasons that projects become large?

If we assume rationality, we might better understand the reasons why a service delivery team might choose a larger project over a smaller one. Consider the process to accredit technology systems for production deployment — this process usually involves a significant amount of paperwork and documentation on a litany of different security controls. It can sometimes cost a team more in terms of time and effort than the work to actually build a solution in the first place. Given the overhead involved in a process like this, it might make sense from a project team perspective to limit the need to run this gauntlet to as few times as possible — or even only once if that can be achieved. This might drive a team to enlarge a project scope, to reduce the number of iterations, and jam as much work into a release as possible so as to only pay the overhead of the accreditation process once.
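A crude back-of-the-envelope sketch makes the incentive visible. The numbers below are invented purely for illustration; what matters is that the accreditation overhead is paid once per release, so the total cost a team faces falls as the number of releases falls, even as delivery risk rises.

```python
# A back-of-the-envelope sketch (all numbers are made up for illustration)
# of why a delivery team might rationally favor one big release over many
# small ones when each release must clear a costly accreditation process.

build_effort_total = 12      # months of work to build the full solution
accreditation_overhead = 4   # months of paperwork per trip through accreditation

def total_cost(iterations: int) -> int:
    """Total effort: the build work plus accreditation overhead paid per release."""
    return build_effort_total + accreditation_overhead * iterations

for iterations in (1, 3, 6):
    print(f"{iterations} release(s): {total_cost(iterations)} months of effort")

# 1 release(s): 16 months of effort
# 3 release(s): 24 months of effort
# 6 release(s): 36 months of effort
# Smaller iterations lower delivery risk, but every added iteration pays the
# accreditation toll again, so the locally rational choice is fewer, bigger releases.
```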

Viewed in this way, we can see that the decision to scope a project more broadly is rational (from the perspective of the project team). It makes sense given the weight of the process that may be needed to get successive iterations of a solution released. It’s not that a project team isn’t aware of the risks of larger projects with fewer iterations, it’s that the environment in which they are building the solution favors larger projects. It “rewards” them with fewer trips through the accreditation labyrinth.

This is a bit of a contrived example, but it’s not a stretch to apply this same framework to things like the procurement process (which can also be lengthy and costly) or other processes that have phase-gates with large documentation requirements at each step. Faced with the overhead of complying with these processes, it is sometimes the rational choice to enlarge projects so as to pay this cost as few times as possible.

Understanding why delivery teams would make these decisions is key to understanding the root cause of technology dysfunction. The answer is often about more than just the technology being used.

We need to free our minds, to open up opportunities for thinking about these problems differently. For this to work, we need to look at how other disciplines analyze problems and understand their causes. There’s so much room here to add new tools to our standard toolbox for fixing government technology issues. As civic technologists, it’s up to us to find them and start using them.

Let’s fill up our toolbox.

* Oversimplified to make a point, but you get my drift.
