Towards Ethical Algorithms

Old tools & new challenges for governments

Mark Headd
9 min read · Jan 16, 2019

There is a common misconception that data-driven decision making and the use of complex algorithms are a relatively recent phenomenon in the public sector. In fact, making use of (relatively) large data sets and complex algorithms has been fairly common in government for at least the past few decades.

As we begin constructing ethical frameworks for how data and algorithms are used, it is important that we understand how governments have traditionally employed these tools. By doing so, we can more fully understand the challenges governments face when using larger data sets and more sophisticated algorithms and design ethical and governance frameworks accordingly.

In early June 2018, I was on the campus of Syracuse University working on final arrangements for a data conference I was organizing on campus. While there I had a chance to connect with an old classmate, now a professor at the Maxwell School of Citizenship and Public Affairs. We got to talking over coffee about how the school’s traditional programs of study lined up with a lot of the things that were to be discussed at the upcoming conference — open data, big data, predictive analytics and civic hacking.

Hendricks Chapel, on the campus of Syracuse University

There seemed to be some interest in adding new classes to accommodate some of these topics as part of the MPA program, but as we talked he observed that a lot of the things being discussed as part of the civic tech movement seemed very similar to things that had traditionally been taught as part of the typical MPA program. He drew parallels between the use of big data and predictive analytics and disciplines like econometrics, which have long been a part of the public administration toolkit and have been practiced regularly in government for many decades.

This discussion resonated with me, but I was too distracted with details of the upcoming conference — then just a few days away — to give it deeper thought then. I’ve been meaning to get back to this idea for a while, and with new frameworks and toolkits for ethical use of data and algorithms being developed, it seems like a good time to unpack this idea a bit more.

Here are the questions that stuck with me from this discussion.

Is the way that governments make use of data and algorithms today fundamentally different from how these tools have traditionally been used? If so, how? And what lessons can we draw from the way governments have traditionally used these tools to guide their use into the future?

Back in the Day

A couple of years removed from graduate school, I took a job with the State of Delaware’s Department of Finance and got my first introduction to data-driven decision making. The Division of Revenue, where I worked, was responsible for conducting fiscal analyses of proposed tax law changes, and supporting the work of the group that conducted revenue forecasting, among other things.

One of the primary tools we used to do our work was a custom software application (developed in house) that calculated the impact of tax law changes. A sprawling, spaghetti code program authored in FoxPro, the application was a way of calculating tax liability for a set of taxpayers using information from actual tax returns. It allowed us to see how proposed changes to tax rates, deductions, credits, etc. would play out using actual taxpayer data. And while such tools were not uncommon in federal or state tax agencies at the time, Delaware’s small population allowed us to avoid the sampling techniques typically used by other agencies and use real tax information from every single state taxpayer in our analyses. It was really quite something — we were stretching the computing resources we had access to at the time (meager as they were) pretty close to their limit.
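
For flavor, here is a minimal sketch of the core idea in Python (standing in for the original FoxPro application, with invented field names, rates, and deduction amounts): compute every taxpayer’s liability under both current and proposed rules, then compare the totals and the distribution of the change.

```python
# Minimal tax-impact microsimulation sketch. The field names and tax
# parameters are invented for illustration; a real model would use the
# actual rate schedule, deductions, and credits.

def liability(income, rate, standard_deduction):
    """Tax owed on income under a simple rate-and-deduction rule."""
    taxable = max(income - standard_deduction, 0)
    return taxable * rate

# Stand-in for a file of actual taxpayer returns.
returns = [
    {"taxpayer_id": 1, "income": 30_000},
    {"taxpayer_id": 2, "income": 75_000},
    {"taxpayer_id": 3, "income": 150_000},
]

CURRENT = {"rate": 0.05, "standard_deduction": 3_250}
PROPOSED = {"rate": 0.055, "standard_deduction": 5_000}

total_change = 0.0
for r in returns:
    before = liability(r["income"], **CURRENT)
    after = liability(r["income"], **PROPOSED)
    total_change += after - before
    # The per-taxpayer delta shows the distribution of the change.
    print(f"taxpayer {r['taxpayer_id']}: {after - before:+,.2f}")

print(f"estimated revenue change: {total_change:+,.2f}")
```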

While I was there, I wrote my first mainframe computer program. It was written in the Natural programming language and queried tax data in an ADABAS system. I wrote it to calculate the withholding amounts reported on individual taxpayer W-2 forms and compare them to the withholding amounts reported by employers when those amounts were remitted to the Division of Revenue. The director of the Division at the time suspected that this might generate some good audit opportunities, and so I was assigned the task of writing the program and delivering the results to the audit unit.

My program added up all of the withholding amounts reported on individual W-2 forms for a particular employer, and then compared the sum to the amounts that the employer had paid the state for that year. If the two numbers were different, it could indicate an underpayment of withholding tax by the employer. It was clunky, and I probably violated lots of Natural programming norms at the time, but it worked.
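
The logic looked something like the following sketch, with Python standing in for Natural and a hypothetical record layout standing in for the ADABAS tax data.

```python
# Sketch of the withholding reconciliation logic. The record layouts
# and amounts are hypothetical.
from collections import defaultdict

# Withholding amounts reported on individual W-2 forms.
w2_records = [
    {"employer_id": "A", "withheld": 1200.00},
    {"employer_id": "A", "withheld": 800.00},
    {"employer_id": "B", "withheld": 2500.00},
]

# Withholding amounts actually remitted by each employer for the year.
remittances = {"A": 2000.00, "B": 2100.00}

# Sum W-2 withholding per employer.
reported = defaultdict(float)
for rec in w2_records:
    reported[rec["employer_id"]] += rec["withheld"]

# A shortfall between what employees reported and what the employer
# remitted is a potential underpayment worth auditing.
for employer, total in reported.items():
    gap = total - remittances.get(employer, 0.0)
    if gap > 0:
        print(f"employer {employer}: possible underpayment of {gap:,.2f}")
```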

Delaware State Senate Chamber, in Dover, Delaware

My memory of these early tools and the work we did in a state tax agency in the late 1990s informs how I look at the current debate about algorithm use in government. When we ran analyses of proposed tax law changes, our application would tell us how much less (or more) revenue the state would collect as a result of the change. It could also show the distribution of this change, highlighting which taxpayers would pay more or less depending on what was changing. It did not indicate whether the change was good or bad, positive or negative, just what the change was. Whether the estimated change was desirable was a judgement that others would make (the Governor’s Office and General Assembly), informed by the analysis we provided.

Similarly, my clunky mainframe program provided information on employer withholding payments that was ultimately used by the audit department. The program did some basic sorting, but ultimately the determination of which potential underpayment to pursue first (or most vigorously) was a judgement that others would make. Was it more important to examine the largest potential underpayment, or the most recent? Would employers with lots of employees be examined first, or those that had underpayments in multiple years? These were questions that other state employees in the audit unit would answer using the data that my program generated.
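
Each of those choices is simply a different sort key over the same flagged records; deciding which ordering to act on was a human call. A hypothetical illustration, continuing the invented records from the sketch above:

```python
# Hypothetical flagged records, like those the reconciliation sketch
# above might produce over several tax years.
flags = [
    {"employer_id": "B", "gap": 400.00, "year": 1998, "years_flagged": 1},
    {"employer_id": "C", "gap": 150.00, "year": 1999, "years_flagged": 3},
]

# Largest potential underpayment first, most recent year first, or
# repeat offenders first: the program can produce any ordering, but
# choosing which one matters is a judgement made by people.
by_size    = sorted(flags, key=lambda f: f["gap"], reverse=True)
by_recency = sorted(flags, key=lambda f: f["year"], reverse=True)
by_repeats = sorted(flags, key=lambda f: f["years_flagged"], reverse=True)

print([f["employer_id"] for f in by_size])     # ['B', 'C']
print([f["employer_id"] for f in by_repeats])  # ['C', 'B']
```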

We didn’t conduct a lot of regression analyses of our own, but, like other state tax and revenue agencies, we were frequent consumers of econometric reports. These reports helped us to understand broader trends that could potentially impact state revenue collections, and also helped inform larger policy questions. How did the number of years of education impact wages? How would an increase in the minimum wage impact employment levels?

Econometric analyses help establish the relationship between different variables (their direction, whether they are causal) and can be used to quantify relationships: an x% increase in the minimum wage will result in a y% change in employment. They are, by their nature, predictive. But these analyses don’t indicate whether such changes are good or bad, desirable or undesirable; they are inputs into larger policy discussions.
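
A bare-bones version of that kind of analysis might look like the sketch below (in Python, with made-up numbers): an ordinary least squares fit whose slope quantifies the relationship, but says nothing about whether the tradeoff is good policy.

```python
# Bare-bones econometric sketch: fit employment levels against the
# minimum wage with ordinary least squares. The data are made up, and
# a real analysis would control for many confounding variables.
import numpy as np

min_wage = np.array([4.25, 4.75, 5.15, 5.85, 6.55])        # dollars/hour
employment = np.array([102.0, 101.4, 101.1, 100.3, 99.8])  # thousands of jobs

slope, intercept = np.polyfit(min_wage, employment, 1)

# The slope quantifies the relationship (change in employment per
# dollar of minimum wage); whether that tradeoff is desirable is a
# question for the policy debate, not the regression.
print(f"estimated effect: {slope:.2f} thousand jobs per $1 increase")
```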

So while the use of algorithms, sophisticated computer programs and other data-driven technology tools in government is not a new phenomenon, their use today is different in some important ways. Understanding the nature of this difference is crucial to understanding why we need ethical frameworks to help guide their use.

What’s Different Now?

Changes in computing power and data availability have obviously changed the way we use algorithms and software applications in government today. In the 1990s, many state tax agencies used only a sample of taxpayer data to conduct revenue analyses because of limited access to computers powerful enough (or with large enough hard drives) to analyze every single taxpayer return.

The availability of tools to build algorithms and conduct complex analyses has also mushroomed. Governments have access to more tools, at lower cost, than they ever have before to develop new ways of using data. These tools are also increasingly easy to use, and no longer require expertise in application development or mainframe programming to generate outputs.

The availability of these new tools and the abundance of data also means that the use of algorithms and sophisticated software has come out of the policy shop, and is increasingly being used to inform day-to-day government operations. Where previous econometric analyses informed policy debates, and resulted in broad changes affecting large classes of people — often over long periods of time — new algorithmic tools have a much more immediate and direct impact on specific individuals.

In the past, we may have used data to design or fund more effective suicide prevention programs for groups of returning veterans. Today, we use data to identify individual veterans exhibiting tendencies deemed consistent with suicidal intentions, and immediately dispatch intervention resources to their door.

In the past, we may have used data to inform more effective sentencing guidelines to be adopted by state legislatures and eventually impacting how judges passed down sentences. Today, we use data to inform sentencing or pre-trial bail decisions for specific individuals on the spot in the courtroom.

Data and algorithms have traditionally been used to inform how policy is created and adopted. Now they are increasingly being used to determine how policy is executed. This immediacy has tremendous appeal, particularly for its potential to improve the efficiency with which government operates. But it also raises a number of ethical concerns.

How can we determine when a potential ethical issue exists in how we use data or algorithms in government? What questions can we ask ourselves before we use these tools to better ensure they are used responsibly and that there is accountability for outcomes?

Ethical Considerations for Algorithm Use in Government

Governments making use of data and algorithms face a number of risks in ensuring that these tools are used effectively and responsibly.

Data quality is one important risk to consider: is the data accurate and complete? Does the data contain bias or errors that could skew results one way or another? Algorithmic accuracy is another important risk to consider — does the model that is being used produce accurate results?
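
Even simple, automated checks can surface these kinds of problems before data reaches a model. A trivial, hypothetical sketch (the field names, records, and groups are invented):

```python
# Hypothetical pre-model data checks: completeness and representation.
from collections import Counter

records = [
    {"income": 30_000, "county": "Kent"},
    {"income": None,   "county": "Kent"},    # incomplete record
    {"income": 55_000, "county": "Sussex"},
]

# Completeness: how many records are missing a required field?
missing = sum(1 for r in records if r["income"] is None)
print(f"{missing} of {len(records)} records missing income")

# Representation: skewed group counts in the input data can translate
# directly into skewed results out of the model.
print(Counter(r["county"] for r in records))
```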

But it is important to note that these are not new risks. Governments have faced them for as long as they have been using data and algorithms. Contemporary algorithm use that is closely tied to policy implementation and execution does make these issues more acute, as the potential impact of bad data or inaccurate algorithms is more immediate. But fundamentally, these issues are the same ones governments have faced for decades.

But there is a new kind of risk, raised by contemporary algorithm use, that is critical for governments to consider and address as they make use of these new tools. In the past, data and algorithms helped government policy makers understand the relationship between variables: education and wages, tax rates and the distribution of the tax burden. The outputs of these analyses helped inform decisions by policy makers or other actors in government.

Contemporary use of these tools is different in that algorithms now increasingly embody the capacity to make an inference or a judgement about something. In the past, we might have used data to try to identify the relationship between households with specific characteristics and the presence of at-risk children, in order to inform policy changes. Today, we use algorithms to identify which specific households government officials think have at-risk children in them, based on the data fed into a model. The model can infer that the risk to a child in a household is sufficient to warrant a government agency taking immediate action and intervening.
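
The shift is easy to see in code. In the hypothetical sketch below (the feature names, weights, and threshold are all invented), the algorithm no longer just reports a relationship; it crosses a threshold and recommends an intervention against a specific household.

```python
# Entirely hypothetical sketch of an algorithm that embodies a
# judgement: the feature names, weights, and threshold are invented.

WEIGHTS = {"prior_reports": 0.5, "missed_checkups": 0.3}
INTERVENTION_THRESHOLD = 0.7

def risk_score(household):
    """Weighted sum of risk features, a stand-in for a real model."""
    return sum(WEIGHTS[f] * household[f] for f in WEIGHTS)

household = {"id": 4421, "prior_reports": 1, "missed_checkups": 1}

# This is the new step: the score itself triggers an action against a
# specific household, rather than informing a broader policy debate.
if risk_score(household) >= INTERVENTION_THRESHOLD:
    print(f"dispatch caseworker to household {household['id']}")
```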

When we imbue an algorithm with the ability to make a judgement, a new class of ethical questions arises.

Who do we hold accountable for decisions that get made or actions that are taken by government when some part of the decision making has been ceded to an algorithm? And because the consequences of algorithm use in government are now much more immediate for specific individuals, how do we ensure that these tools are applied fairly and without prejudice or bias?

These are important discussions for current government employees, and for those studying to become government administrators, to have. It’s heartening to see more and more schools of public administration take up these issues and give students a grounding in contemporary data and algorithm use.

But as we arm government officials with new tools and training that could potentially revolutionize how we govern and provide public services, we should not lose sight of the ethical issues these new tools create.

We have more work to do.
