Algorithmic Neutrality

Bias infects the algorithms that wield increasing control over our lives.
Predictive policing systems overestimate crime in communities of color; hiring
algorithms dock qualified female candidates; and facial recognition software
struggles to recognize dark-skinned faces. Algorithmic bias has received
significant attention. Algorithmic neutrality, in contrast, has been largely
neglected. Algorithmic neutrality is my topic. I take up three questions. What
is algorithmic neutrality? Is algorithmic neutrality possible? When we have an
eye to algorithmic neutrality, what can we learn about algorithmic bias? To
answer these questions in concrete terms, I work with a case study: search
engines. Drawing on work about neutrality in science, I say that a search
engine is neutral only if certain values, like political ideologies or the
financial interests of the search engine operator, play no role in how the
search engine ranks pages. Search neutrality, I argue, is impossible. Its
impossibility seems to threaten the significance of search bias: if no search
engine is neutral, then every search engine is biased. To defuse this threat, I
distinguish two forms of bias, failing-on-its-own-terms bias and other-values
bias. This distinction allows us to make sense of search bias, and capture its
normative complexion, despite the impossibility of neutrality.
Paper link: http://arxiv.org/pdf/2303.05103v1

