What is cognitive bias

    The term Artificial Intelligence has been in use since 1955. AI pioneer John McCarthy described AI as “making machines do things that would require intelligence if done by man.” Recently, however, the concept has seen a sudden resurgence because of how pervasive it is in our daily lives. From the shows we watch on Netflix, to the ads and search results we are served, to the information we see on social media, to how we make decisions, all of it is governed by AI algorithms created by people.

    This pervasiveness makes it important to acknowledge and unpack what problems we are solving through AI, who is creating the steps to solve those problems, whose interests are being represented, and why it is being done in the first place.

    To discuss just this, Tea Leaves hosted a panel discussion with eminent theorists and practitioners of AI technology, including Dr Jamika D. Burge, Jennifer Bove, Ruth Kikin-Gil and Dr Molly Wright Steenson. The event was part of the Nature X Design series, which has a really cool lineup for the coming months.

    This article highlights some of the key points discussed.

    Importance of data collection

    AI is not ‘intelligent’ but rather intentional. AI does what it is trained to do based on existing data. Analyzing and reflecting on the data sets used for training is crucial. Are we reaching everybody and reflecting a diversity of experiences through the data sets we use to train AI algorithms? Asking this question is important because data comes from the past and runs the risk of reinforcing existing biases. If we want AI to be responsible and ethical, the first step should be to intentionally source data that reflects multiple viewpoints and lived experiences.

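    To make the point about auditing training data a bit more concrete, here is a minimal sketch of a representation check. Nothing in it comes from the panel; the toy DataFrame, the column names such as "gender" and "age_group", and the `representation_report` helper are hypothetical, and a real audit would go well beyond counting group shares.

    ```python
    # A minimal, illustrative pre-training data audit (hypothetical columns and data).
    import pandas as pd

    def representation_report(df: pd.DataFrame, columns: list[str]) -> dict:
        """Return each group's share of the data for every demographic column."""
        report = {}
        for col in columns:
            # value_counts(normalize=True) gives each group's fraction of rows
            report[col] = df[col].value_counts(normalize=True).to_dict()
        return report

    if __name__ == "__main__":
        # Toy data standing in for a real training set
        df = pd.DataFrame({
            "gender": ["f", "m", "m", "m", "f", "m"],
            "age_group": ["18-25", "18-25", "26-40", "26-40", "41+", "18-25"],
        })
        for col, shares in representation_report(df, ["gender", "age_group"]).items():
            print(col, shares)  # flag groups that are missing or badly under-represented
    ```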

    Intersectionality in AI

    The algorithms that govern AI are created by people, for people. Biases exist unless they are systematically eliminated. One strategy proposed to eliminate these biases is to base the creation of AI on basic human values. While human values may be a good starting point, they can be limiting because of their subjective nature. What if the people designing and using these algorithms hold drastically different values? To represent the context and lived experience of a large number of users, it is therefore more pertinent to build a strong foundation in intersectionality. Intersectionality, which means considering the multiple dimensions of lived experience and identity, can truly move the needle towards more just, ethical and unbiased AI.

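    As an illustration of what an intersectional lens can mean in practice, the sketch below evaluates a model's accuracy per combination of identity attributes rather than per attribute in isolation. The column names, the toy data and the `subgroup_accuracy` helper are hypothetical and only illustrate the idea discussed above.

    ```python
    # A minimal, illustrative intersectional evaluation (hypothetical columns and data).
    import pandas as pd

    def subgroup_accuracy(df: pd.DataFrame, group_cols: list[str]) -> pd.Series:
        """Accuracy for every combination of the given identity columns."""
        correct = df["prediction"] == df["label"]
        return correct.groupby([df[c] for c in group_cols]).mean()

    if __name__ == "__main__":
        df = pd.DataFrame({
            "label":      [1, 0, 1, 1, 0, 1, 0, 1],
            "prediction": [1, 0, 0, 1, 0, 0, 1, 1],
            "gender":     ["f", "f", "f", "m", "m", "m", "f", "m"],
            "region":     ["urban", "rural", "rural", "urban", "urban", "rural", "urban", "rural"],
        })
        # A model can look fine on "gender" and "region" separately while
        # failing badly for one intersection, e.g. rural women.
        print(subgroup_accuracy(df, ["gender", "region"]))
    ```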

    Role of designers

    Designers have a pivotal role to play in the creation of AI, as the voice of users and as key players in democratizing AI. Their role includes raising questions about what data is being collected about users, how it is being used, and who benefits from it. Bringing this voice of the user into the development process can address some ethical concerns earlier on. As designers, we are also well positioned to raise awareness among end users about potential errors and the consequences of using AI, and to shed light on ways to mitigate them. Designers can advocate for transforming AI from a tool for extracting value from users into one that provides value to users.

    The great thing about AI is that it can be taught new patterns and models. It is time to re-educate our machines to be more ethical, more just, and an ally for people.

    Translated from: https://medium.com/design-forward/bias-in-bias-out-942dd08ff72f
