AI Practice Tutorial: Robots

    Tech · 2025-03-21

    Are recent advances finally getting us to the point where AI and humans will be indistinguishable?

    Recently, the Guardian, one of the UK’s most popular outlets, released an op-ed with a provocative title: “A robot wrote this entire article. Are you scared yet, human?”. Overall, the essay held together unexpectedly well, despite some simple language and repetition, giving it an eerie, self-referential quality: an AI telling us why we shouldn’t be afraid of AI.

    The essay wasn’t created by a robot per se, but by a new piece of software called GPT-3, a text generation AI engine created by San Francisco-based OpenAI. Not only has the new release raised eyebrows (MIT’s Technology Review called it “shockingly good”), but it has resurfaced a question that popular fiction has explored from Mary Shelley’s Frankenstein in the nineteenth century all the way up to modern sci-fi classics like Blade Runner and, more recently, HBO’s Westworld, where robots that are indistinguishable from humans escape from the sheltered theme-park world they were created for, causing havoc.

    Guessing whether something is artificially generated has a storied history, going back to British mathematician Alan Turing at the dawn of the modern computing era in the 1950s. He proposed a parlor game, which he called the “Imitation Game” (note that the biopic of the same name had less to do with AI and more to do with his code-breaking work during WWII). Today we refer to it as the “Turing test”. It involved sending messages to entities behind two curtains, one hiding a computer and one a human, and reading their responses. If you can’t tell which curtain hides the human and which the computer solely on the basis of the text messages, then we would say that the computer has passed the Turing test.

    Previous attempts by AI to write text usually succeeded only on a very small scale: after a few sentences it became pretty obvious that the “author” didn’t really understand what it was writing.

    Eliza was one of the first textual conversation tools that made waves at the MIT AI Lab

    One of the earliest examples that made waves was Eliza, a natural language processing “digital psychiatrist” built at the MIT AI Lab in the 1960s. Eliza would ask the user questions and then use some clever pattern matching on the user’s replies to make statements and ask more questions, not unlike a real therapist. For example, you might say “I am going to see my mother” and it might respond “How do you feel about your mother?”, which might lead to a whole conversation. Of course, after more than one short conversation, or one long one, you would quickly realize that the questions were all formulated in the same way, revealing the underlying pattern-matching algorithm one question at a time.
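
    Eliza’s trick can be illustrated in a few lines of code. The sketch below is a hypothetical reconstruction with three made-up rules; the real program used a much richer keyword-ranking “script”, but the mechanism of matching a phrase and echoing the captured words back as a question is the same:

```python
import re

# A minimal, illustrative sketch of Eliza-style pattern matching.
# These three rules are made up for this example; the real Eliza's
# DOCTOR script ranked many keywords and transformations.
RULES = [
    (re.compile(r"\bmy (mother|father)\b", re.I),
     "How do you feel about your {0}?"),
    (re.compile(r"\bi am (.+)", re.I),
     "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.I),
     "Why do you feel {0}?"),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's question, echoing captured words."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."  # fallback when nothing matches

print(respond("I am going to see my mother"))
# → How do you feel about your mother?
```

    Because every reply is generated the same way, a few exchanges are enough to expose the template, which is exactly how users eventually saw through Eliza.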

    More recently, GPT-3’s predecessor, GPT-2, was used by an engineer to build a text adventure game, à la the old Zork games. Content generated by AI would seem to enable endless adventures, but when you played it, the individual text snippets held up well yet never coalesced; it became obvious that there wasn’t much correlation between what you had read previously and what you were reading in the next “room” of the adventure. It was like a novel with different paragraphs written by different people. Still, it was significant progress over previous attempts.

    GPT-3, on the other hand, goes beyond simple pattern matching: it has been trained using modern machine learning techniques on billions of snippets of text. You give it a prompt of any length, and it generates a variable-length response, even up to a whole essay, selecting the bits of text that seem most relevant and combining them in the way that seems most “optimal”. The definition of “optimal” is of course the whole point of training the neural network, and this latest incarnation does a better job of placing snippets of text near each other that seem to “fit”. The longer the prompt, the better the “fit” is likely to be.
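
    To make the prompt-in, continuation-out loop concrete, here is a deliberately crude toy: a bigram model that extends a prompt by sampling each next word from counts over a tiny made-up corpus. GPT-3 is a large transformer with learned weights over billions of snippets, not a frequency table, so treat this only as a sketch of the interface, not of the model:

```python
import random
from collections import defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the robot wrote the essay and the human read the essay".split()

# Count which words follow which in the corpus.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(prompt: str, length: int = 5, seed: int = 0) -> str:
    """Extend the prompt word by word, sampling from observed successors."""
    random.seed(seed)  # fixed seed so the continuation is reproducible
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: no observed successor
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the robot"))  # always begins "the robot wrote the ..."
```

    Because “robot” and “wrote” each have a single observed successor, every run begins “the robot wrote the”; after that, the choice among the successors of “the” is where sampling (and, in a real model, learned weighting) comes in.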

    Nevertheless, the initial alarm that the Turing test had been passed may have been unfounded. This is only the latest in a series of AI advances that have caused speculation about whether the test has been passed. Upon closer examination of the Guardian’s methods, it was revealed that the editors generated eight different essays, selected the best parts of each, and then edited the result, just as a human op-ed would have been edited for style, continuity, and so on. This makes many AI watchers even more skeptical that a full, unedited essay generated by the engine would hold together.

    Technology news today is laden with AI-related announcements, and speculation about passing the Turing test now goes beyond simple text messages (which Turing, in his original version, envisioned being exchanged via teletype machines) to voice and even video.

    In 2018, there was similar sentiment about Google’s Duplex having passed the Turing test. Using its voice generation capabilities, Google was able to make automated phone calls to hair salons, for example, to make appointments for you. The vocal articulation was so good, including fillers like “umm” and “ahh”, that it was difficult for the salons to know that it was a robot and not a human on the other end. In the end, Google agreed to state at the beginning of each call that it was being made by a computer and not a human. Was this a passing of a “vocal” Turing test? There is some debate over this, since it didn’t involve a general-purpose conversation, just a very narrow conversation for a specific purpose (making an appointment). But it did go a long way toward showing that computers were well on the path to becoming indistinguishable from humans in multiple ways.

    Surely, most of us think, if we did away with the curtains then we could tell visually which was the human and which was the AI, correct? Well, maybe not, as virtual characters become more and more realistic.

    The virtual model Mia created by Christian Guernelli

    In 2018, a Chinese news organization released its two virtual news anchors, which looked, well, pretty much human while reading the news, complete with facial expressions, verbal flaws, and gestures. More recently, virtual influencers like Lil Miquela and others, generated using the same techniques used in video games, have millions of followers waiting for their videos and images on Instagram and YouTube. In fact, the LA Times recently wrote about the emergence of a new type of modeling agency, with no “organic” supermodels at all, and posits why the next Gigi, Kaia, and Kendall just might be digital.

    All of these virtual characters are created using the same CGI techniques used in video games and by Hollywood for special effects. While the latest virtual characters are becoming harder and harder to distinguish from real people, in all of these cases (newscasters, virtual influencers, digital supermodels, and CGI characters in films) they follow a pre-determined script. This means that even if the visual fidelity gets better, as we expect it to over the next few years, and even if we can’t distinguish between virtual and real people, they technically won’t be passing the Turing test, because that would require interactivity.

    We’re not there yet, but we are getting closer. Each year’s advances move us along the path of passing the Turing test: textually, vocally, visually, and perhaps eventually even physically. A physical Turing test would be passed when you couldn’t distinguish between an actual physical robot and a human, no curtains needed; you might say that the replicants of Blade Runner and the “hosts” in Westworld both pass this kind of “physical” Turing test.

    Since text was the “original” format of Turing’s famous imitation game, GPT-3 is a big step in this direction, as is the Guardian’s op-ed (even if some editing was required afterwards). More stories may soon come out that appear to have been written by a human but were, in actuality, written by a robot.

    Thank goodness, you must be thinking, that this article was obviously written by a human.

    But, then again, given what I have just been telling you, can you be completely sure?

    Rizwan Virk is a venture capitalist, founder of Play Labs @ MIT, and the author of “The Simulation Hypothesis: An MIT Computer Scientist Shows Why AI, Quantum Physics and Eastern Mystics Agree We Are in a Video Game.” Follow him via his website at www.zenentrepreneur.com or on Twitter @rizstanford.

    Translated from: https://medium.com/swlh/this-essay-was-written-by-a-human-not-a-robot-or-was-it-5309c1067590
