ais分级

    Tech 2022-07-12

    Medical science is making such remarkable progress that soon none of us will be well. — Aldous Huxley

    A good physician treats the disease, the great physician treats the patient who has the disease — Moses Maimonides (Rambam)

    Technology and computer science have, at last, conquered all the diseases of the world, enabled 100% accurate diagnosis, and established foolproof treatment protocols.

    If you believe that, you have a bit of reading to do because artificial intelligence (AI) and technology continue to make mistakes in both diagnosis and treatment, thanks to human and dataset imperfections.

    Errors are encoded with input from flawed databases, and I don’t mean to blame coders who use those databases. Newly incorporated data mistakes continue to occur, and that may indicate the learning curve is still in the process of weeding out coding errors.

    But bias isn’t a coding error if the coder used a library of algorithms for this new algorithm and never knew of the bias inherent in the library.

    Once the data is churned multiple times into ever-increasing new databases, how do you detect the “original sin” in the code? Should someone be held accountable for computer-generated life-and-death decisions that go awry?

    When the mistakes arise, and the treatment proves lethal or less-than-satisfactory, where does the buck stop? Is it the physician, the medical center, the software, the company that sells the software, the coder, or the dataset that was used?

    All of the above? Lawyers will probably say everyone involved is responsible, and, to a degree, they all are, aren’t they? Didn’t all of them spin the wheel and demonstrate a great degree of trust in what they were using, doing, working on at the moment?

    Placing our total faith in anyone or anything is a gamble, no matter how reassuring the treatment team may be. They don’t know, without a doubt, that something is without error (and everything has a degree of error). Only in an imagined world do we have perfection. Please refer back to your first class in philosophy for a refresher on that one.

    Image: Variety.com

    What Were We Expecting?

    When 2020 rolled around, the anticipated significant strides in technology and, in particular, healthcare and medicine were here at last. But not really.

    Were we naive or too eager for relief from disease and the death it brought? Did we want computer-guided medicine without mistakes, or were we too mesmerized by comic book heroes and Hollywood fantasy films?

    Was Pinocchio a fantasy film? How could it not be? But measured against recent movies, actual science fiction it wasn’t. I’ll leave the explanation of fairy tales to Bruno Bettelheim’s book.

    Forbes outlined some of what we wished for as the New Year rang in, and the famous crystal ball descended on Times Square.

    Healthcare with digital technology. New digital technologies, from telemedicine and on-call services to wearable technologies and new medical devices are already beginning to change the industry. The next decade will see these technologies reach their true potential.

    Reach their true potential? Have they fulfilled the promise we so desperately wanted, or are we left wanting? If they’d reached their full potential, would we continue to need coders?

    Wouldn’t all the algorithms, thanks to deep learning, correct themselves when errors were detected? And wouldn’t a version of HAL be running the show without the intervention of humans?

    Photo by Arif Riyanto on Unsplash

    What Do We Have Now?

    We’ve come a long way, baby (according to the cigarette commercial), but we’re not there yet. The warning signs are now being posted.

    However, health innovators need to be careful to design a system that enhances doctors’ capabilities, rather than replace them with technology and also to avoid reproducing human biases.

    How do you “avoid reproducing human biases” if you don’t know which of the databases you used contained the bias, which upstream databases that database drew on, or whether the coders ever knew of it? You are totally in the dark here. Bias, in terms of race, however, has been found in these databases.

    Bias is elusive, and it takes a lot of brainstorming, not coding, to wrench it out of the dataset. Datasets will have to be discarded because of the bias in them. But which datasets? And do clinicians or coders want to toss something that might be valuable?

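Spotting that kind of bias starts with looking at the data, not the model. A minimal sketch of such an audit in Python (the record fields, group labels, and outcome column are all hypothetical, for illustration only):

```python
from collections import defaultdict

def outcome_rates_by_group(records, group_key, outcome_key):
    """Compute the positive-outcome rate for each demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for r in records:
        counts[r[group_key]][0] += r[outcome_key]
        counts[r[group_key]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Hypothetical patient records. A large gap between groups is a signal
# to investigate the data source, not proof of bias by itself.
records = [
    {"group": "A", "referred": 1}, {"group": "A", "referred": 1},
    {"group": "A", "referred": 0}, {"group": "B", "referred": 0},
    {"group": "B", "referred": 0}, {"group": "B", "referred": 1},
]
rates = outcome_rates_by_group(records, "group", "referred")
```

A check like this only surfaces disparities; deciding whether a disparity reflects bias in the data collection still takes the human brainstorming described above.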
    Chances are this could be the beginning of something akin to the dog chasing its tail.

    The “promise” of databases in EHR has already created an additional problem, physician burnout. Inputting data was never the young physicians’ intention when they entered medicine, but it’s a fact of life now.

    How culpable is the burnout or the “EHR fatigue” in database errors? Who checks them after they are entered and how often are problems caught and corrected?

    I’ve heard medical office personnel detailing that they could not correct errors because the database is maintained by some other entity to which they don’t have access. If a patient had an entry of a cardiac problem, and this was inaccurate, it couldn’t be changed. This, then, goes into a database that will be used for another medical algorithm. Again, garbage in, garbage out.

    Where Have Some Mistakes Been Found?

    A jaw-dropping incident would be malware that can make a scan “detect” cancer when it’s not there.

    A new study from a team of Israeli researchers shows just how easy it has become to use deep learning as a way to alter medical images to add incredibly realistic cancerous tumors and fool even the best radiologists the majority of the time.

    When experienced medical radiologists tried to parse out the malware-constructed tumor images, they had extreme difficulty. The scans were exceedingly good, and the staff was shocked because they believed it would be impossible to fool them.

    The malware researchers produced a video to explain how it could be done. They also provided the original code, which I won’t note here.

    Image: raspberrypi.org

    How sophisticated was the computer used in some of these simulated incursions into medical databases? The “supercomputer” was a mind-blowing $50 Raspberry Pi you can fit into your jacket pocket.

    Google has developed software to detect breast cancer and claims extraordinary results for it in their study.

    “According to the study, using the AI technology resulted in fewer false positives, where test results suggest cancer is present when it isn’t, and false negatives, where an existing cancer goes undetected.” Could the Google program detect the doctored malware images? It presents an interesting challenge.

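The two error types the study measures are easy to pin down: a false positive flags a healthy patient, a false negative misses a real cancer. A small illustration with made-up counts (not figures from the Google study):

```python
def error_rates(tp, fp, fn, tn):
    """Compute screening error rates from confusion-matrix counts."""
    fpr = fp / (fp + tn)  # fraction of healthy patients wrongly flagged
    fnr = fn / (fn + tp)  # fraction of real cancers that go undetected
    return fpr, fnr

# Made-up counts for illustration: 100 true cancers, 900 healthy scans.
fpr, fnr = error_rates(tp=85, fp=50, fn=15, tn=850)
```

Lowering both rates at once is the hard part; most screening changes trade one against the other, which is why the study’s claim of fewer of each is notable.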
    How About Chatbots for Healthcare?

    In the UK, one chatbot (Babylon Health) has been up and running and found wanting. A woman over 60 was asked if her symptoms might be related to her pregnancy. Sure, some women of a certain age might become pregnant with medical intervention, but that’s not the usual diagnosis for specific symptoms.

    A physician, Dr. David Watkins, who began questioning the accuracy and utility of the chatbot, was quickly rebuffed when he posed his concerns. For his effort, he was called a “troll” by the company.

    Over the past couple of years, Dr Watkins has provided many examples of the chatbot giving dangerous advice. In one example, an obese 48-year-old heavy smoker patient who presented himself with chest pains was suggested to book a consultation “in the next few hours”. Anyone with any common sense would have told you to dial an emergency number straight away.

    Photo by Laurynas Mereckas on Unsplash

    The All-Important Question of Drug-Drug Interactions

    Chemistry isn’t everyone’s strong suit, and with ever-increasing chemical combinations leading to newer drug-drug interactions, AI can offer a modicum of safety. A program has been designed to address this issue.

    The high number of possible adverse drug-drug interactions, which can range from minor to severe, may inadvertently cause doctors and patients to ignore alerts, which the researchers call “alert fatigue.” In order to avoid alert fatigue, the researchers identified only interactions that would be considered high priority, such as life-threatening, disability, hospitalization and required intervention.

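The fix the researchers describe amounts to a severity filter over the raw alert stream. A minimal sketch, assuming a simple list-of-dicts representation (the severity labels mirror the categories quoted above; the drug pairs and data layout are hypothetical):

```python
# Only the interaction categories the researchers flagged as high priority.
HIGH_PRIORITY = {
    "life-threatening", "disability",
    "hospitalization", "required-intervention",
}

def filter_alerts(alerts):
    """Suppress low-priority interaction alerts to reduce alert fatigue."""
    return [a for a in alerts if a["severity"] in HIGH_PRIORITY]

# Hypothetical alerts for a patient on multiple drugs.
alerts = [
    {"pair": ("warfarin", "aspirin"), "severity": "life-threatening"},
    {"pair": ("ibuprofen", "antacid"), "severity": "minor"},
    {"pair": ("statin", "clarithromycin"), "severity": "hospitalization"},
]
urgent = filter_alerts(alerts)  # the "minor" alert is dropped
```

The design choice is the trade-off the article keeps circling: suppressing minor alerts keeps clinicians attentive, but it also means some low-severity interactions are never surfaced at all.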
    The researcher, Dr. Soundar Kumara of Penn State, said, “This study is of very high importance. Most patients are not on one single drug. They’re on multiple drugs. A study like this is of immense use to these people.”

    The challenges and rewards of AI in healthcare are apparent. The future awaits those who will identify the problems that still need to be recognized.

    Each advance forward, however, must be made with mindfulness of the pitfalls that can inveigle themselves into algorithms. Reputations will be made and lost, depending on the care with which these opportunities are managed.

    Translated from: https://medium.com/beingwell/ais-failed-and-tarnished-promises-in-healthcare-bfbcdd92e5a2
