Is Artificial Intelligence (AI) Dangerous?
On Monday, May 22, 2023, a verified Twitter account called “Bloomberg Feed” shared a tweet claiming there had been an explosion at the Pentagon, accompanied by an image. If you’re wondering what this has to do with artificial intelligence (AI), the image was an AI-generated one, with the tweet quickly going viral and sparking a brief stock market dip. Things could have been much worse — a stark reminder of the dangers of artificial intelligence.

Artificial Intelligence Dangers
It’s not just fake news we need to worry about. There are many immediate or potential risks associated with AI, from those concerning privacy and security to bias and copyright issues. We’ll dive into some of these artificial intelligence dangers, see what is being done to mitigate them now and in the future, and ask whether the risks of AI outweigh the benefits.
Fake News
Back when deepfakes first landed, concerns arose that they could be used with ill intent. The same could be said for the new wave of AI image generators, like DALL-E 2, Midjourney, or DreamStudio. On March 28, 2023, fake AI-generated images of Pope Francis wearing a white Balenciaga puffer jacket and enjoying adventures such as skateboarding and playing poker went viral. Unless you studied the images closely, it was hard to distinguish them from the real thing.

While the example with the pope was undoubtedly a bit of fun, the image (and accompanying tweet) about the Pentagon was anything but. Fake images generated by AI have the power to damage reputations, end marriages or careers, create political unrest, and even start wars if wielded by the wrong people — in short, these AI-generated images have the potential to be hugely dangerous if misused.
With AI image generators now freely available for anybody to use, and Photoshop adding an AI image generator to its popular software, the opportunity to manipulate images and create fake news is greater than ever.
Privacy, Security, and Hacking
Privacy and security are also huge concerns when it comes to the risks of AI, with a number of countries already banning OpenAI’s ChatGPT. Italy has banned the model due to privacy concerns, believing it does not comply with the European General Data Protection Regulation (GDPR), while the governments of China, North Korea, and Russia have banned it over fears it would spread misinformation.
So why are we so concerned about privacy when it comes to AI? AI apps and systems gather large amounts of data in order to learn and make predictions. But how is this data stored and processed? There’s a real risk of data breaches, hacking, and information falling into the wrong hands.
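There is no single fix here, but one common mitigation is to strip or pseudonymize direct identifiers before user data ever reaches storage or a training pipeline. Below is a minimal toy sketch of that idea in Python; the field names, salt handling, and `pseudonymize` helper are illustrative assumptions, not a production recipe.

```python
import hashlib

def pseudonymize(record, pii_fields=("name", "email")):
    """Replace direct identifiers with salted hashes before storing a record."""
    salt = "replace-with-a-managed-secret"  # assumption: a real system keeps this in a secrets store
    clean = dict(record)
    for field in pii_fields:
        if field in clean:
            clean[field] = hashlib.sha256((salt + clean[field]).encode("utf-8")).hexdigest()
    return clean

# Non-identifying fields pass through untouched; identifiers become opaque hashes.
print(pseudonymize({"name": "Ada", "email": "ada@example.com", "age": 36}))
```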

It’s not just our personal data that’s at risk, either. AI hacking is a genuine risk — it hasn’t happened yet, but if those with malicious intent could hack into AI systems, this could have serious consequences. For example, hackers could control driverless vehicles, hack AI security systems to gain entry to highly secure locations, and even hack weapons systems with AI security.
Experts at the US Department of Defense’s Defense Advanced Research Projects Agency (DARPA) recognize these risks and are already working on DARPA’s Guaranteeing AI Robustness Against Deception (GARD) project, tackling the problem from the ground up. The project’s goal is to ensure that resistance to hacking and tampering is built into algorithms and AI.
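GARD’s own methods aren’t something we can reproduce here, but the class of manipulation it defends against is well documented. The Fast Gradient Sign Method (FGSM) is the textbook example of an adversarial attack: a tiny, targeted perturbation that can flip a classifier’s answer. A minimal sketch in PyTorch, assuming `model` is any differentiable image classifier and `image` and `label` are properly batched tensors:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft an adversarial example: nudge each pixel slightly in the
    direction that most increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixel values in a valid range
```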
Copyright Infringement
Another of the dangers of AI is copyright infringement. This may not sound as serious as some other dangers we’ve mentioned, but the development of AI models like GPT-4 puts everyone at increased risk of infringement.

Every time you ask ChatGPT to create something for you — whether that be a blog post on travel or a new name for your business — you’re feeding it information which it then uses to answer future queries. The information it feeds back to you could be infringing somebody else’s copyright, which is why it’s so important to use a plagiarism detector and edit any content created by AI before publishing it.
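Commercial plagiarism detectors are far more sophisticated than anything shown here, but the core idea (measuring how much of a text overlaps with an existing source) can be sketched in a few lines. A toy n-gram overlap check, for illustration only:

```python
def ngrams(text, n=5):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate, source, n=5):
    """Fraction of the candidate's word n-grams that also appear in the source.
    A score near 1.0 suggests the text closely mirrors the source."""
    cand = ngrams(candidate, n)
    return len(cand & ngrams(source, n)) / len(cand) if cand else 0.0
```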
Societal and Data Bias
AI isn’t human, so it can’t be biased, right? Wrong. People and data are used to train AI models and chatbots, which means biased data or personalities will result in a biased AI. There are two types of bias in AI: societal bias and data bias.

With many biases present in everyday society, what happens when these biases become part of AI? The programmers responsible for training the model could have expectations that are biased, which then make their way into AI systems.
Or data used to train and develop an AI could be incorrect, biased, or collected in bad faith. This leads to data bias, which can be as dangerous as societal bias. For example, if a system for facial recognition is trained using mainly white people’s faces, it may struggle to recognize those from minority groups, perpetuating oppression.
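One simple way to surface this kind of data bias is to measure a model’s accuracy separately for each demographic group rather than only in aggregate. A minimal sketch with made-up group names and labels:

```python
from collections import defaultdict

def accuracy_by_group(results):
    """results: iterable of (group, predicted, actual) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in results:
        totals[group] += 1
        hits[group] += int(predicted == actual)
    return {group: hits[group] / totals[group] for group in totals}

# A large gap between groups is a red flag that the training data was skewed.
print(accuracy_by_group([
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 1, 0),
]))  # {'group_a': 1.0, 'group_b': 0.0}
```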
Robots Taking Our Jobs
The development of chatbots such as ChatGPT and Google Bard has opened up a whole new worry surrounding AI: The risk that robots will take our jobs. We’re already seeing writers in the tech industry being replaced by AI, software developers worried they’ll lose their jobs to bots, and companies using ChatGPT to create blog content and social media content rather than hiring human writers.

According to the World Economic Forum’s The Future of Jobs Report 2020, AI is expected to replace 85 million jobs worldwide by 2025. Even if AI doesn’t replace writers, it’s already being used as a tool by many. Those in jobs at risk of being replaced by AI may need to adapt to survive — for example, writers may become AI prompt engineers, enabling them to work with tools like ChatGPT for content creation rather than being replaced by these models.
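For a sense of what that shift looks like in practice, here is a minimal sketch of calling a chat model programmatically via OpenAI’s chat completions endpoint; the model name, prompts, and `draft_blog_intro` helper are illustrative assumptions rather than a recommended setup:

```python
import os
import requests

def draft_blog_intro(topic):
    """Ask a chat model for a first draft; a human writer still edits the result."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # illustrative model choice
            "messages": [
                {"role": "system", "content": "You are a concise travel copywriter."},
                {"role": "user", "content": f"Draft a 150-word blog intro about {topic}."},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```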
Future Potential AI Risks
These are all immediate or looming risks, but what about some of the less likely but still possible dangers of AI we could see in the future? These include things like AI being programmed to harm humans, for example, autonomous weapons trained to kill during a war.

Then there’s the risk that AI could focus single-mindedly on its programmed goal, developing destructive behaviors as it attempts to accomplish that goal at all costs, even when humans try to stop this from happening.
Skynet taught us what happens when an AI becomes sentient. However, though Google engineer Blake Lemoine may have tried to convince everyone that LaMDA, Google’s artificially intelligent chatbot generator, was sentient back in June 2022, there’s thankfully no evidence to date to suggest that’s true.
The Challenges of AI Regulation
On Monday, May 15, 2023, OpenAI CEO Sam Altman attended the first congressional hearing on artificial intelligence, warning, “If this tech goes wrong, it can go quite wrong.” The OpenAI CEO made it clear he favors regulation and brought many of his own ideas to the hearing. The problem is that AI is evolving at such speed that it’s difficult to know where to start with regulation.
Congress wants to avoid making the same mistakes made at the beginning of the social media era, and a team of experts alongside Senate Majority Leader Chuck Schumer is already working on regulations that would require companies to reveal what data sources they used to train models and who trained them. It may be some time before it becomes clear exactly how AI will be regulated, though, and no doubt there will be backlash from AI companies.
The Threat of an Artificial General Intelligence
There’s also the risk of the creation of an artificial general intelligence (AGI) that could accomplish any task a human being (or animal) could perform. Often mentioned in sci-fi films, such a creation is probably still decades away, but if and when we do create an AGI, it could pose a threat to humanity.
Many public figures already endorse the belief that AI poses an existential threat to humans, including Stephen Hawking, Bill Gates, and even former Google CEO Eric Schmidt, who stated, “Artificial intelligence could pose existential risks and governments need to know how to make sure the technology is not misused by evil people.”
So, is artificial intelligence dangerous, and do its risks outweigh its benefits? The jury’s still out on that one, but we’re already seeing evidence of some of the risks around us right now. Other dangers are less likely to come to fruition anytime soon, if at all. One thing is clear, though: the dangers of AI shouldn’t be underestimated. It’s of the utmost importance that we ensure AI is properly regulated from the outset, to minimize and hopefully mitigate any future risks.