Hello everyone, I'm 達(dá)達(dá). Machine intelligence is here, and we are already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand, and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns, and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."

Talk title: In the age of artificial intelligence, we must hold on ever tighter to human ethics
From TED英語演說優(yōu)選 · length 17:45

So, I started my first job as a computer programmer in my very first year of college -- basically, as a teenager.

Soon after I started working, writing software in a company, a manager who worked at the company came down to where I was, and he whispered to me, 'Can he tell if I'm lying?' There was nobody else in the room.

'Can who tell if you're lying? And why are we whispering?' The manager pointed at the computer in the room. 'Can he tell if I'm lying?' Well, that manager was having an affair with the receptionist. And I was still a teenager. So I whisper-shouted back to him, 'Yes, the computer can tell if you're lying.'

Well, I laughed, but actually, the laugh's on me. Nowadays, there are computational systems that can suss out emotional states and even lying from processing human faces. Advertisers and even governments are very interested.

I had become a computer programmer because I was one of those kids crazy about math and science. But somewhere along the line I'd learned about nuclear weapons, and I'd gotten really concerned with the ethics of science. I was troubled. However, because of family circumstances, I also needed to start working as soon as possible. So I thought to myself, hey, let me pick a technical field where I can get a job easily and where I don't have to deal with any troublesome questions of ethics. So I picked computers.

Well, ha, ha, ha! All the laughs are on me. Nowadays, computer scientists are building platforms that control what a billion people see every day. They're developing cars that could decide who to run over. They're even building machines, weapons, that might kill human beings in war. It's ethics all the way down.

Machine intelligence is here. We're now using computation to make all sorts of decisions, but also new kinds of decisions. We're asking questions to computation that have no single right answers, that are subjective and open-ended and value-laden.

We're asking questions like, 'Who should the company hire?' 'Which update from which friend should you be shown?' 'Which convict is more likely to reoffend?' 'Which news item or movie should be recommended to people?'

Look, yes, we've been using computers for a while, but this is different.
This is a historical twist, because we cannot anchor computation for such subjective decisions the way we can anchor computation for flying airplanes, building bridges, going to the moon. Are airplanes safer? Did the bridge sway and fall? There, we have agreed-upon, fairly clear benchmarks, and we have laws of nature to guide us. We have no such anchors and benchmarks for decisions in messy human affairs.

To make things more complicated, our software is getting more powerful, but it's also getting less transparent and more complex. Recently, in the past decade, complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and block spam and they can translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.

Much of this progress comes from a method called 'machine learning.' Machine learning is different than traditional programming, where you give the computer detailed, exact, painstaking instructions. It's more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data. And also, crucially, these systems don't operate under a single-answer logic. They don't produce a simple answer; it's more probabilistic: 'This one is probably more like what you're looking for.'

Now, the upside is: this method is really powerful. The head of Google's AI systems called it, 'the unreasonable effectiveness of data.' The downside is, we don't really understand what the system learned. In fact, that's its power. This is less like giving instructions to a computer; it's more like training a puppy-machine-creature we don't really understand or control. So this is our problem. It's a problem when this artificial intelligence system gets things wrong. It's also a problem when it gets things right, because we don't even know which is which when it's a subjective problem. We don't know what this thing is thinking.

So, consider a hiring algorithm -- a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees' data and instructed to find and hire people like the existing high performers in the company. Sounds good.
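To make the contrast between rule-based programming and learning from data concrete, here is a minimal sketch of the kind of workflow the talk describes: a toy model is fit to made-up records of past employees and then emits a probability-style score for a new applicant instead of following a hand-written rule. The features, the data, and the use of scikit-learn are illustrative assumptions of mine, not a description of any real hiring system.

```python
# A toy "hire people like our current high performers" model (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past employee: [years_experience, interview_score, certifications]
X_train = np.array([
    [5, 8.5, 2],
    [2, 6.0, 0],
    [7, 9.0, 3],
    [1, 5.5, 1],
    [6, 8.0, 2],
    [3, 6.5, 0],
])
# 1 = later rated a "high performer", 0 = not (the label the talk mentions)
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# A new applicant gets a probability, not a yes/no rule:
applicant = np.array([[4, 7.0, 1]])
print(model.predict_proba(applicant))  # e.g. [[0.35, 0.65]] -- "probably more like what you're looking for"
```

Note that nothing in this sketch says what the fitted model actually keyed on, which is exactly the worry the talk turns to next.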
I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers.

And look -- human hiring is biased. I know. I mean, in one of my early jobs as a programmer, my immediate manager would sometimes come down to where I was really early in the morning or really late in the afternoon, and she'd say, 'Zeynep, let's go to lunch!' I'd be puzzled by the weird timing. It's 4pm. Lunch? I was broke, so free lunch. I always went. I later realized what was happening. My immediate managers had not confessed to their higher-ups that the programmer they hired for a serious job was a teen girl who wore jeans and sneakers to work. I was doing a good job, I just looked wrong and was the wrong age and gender.

So hiring in a gender- and race-blind way certainly sounds good to me. But with these systems, it is more complicated, and here's why: Currently, computational systems can infer all sorts of things about you from your digital crumbs, even if you have not disclosed those things. They can infer your sexual orientation, your personality traits, your political leanings. They have predictive power with high levels of accuracy. Remember -- for things you haven't even disclosed. This is inference.

I have a friend who developed such computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms -- months before. No symptoms, there's prediction. She hopes it will be used for early intervention. Great! But now put this in the context of hiring.
So at this human resources managers conference, I approached a high-level manager in a very large company, and I said to her, 'Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They're not depressed now, just maybe in the future, more likely. What if it's weeding out women more likely to be pregnant in the next year or two but aren't pregnant now? What if it's hiring aggressive people because that's your workplace culture?' You can't tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled 'higher risk of depression,' 'higher risk of pregnancy,' 'aggressive guy scale.' Not only do you not know what your system is selecting on, you don't even know where to begin to look. It's a black box. It has predictive power, but you don't understand it.

'What safeguards,' I asked, 'do you have to make sure that your black box isn't doing something shady?' She looked at me as if I had just stepped on 10 puppy tails. She stared at me and she said, 'I don't want to hear another word about this.' And she turned around and walked away. Mind you -- she wasn't rude. It was clearly: what I don't know isn't my problem, go away, death stare.

Look, such a system may even be less biased than human managers in some ways. And it could make monetary sense. But it could also lead to a steady but stealthy shutting out of the job market of people with higher risk of depression. Is this the kind of society we want to build, without even knowing we've done this, because we turned decision-making to machines we don't totally understand?

Another problem is this: these systems are often trained on data generated by our actions, human imprints. Well, they could just be reflecting our biases, and these systems could be picking up on our biases and amplifying them and showing them back to us, while we're telling ourselves, 'We're just doing objective, neutral computation.'

Researchers found that on Google, women are less likely than men to be shown job ads for high-paying jobs. And searching for African-American names is more likely to bring up ads suggesting criminal history, even when there is none. Such hidden biases and black-box algorithms that researchers uncover sometimes but sometimes we don't know, can have life-altering consequences.
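The points about inherited bias and about there being no variable labeled 'higher risk of depression' can be made concrete with a small synthetic experiment. The sketch below is entirely hypothetical and assumes nothing about any real vendor's product: a model is trained on biased historical hiring labels with the sensitive attribute withheld, yet it still penalizes a correlated proxy feature, and inspecting the fitted object yields only anonymous weights with nothing named after the thing you would want to audit.

```python
# Synthetic illustration of bias leaking in through a proxy feature (not any real system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)            # sensitive attribute, never given to the model
skill = rng.uniform(0, 1, n)             # genuinely job-relevant signal
proxy = group + rng.normal(0, 0.1, n)    # e.g. a neighborhood or school code correlated with group

# Biased historical labels: past managers almost never hired group 1, whatever the skill.
hired = ((skill > 0.5) & (group == 0)).astype(int)

X = np.column_stack([skill, proxy])      # note: 'group' itself is not a feature
model = LogisticRegression().fit(X, hired)

# Two equally skilled applicants who differ only in the proxy get very different scores:
print(model.predict_proba([[0.8, 0.0]])[0, 1])   # high
print(model.predict_proba([[0.8, 1.0]])[0, 1])   # much lower
# And the fitted object exposes only unlabeled coefficients -- no "bias" switch to inspect:
print(model.coef_)
```

Dropping this one proxy would help in the toy case, but real data usually contain many overlapping proxies, which is part of why the talk calls such systems black boxes.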
In Wisconsin, a defendant was sentenced to six years in prison for evading the police. You may not know this, but algorithms are increasingly used in parole and sentencing decisions. He wanted to know: How is this score calculated? It's a commercial black box. The company refused to have its algorithm be challenged in open court. But ProPublica, an investigative nonprofit, audited that very algorithm with what public data they could find, and found that its outcomes were biased and its predictive power was dismal, barely better than chance, and it was wrongly labeling black defendants as future criminals at twice the rate of white defendants.

So, consider this case: This woman was late picking up her godsister from a school in Broward County, Florida, running down the street with a friend of hers. They spotted an unlocked kid's bike and a scooter on a porch and foolishly jumped on it. As they were speeding off, a woman came out and said, 'Hey! That's my kid's bike!' They dropped it, they walked away, but they were arrested.

She was wrong, she was foolish, but she was also just 18. She had a couple of juvenile misdemeanors. Meanwhile, that man had been arrested for shoplifting in Home Depot -- 85 dollars' worth of stuff, a similar petty crime. But he had two prior armed robbery convictions. But the algorithm scored her as high risk, and not him. Two years later, ProPublica found that she had not reoffended. It was just hard for her to get a job with her record. He, on the other hand, did reoffend and is now serving an eight-year prison term for a later crime. Clearly, we need to audit our black boxes and not have them have this kind of unchecked power.
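One simplified slice of the kind of outside-in audit ProPublica describes is to take public records of risk scores and later outcomes and compare false-positive rates across groups. The tiny dataset below is invented purely for illustration and does not reproduce their actual figures or methodology.

```python
# A simplified audit: compare false-positive rates across groups from outcome records.
# The records below are made up; real audits use thousands of public case records.
from collections import defaultdict

# (group, labeled_high_risk, actually_reoffended_within_two_years)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("black", True, False), ("white", False, False), ("white", True, True),
    ("white", False, False), ("white", False, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, high_risk, reoffended in records:
    if not reoffended:                 # only people who did NOT go on to reoffend...
        counts[group]["negatives"] += 1
        if high_risk:                  # ...but were still labeled high risk
            counts[group]["fp"] += 1

for group, c in counts.items():
    print(f"{group}: false-positive rate = {c['fp'] / c['negatives']:.2f}")
```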
Audits are great and important, but they don't solve all our problems. Take Facebook's powerful news feed algorithm -- you know, the one that ranks everything and decides what to show you from all the friends and pages you follow. Should you be shown another baby picture? A sullen note from an acquaintance? An important but difficult news item? There's no right answer. Facebook optimizes for engagement on the site: likes, shares, comments.

In August of 2014, protests broke out in Ferguson, Missouri, after the killing of an African-American teenager by a white police officer, under murky circumstances. The news of the protests was all over my algorithmically unfiltered Twitter feed, but nowhere on my Facebook. Was it my Facebook friends? I disabled Facebook's algorithm, which is hard because Facebook keeps wanting to make you come under the algorithm's control, and saw that my friends were talking about it. It's just that the algorithm wasn't showing it to me. I researched this and found this was a widespread problem.

The story of Ferguson wasn't algorithm-friendly. It's not 'likable.' Who's going to click on 'like?' It's not even easy to comment on. Without likes and comments, the algorithm was likely showing it to even fewer people, so we didn't get to see this. Instead, that week, Facebook's algorithm highlighted this, which is the ALS Ice Bucket Challenge. Worthy cause; dump ice water, donate to charity, fine. But it was super algorithm-friendly. The machine made this decision for us. A very important but difficult conversation might have been smothered, had Facebook been the only channel.
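The dynamic described here, where a story that draws few likes is shown to fewer people and therefore draws even fewer likes, is a feedback loop, and it is easy to caricature in a few lines. The scoring rule and numbers below are invented for illustration; they are not Facebook's actual ranking function.

```python
# A caricature of engagement-based ranking and its feedback loop (not any real feed algorithm).

def engagement_score(story):
    # Invented weights: reactions, comments and shares drive the ranking.
    return 1.0 * story["likes"] + 2.0 * story["comments"] + 3.0 * story["shares"]

stories = [
    {"name": "ice bucket challenge video", "likes": 400, "comments": 50, "shares": 120},
    {"name": "ferguson protest report",    "likes": 12,  "comments": 30, "shares": 8},
]

for round_number in range(3):
    # Rank by engagement; only the top story gets wide distribution this round.
    stories.sort(key=engagement_score, reverse=True)
    top, rest = stories[0], stories[1:]
    top["likes"] += 100          # wide distribution -> more engagement next round
    for story in rest:
        story["likes"] += 5      # little distribution -> little new engagement
    print([(s["name"], round(engagement_score(s))) for s in stories])
```

Whatever weights are chosen, any rule of this shape will systematically bury stories that people read but do not 'like'.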
Now, finally, these systems can also be wrong in ways that don't resemble human systems. Do you guys remember Watson, IBM's machine-intelligence system that wiped the floor with human contestants on Jeopardy? It was a great player. But then, for Final Jeopardy, Watson was asked this question: 'Its largest airport is named for a World War II hero, its second-largest for a World War II battle.'

(Hums Final Jeopardy music) Chicago. The two humans got it right. Watson, on the other hand, answered 'Toronto' -- for a US city category! The impressive system also made an error that a human would never make, a second-grader wouldn't make.

Our machine intelligence can fail in ways that don't fit error patterns of humans, in ways we won't expect and be prepared for. It'd be lousy not to get a job one is qualified for, but it would triple suck if it was because of stack overflow in some subroutine.

In May of 2010, a flash crash on Wall Street fueled by a feedback loop in Wall Street's 'sell' algorithm wiped a trillion dollars of value in 36 minutes. I don't even want to think what 'error' means in the context of lethal autonomous weapons.

So yes, humans have always made biases. Decision makers and gatekeepers, in courts, in news, in war ... they make mistakes; but that's exactly my point. We cannot escape these difficult questions. We cannot outsource our responsibilities to machines. Artificial intelligence does not give us a 'Get out of ethics free' card.

Data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency. We need to accept that bringing math and computation to messy, value-laden human affairs does not bring objectivity; rather, the complexity of human affairs invades the algorithms. Yes, we can and we should use computation to help us make better decisions. But we have to own up to our moral responsibility to judgment, and use algorithms within that framework, not as a means to abdicate and outsource our responsibilities to one another as human to human.

Machine intelligence is here. That means we must hold on ever tighter to human values and human ethics. Thank you.