Researchers Get Humans To Think Like Computers

Source: eurasiareview | Date: 2019-03-28

Computers, like those that power self-driving cars, can be tricked into mistaking random scribbles for trains, fences and even school buses. People aren’t supposed to be able to see how those images trip up computers, but in a new study, Johns Hopkins University researchers show most people actually can.

The findings suggest modern computers may not be as different from humans as we think, and demonstrate how advances in artificial intelligence continue to narrow the gap between the visual abilities of people and machines. The research appears today in the journal Nature Communications.

“Most of the time, research in our field is about getting computers to think like people,” says senior author Chaz Firestone, an assistant professor in Johns Hopkins’ Department of Psychological and Brain Sciences. “Our project does the opposite — we’re asking whether people can think like computers.”

What’s easy for humans is often hard for computers. Artificial intelligence systems have long been better than people at doing math or remembering large quantities of information; but for decades humans have had the edge at recognizing everyday objects such as dogs, cats, tables or chairs. Recently, though, “neural networks” that mimic the brain have approached the human ability to identify objects, leading to technological advances that support self-driving cars and facial recognition programs and help physicians spot abnormalities in radiological scans.

But even with these technological advances, there’s a critical blind spot: It’s possible to purposely make images that neural networks cannot correctly see. And these images, called “adversarial” or “fooling” images, are a big problem: Not only could they be exploited by hackers and cause security risks, but they suggest that humans and machines are actually seeing images very differently.

In some cases, all it takes for a computer to call an apple a car is reconfiguring a pixel or two. In other cases, machines see armadillos and bagels in what looks like meaningless television static.
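Perturbations like these are commonly found with gradient-based attacks. Below is a minimal sketch of one standard technique, the fast gradient sign method (FGSM), in PyTorch; the tiny stand-in network, the random placeholder image, and the step size are assumptions for illustration, not the models or fooling images used in the study, which need not have been produced this way.

```python
# A minimal FGSM sketch: nudge every pixel a tiny step in the direction that
# increases the classifier's loss. The network and "image" are stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier; any differentiable image model would work the same way.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),  # ten made-up object classes
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "apple" image
label = torch.tensor([0])                             # its hypothetical true class

# Loss under the correct label, then its gradient with respect to the pixels.
loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

epsilon = 2.0 / 255.0  # a change of roughly a pixel intensity level or two
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    print("prediction before:", model(image).argmax(dim=1).item())
    print("prediction after: ", model(adversarial).argmax(dim=1).item())
```

With a trained classifier, a perturbation this small is typically invisible to a person yet can be enough to flip the predicted label.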

“These machines seem to be misidentifying objects in ways humans never would,” Firestone says. “But surprisingly, nobody has really tested this. How do we know people can’t see what the computers did?”

To test this, Firestone and lead author Zhenglong Zhou, a Johns Hopkins senior majoring in cognitive science, essentially asked people to “think like a machine”. Machines have only a relatively small vocabulary for naming images. So, Firestone and Zhou showed people dozens of fooling images that had already tricked computers, and gave people the same kinds of labeling options that the machine had. In particular, they asked people which of two options the computer decided the object was – one being the computer’s real conclusion and the other a random answer. (Was the blob pictured a bagel or a pinwheel?) It turns out, people strongly agreed with the conclusions of the computers.
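To make the procedure concrete, here is a toy sketch of one such forced-choice trial; the label vocabulary, the random foil, and the simulated subject are all invented stand-ins, not the study’s materials.

```python
# A toy two-alternative forced-choice trial: pair the machine's actual label
# for a fooling image with a randomly drawn foil, and record agreement.
import random

random.seed(1)

LABELS = ["bagel", "pinwheel", "pretzel", "armadillo", "baseball"]  # invented vocabulary

def run_trial(machine_label, subject_pick):
    """Present the machine's label against a random foil; True if the subject agrees."""
    foil = random.choice([l for l in LABELS if l != machine_label])
    options = [machine_label, foil]
    random.shuffle(options)  # so the machine's answer isn't always listed first
    return subject_pick(options) == machine_label

# Stand-in subject who guesses blindly (real subjects also saw the fooling image).
results = [run_trial("bagel", lambda opts: random.choice(opts)) for _ in range(10_000)]
print(f"agreement rate: {sum(results) / len(results):.1%}")  # ~50%, the chance baseline
```

A subject with no usable information scores about 50 percent in this design, which is the chance baseline against which the agreement figures below stand out.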

People chose the same answer as computers 75 percent of the time. Perhaps even more remarkably, 98 percent of people tended to answer like the computers did.

Next, researchers upped the ante by giving people a choice between the computer’s favorite answer and its next-best guess. (Was the blob pictured a bagel or a pretzel?) People again validated the computer’s choices, with 91 percent of those tested agreeing with the machine’s first choice.

Even when the researchers had people guess between 48 choices for what the object was, and even when the pictures resembled television static, an overwhelming proportion of the subjects chose what the machine chose, at rates well above random chance. A total of 1,800 subjects were tested throughout the various experiments.
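For a rough sense of scale on “well above random chance”: in the 48-alternative condition, chance agreement is only about 2 percent, and a simple binomial test shows how even a modest agreement count clears it. The counts below are invented for illustration, not the study’s data.

```python
# A rough sketch of a chance-level comparison for the 48-choice condition:
# is the observed rate of machine-matching picks greater than 1/48?
from scipy.stats import binomtest

n_trials = 200           # hypothetical number of responses
n_alternatives = 48      # the 48-choice condition described above
chance = 1.0 / n_alternatives

agreements = 30          # invented count of machine-matching picks (15%)

result = binomtest(agreements, n_trials, p=chance, alternative="greater")
print(f"agreement: {agreements / n_trials:.1%} vs chance {chance:.1%}, "
      f"p = {result.pvalue:.2g}")
```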

“We found if you put a person in the same circumstance as a computer, suddenly the humans tend to agree with the machines,” Firestone says. “This is still a problem for artificial intelligence, but it’s not like the computer is saying something completely unlike what a human would say.”
