Robert Elliott Smith
The election season is winding up, and my social media is once again awash with political stories. Headlines stream: “Warren and Bernie's awkward truce...”, “Trump sees his base growing...” and “The Fed's real message...”. This is the America I see today.
The trouble is, it's not the America you see or that anyone else sees. It is my personally curated version of reality: a constantly shifting mirage, evolving in real time, depending on my likes and dislikes, what I click on, and what I share.
A recent Pew Research Center study found that black social media users are more likely to see race-related news. The Mueller report suggests that Russian efforts against Hillary Clinton targeted Bernie Sanders supporters. In October 2016, Brad Parscale, then the digital director of President Trump's 2016 campaign, told Bloomberg News that he targeted Facebook and media posts at possible Clinton supporters so that they would sit the election out.
Parscale, who as of early August has spent more ($9.2 million) on Facebook ads for Trump 2020 than the four top Democratic candidates combined, said that in 2016 he typically ran 50,000 ad variations each day, micro-targeting different segments of the electorate.
Algorithms are prejudiced
While political operatives' exploitation of yellow journalism is nothing new, the coupling of their manipulative techniques to a technologically driven world is a substantial change. Algorithms are now the most powerful curators of information, and their actions enable such manipulation by creating our fractured informational multiverse.
And those algorithms are prejudiced. That may sound extreme, but let me explain.
In analyses conducted with colleagues at University College London (UCL), we modeled the behavior of social networks using binary signals (1s and 0s) passed between simplified “agents” that represented people sharing opinions about a divisive issue (say, pro-life versus pro-choice, or the merits of building a wall or not).
Most “agents” in this model determine the signals they broadcast based on the signals they receive from those around them (as we do when sharing news and stories online). But we added a small number of agents we called “motivated reasoners,” who, regardless of what they hear, only broadcast their own pre-determined opinion.
Our results showed that in every case, motivated reasoners came to dominate the conversation, driving all other agents to fixed opinions, thus polarizing the network. This suggests that “echo chambers” are an inevitable consequence of social networks that include motivated reasoners.
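For readers who want something concrete, here is a minimal Python sketch of the kind of model described above. It is an illustrative toy, not the actual UCL code; the agent counts, network size and majority-copying update rule are simplifying assumptions made only for this example.

```python
# Toy illustration of the model described above (not the actual UCL code).
# Most agents copy the majority opinion of their neighbors; a few
# "motivated reasoners" broadcast a fixed opinion no matter what they hear.
import random

def simulate(n_agents=100, n_motivated=4, n_neighbors=6, steps=200, seed=1):
    rng = random.Random(seed)
    opinions = [rng.randint(0, 1) for _ in range(n_agents)]
    # The first n_motivated agents are motivated reasoners, split between
    # the two fixed opinions (0 and 1).
    fixed_opinion = {i: i % 2 for i in range(n_motivated)}
    # Each agent listens to a random, fixed set of neighbors.
    neighbors = [rng.sample([j for j in range(n_agents) if j != i], n_neighbors)
                 for i in range(n_agents)]

    for _ in range(steps):
        updated = opinions[:]
        for i in range(n_agents):
            if i in fixed_opinion:
                updated[i] = fixed_opinion[i]  # never swayed by neighbors
            else:
                heard = [opinions[j] for j in neighbors[i]]
                updated[i] = 1 if 2 * sum(heard) > len(heard) else 0  # follow the majority
        opinions = updated
    return opinions

if __name__ == "__main__":
    final = simulate()
    print("fraction broadcasting opinion 1:", sum(final) / len(final))
```

Running variations of this setup, with different mixes of ordinary agents and motivated reasoners, is one simple way to watch opinions harden around the fixed broadcasters.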
So who are these motivated reasoners? You might assume they are political campaigners, lobbyists or even just your most dogmatic Facebook friend. But, in reality, the most motivated reasoners online are the algorithms that curate our online news.
How technology generalizes
In the online media economy, the artificial intelligence inside curation algorithms is single-minded in pursuing its profit-driven agenda: maximizing the frequency of human interaction by getting the user to click on an advertisement. But AIs are not only economically single-minded, they are also statistically simple-minded.
Take, for example, the 2016 story in The Guardian about Google searches for “unprofessional hair” returning images predominantly of black women.
Does this reveal a deep social bias towards racism and sexism? To conclude this, one would have to believe that people are using the term “unprofessional hair” in close correlation with images of black women to such an extent as to suggest most people feel their hairstyles define “unprofessional.” Regardless of societal bias (which certainly exists), this seems doubtful.
Having worked in AI for 30 years, I know it is probably more statistically reliable for algorithms to recognize black women's hairstyles than those of black men, white women, etc. This is simply an aspect of how algorithms “see”: by using overall features of color, shape, and size. Just as with real-world racism, resorting to simple features is easier for algorithms than deriving any real understanding of people. AIs codify this effect.
To be prejudiced means to pre-judge on simplified features, and then draw generalizations from those assumptions. This process is precisely what algorithms do technically. It is how they parse the incomprehensible “Big Data” from our online interactions into something digestible. AI engineers like me explicitly program generalization as a goal of the algorithms we design.
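As a toy illustration of that kind of technical pre-judging (hypothetical data, not any deployed system), the sketch below reduces items to a few coarse numeric features and then assigns every new item to whichever group average it sits closest to:

```python
# Toy illustration of generalizing from simplified features (hypothetical data,
# not any deployed system): items are reduced to a few coarse numeric features,
# and every new item is judged purely by its distance to group averages.
from statistics import mean

def fit_centroids(examples):
    """examples: list of (features, label) pairs. Returns label -> average features."""
    grouped = {}
    for features, label in examples:
        grouped.setdefault(label, []).append(features)
    return {label: tuple(mean(column) for column in zip(*rows))
            for label, rows in grouped.items()}

def predict(centroids, features):
    # The "generalization": whichever group's average is closest wins,
    # with no understanding of the item beyond its coarse features.
    distance = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: distance(centroids[label], features))

# Two coarse features per item (think "overall color" and "overall shape" scores).
training = [((0.2, 0.8), "group A"), ((0.3, 0.7), "group A"),
            ((0.9, 0.1), "group B"), ((0.8, 0.2), "group B")]
centroids = fit_centroids(training)
print(predict(centroids, (0.25, 0.75)))  # -> "group A", judged by feature proximity alone
```

The point of the sketch is only that the judgment is made entirely from simplified features and learned averages; nothing about the individual item is actually understood.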
Given the simplifying features that algorithms use (gender, race, political persuasion, religion, age, etc.) and the statistical generalizations they draw, the real-life consequence is informational segregation, not unlike previous racial and social segregation.
Dangerous, divisive consequences
Groups striving for economic and political power will inevitably exploit these divisions, using techniques such as targeted marketing and digital gerrymandering to categorize groups. The consequence is not merely the outcome in an election, but the propagation of deep divisions in the real world we inhabit.
Recently, Sen. Kamala Harris spoke about how federally mandated desegregation busing transformed her life opportunities. Like her, I benefited from that conscious effort to mix segregated communities when, during my childhood in 1970s Birmingham, Alabama, black children were bused to my all-white elementary school. Those first real interactions I had with children of a different race radically altered my perspective of the world.
The busing of the past ought now to inspire efforts to overcome the digital segregation we see today. Our studies at UCL indicate that the key to counteracting the natural tendency of algorithmically mediated social networks to segregate is to technically promote the mixing of ideas, through greater informational connectivity between people.
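Sticking with the toy model sketched earlier, one way to picture “greater informational connectivity” is to periodically rewire a small share of each agent's links to randomly chosen agents outside their usual circle. The function below is an illustrative sketch of that idea under the earlier model's assumptions, not the specific intervention studied at UCL.

```python
# Illustrative sketch of promoting mixing in the earlier toy model (an assumption
# for illustration, not the intervention studied at UCL): rewire a small share of
# each agent's links to randomly chosen agents, increasing exposure to ideas
# from outside the agent's usual circle.
import random

def rewire_for_mixing(neighbors, mix_rate=0.1, seed=2):
    """neighbors: list of per-agent neighbor lists (as in the earlier sketch)."""
    rng = random.Random(seed)
    n_agents = len(neighbors)
    for i, links in enumerate(neighbors):
        for k in range(len(links)):
            if rng.random() < mix_rate:
                candidate = rng.randrange(n_agents)
                if candidate != i and candidate not in links:
                    links[k] = candidate  # replace a link with a cross-cutting one
    return neighbors
```

Calling something like this between update steps of the earlier simulation lets one compare how quickly opinions freeze with and without the extra mixing.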
Practically, this may mean the regulation of online media, and an imperative for AI engineers to design algorithms around new principles that balance optimization with the promotion of diverse ideas. This scientific shift in perspective will ensure a healthier mix of information, particularly around polarizing issues, just as those buses enabled racial and social mixing in my youth.