Can you trust your AI to be unbiased? A recent research paper suggests it’s a little more complicated.
Unfortunately, bias isn’t just a bug — it’s a persistent feature without proper cryptographic guardrails. A September 2024 study from Imperial College London shows how zero-knowledge proofs (ZKPs) can help companies verify that their machine learning (ML) models treat all demographic groups equally while still keeping model details and user data private.
We recently covered how ZKPs are being used to detect fraud and anomalies in financial transactions. But in this case, ZKPs can be applied to verify the fairness property of ML models.
When discussing "fairness," we're entering a complicated area. There are several mathematical definitions of fairness, and the preferred definition shifts with the political landscape. For instance, consider the US government's approach to fairness over the past two administrations.
The previous administration was focused on diversity, equity and inclusion. They used demographic parity as a key measure of fairness, aiming to ensure the output probability of a specific prediction is the same across different groups.
But as we integrate more ML models into critical systems like college admissions, home loans and future job prospects, we could use a little more reassurance that AI is treating us fairly.
We need to be sure that any attestation of fairness keeps the underlying ML model and training data confidential. It has to protect intellectual property and users’ privacy while still giving users enough visibility to know that the model is not discriminatory. Not an easy task.
Enter zero-knowledge proofs.
ZKML (zero-knowledge machine learning) is how we use zero-knowledge proofs to verify that an ML model is what it says on the box. ZKML combines zero-knowledge cryptography with machine learning to create systems that can verify AI properties without exposing the underlying models or data. We can also take that concept and use ZKPs to identify ML models that treat everyone equally and fairly.
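To make the "zero-knowledge" part concrete, here is a minimal sketch of a classic Schnorr proof of knowledge in Python: the prover convinces a verifier that it knows a secret exponent x behind a public value y = g^x mod p, without ever revealing x. This is a toy under stated assumptions, not the paper's protocol; production ZKML systems compile entire model evaluations into circuits and prove them with SNARK-style systems, but the commit/challenge/response shape is the same.

```python
import hashlib
import secrets

# Toy group parameters, for illustration only. Real deployments use
# large standardized groups or elliptic curves.
p = 23   # prime modulus
q = 11   # prime order of the subgroup (q divides p - 1)
g = 2    # generator of the order-q subgroup mod p

def prove(x: int):
    """Prove knowledge of x where y = g^x mod p (Fiat-Shamir, non-interactive)."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)                     # fresh random nonce
    t = pow(g, r, p)                             # commitment
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % q
    s = (r + c * x) % q                          # response; alone it leaks nothing about x
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Accept iff g^s == t * y^c mod p, recomputing the challenge c."""
    c = int.from_bytes(hashlib.sha256(f"{t}:{y}".encode()).digest(), "big") % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret_x = 7                                     # the prover's secret
assert verify(*prove(secret_x))                  # verifier accepts without seeing x
```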
Recently, we covered how ZKPs were becoming efficient enough to perform at scale. Previously, using ZKPs to prove AI fairness was extremely limited because the proofs could only cover one phase of the ML pipeline. That left room for dishonest model providers to construct data sets that satisfied the fairness requirements even when the model itself did not. The proofs also carried unrealistic computational demands and long wait times.
But in recent months, ZK frameworks have become efficient enough to scale ZKPs to synthesis tasks, such as quickly generating diverse pieces of content or merging large amounts of data. That makes it possible to integrate ZKPs for detecting fraud or anomalies in financial transactions, a critical step toward large-scale adoption.
So how do we measure whether an AI is fair? Let's break down three of the most common group fairness definitions, formalized just after the list:
* Demographic parity
* Equality of opportunity
* Predictive equality
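Using standard notation, write $\hat{Y}$ for the model's prediction, $Y$ for the true label, and $A$ for the protected group attribute. For any two groups $a$ and $b$, the three definitions are commonly formalized as:

$$
\begin{aligned}
\text{Demographic parity: } & P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b) \\
\text{Equality of opportunity: } & P(\hat{Y}=1 \mid Y=1, A=a) = P(\hat{Y}=1 \mid Y=1, A=b) \\
\text{Predictive equality: } & P(\hat{Y}=1 \mid Y=0, A=a) = P(\hat{Y}=1 \mid Y=0, A=b)
\end{aligned}
$$

In words: equal positive-prediction rates, equal true positive rates, and equal false positive rates across groups.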
As we mentioned, diversity, equity and inclusion departments often use demographic parity when trying to mirror the demographics of a population in a company's workforce. It is not the ideal fairness metric for ML models, because it requires the probability of a specific prediction to be identical across groups, and we wouldn't necessarily expect every group to have the same outcomes.
Equality of opportunity is easy for most people to understand. It gives every group the same chance of a positive outcome, assuming the candidates are equally qualified. In other words, it does not optimize for outcomes; it only requires that every demographic have the same opportunity to get a job or a home loan.
Likewise, predictive equality measures whether an ML model makes predictions with the same accuracy across demographics, so that no one is penalized simply for being part of a group. In both cases, the ML model is not putting its thumb on the scale for equity reasons; it is only ensuring that no group is systematically discriminated against in any way. And that is an eminently sensible fix.
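A short sketch makes the distinction concrete. On a batch of binary predictions, each metric reduces to comparing a simple conditional rate between groups; the records below are hypothetical:

```python
# Toy batch of (group, true_label, prediction) triples; the data is made up.
records = [
    ("a", 1, 1), ("a", 1, 0), ("a", 0, 0), ("a", 0, 1),
    ("b", 1, 1), ("b", 1, 1), ("b", 0, 0), ("b", 0, 0),
]

def positive_rate(group, label=None):
    """P(prediction = 1 | group), optionally conditioned on the true label."""
    rows = [r for r in records if r[0] == group and (label is None or r[1] == label)]
    return sum(pred for _, _, pred in rows) / len(rows)

for grp in ("a", "b"):
    print(grp,
          "positive rate:", positive_rate(grp),   # demographic parity compares these
          "TPR:", positive_rate(grp, label=1),    # equality of opportunity
          "FPR:", positive_rate(grp, label=0))    # predictive equality
```

On this toy batch both groups have an identical positive-prediction rate of 0.5, so demographic parity holds, yet their true and false positive rates differ sharply (0.5 vs. 1.0, and 0.5 vs. 0.0), which is exactly why the choice of definition matters.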
Over the past year, the US government and other countries have issued statements and mandates around AI fairness and protecting the public from ML bias. Now, with a new administration in the US, there will likely be a different approach to AI fairness, shifting the focus back to equality of opportunity and away from equity.
As political landscapes change, so do the definitions of fairness in AI, moving between those focused on equity and those focused on opportunity. We are proponents of ML models that treat everyone equally without needing to put a thumb on the scale. And ZKPs can serve as an airtight way to verify that ML models are doing this without revealing private data.
While ZKPs have faced plenty of scalability challenges over the years, the technology is finally becoming more affordable for mainstream use cases. We can use ZKPs to verify training data integrity, protect privacy, and ensure the models we’re using are what they say they are.
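As one small, concrete building block from that list, committing to training data can be as simple as publishing a digest before training, so that any later claim or proof is bound to exactly that data set. The sketch below is a plain hash commitment, which is not zero-knowledge on its own; in a full ZKML pipeline a commitment like this would sit inside the proof system, and the names here are hypothetical:

```python
import hashlib

def commit_dataset(rows):
    """Bind a training set to a single digest; any later change is detectable."""
    h = hashlib.sha256()
    for row in rows:                              # order-sensitive by design
        h.update(hashlib.sha256(row).digest())    # hash each record, then chain
    return h.hexdigest()

training_rows = [b"row-1", b"row-2", b"row-3"]    # hypothetical records
published_digest = commit_dataset(training_rows)  # published before training

# Later, anyone holding the same rows can re-derive and compare the digest.
assert commit_dataset(training_rows) == published_digest
```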