Can You Trust Your AI to Be Unbiased? By Rob Viglione

2025/04/03 23:24

Can you trust your AI to be unbiased? A recent research paper suggests it’s a little more complicated.

Unfortunately, bias isn’t just a bug — it’s a persistent feature without proper cryptographic guardrails. A September 2024 study from Imperial College London shows how zero-knowledge proofs (ZKPs) can help companies verify that their machine learning (ML) models treat all demographic groups equally while still keeping model details and user data private.

We recently covered how ZKPs are being used to detect fraud and anomalies in financial transactions. But in this case, ZKPs can be applied to verify the fairness property of ML models.

When discussing "fairness," we're entering a complicated area. There are several mathematical definitions of fairness, and the preferred definition shifts with the political landscape. For instance, consider the US government's approach to fairness over the past two administrations.

The previous administration was focused on diversity, equity and inclusion. They used demographic parity as a key measure of fairness, aiming to ensure the output probability of a specific prediction is the same across different groups.

But as we integrate more ML models into critical systems like college admissions, home loans and future job prospects, we could use a little more reassurance that AI is treating us fairly.

We need to be sure that any attestations of fairness keep the underlying ML models and training data confidential. They need to protect intellectual property and users’ privacy while providing enough access for users to know that their model is not discriminatory. Not an easy task.

Enter zero-knowledge proofs.

ZKML (zero-knowledge machine learning) is how we use zero-knowledge proofs to verify that an ML model does what it says on the box. ZKML combines zero-knowledge cryptography with machine learning to create systems that can verify AI properties without exposing the underlying models or data. We can also extend that concept, using ZKPs to identify ML models that treat everyone equally and fairly.
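A full ZKP protocol is far beyond a short snippet, but one building block such systems rely on, committing to a model so that a later attestation provably refers to exactly those weights, can be sketched in a few lines. This is a hedged illustration, not the construction from the study; the function name and weight format are our own:

```python
import hashlib
import json

def commit_to_model(weights):
    """Return a SHA-256 commitment to model parameters.

    This is only the binding half of a ZK setup: it hides and proves
    nothing by itself, but it ties any later fairness attestation to one
    exact set of weights, so a provider cannot swap models after an audit.
    """
    blob = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Illustrative weights; a real model would serialize its full parameter set.
weights = {"layer1": [0.12, -0.8, 0.33], "bias": [0.05]}
commitment = commit_to_model(weights)

# A verifier later recomputes the hash over the audited weights and
# checks it against the published commitment.
assert commit_to_model(weights) == commitment
```

Serializing with `sort_keys=True` keeps the commitment deterministic regardless of how the parameter dictionary was built.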

Recently, we covered how ZKPs are becoming efficient enough to run at scale. Previously, using ZKPs to prove AI fairness was extremely limited because they could only focus on one phase of the ML pipeline. This made it possible for dishonest model providers to construct datasets that satisfied the fairness requirements even when the model did not. ZKPs also introduced unrealistic computational demands and long wait times to produce proofs of fairness.

But in recent months, ZK frameworks have become efficient enough to scale ZKPs to synthesis tasks such as quickly generating diverse pieces of content or merging large amounts of data. This makes it practical to integrate ZKPs for detecting fraud or anomalies in financial transactions, a critical step toward large-scale adoption.

So how do we measure whether an AI is fair? Let's break down three of the most common group fairness definitions:

* Demographic parity

* Equality of opportunity

* Predictive equality

As we mentioned, diversity, equity and inclusion departments often use demographic parity as a measurement to attempt to reflect the demographics of a population in a company's workforce. It's not the ideal fairness metric for ML models because it only compares the probability of a specific prediction across groups: we wouldn't necessarily expect every group to have the same outcomes.

Equality of opportunity is easy for most people to understand. It gives every group the same chance of a positive outcome, assuming its members are equally qualified. It does not optimize for outcomes, only for access: every demographic should have the same opportunity to get a job or a home loan.

Likewise, predictive equality measures whether an ML model makes predictions with the same accuracy across various demographics, so no one is penalized simply for being part of a group. In both cases, the ML model is not putting its thumb on the scale for equity reasons but only ensuring that no group is being systematically discriminated against in any way. And that is an eminently sensible fix.
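All three definitions reduce to comparing simple conditional rates between two groups. A minimal sketch, assuming binary labels, binary predictions, and two demographic groups (the function name and return format are our own, not from the Imperial College study):

```python
import numpy as np

def fairness_metrics(y_true, y_pred, group):
    """Absolute fairness gaps between two groups for binary classification.

    y_true, y_pred: 0/1 arrays of true labels and model predictions.
    group: 0/1 array of demographic group membership.
    A gap of 0.0 means perfectly fair under that definition.
    """
    a, b = (group == 0), (group == 1)
    # Demographic parity: P(pred = 1) should match across groups.
    dp = abs(y_pred[a].mean() - y_pred[b].mean())
    # Equality of opportunity: true-positive rates should match.
    tpr = lambda s: y_pred[s & (y_true == 1)].mean()
    eo = abs(tpr(a) - tpr(b))
    # Predictive equality: false-positive rates should match.
    fpr = lambda s: y_pred[s & (y_true == 0)].mean()
    pe = abs(fpr(a) - fpr(b))
    return {"demographic_parity": dp,
            "equal_opportunity": eo,
            "predictive_equality": pe}
```

Note that a model can satisfy one definition while violating another: both groups below receive positive predictions at the same rate (parity gap 0), yet the true-positive rates differ.

```python
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
m = fairness_metrics(y_true, y_pred, group)
# m["demographic_parity"] is 0.0; m["equal_opportunity"] is 0.5
```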

Over the past year, the US government and other countries have issued statements and mandates around AI fairness and protecting the public from ML bias. Now, with a new administration in the US, there will likely be a different approach to AI fairness, shifting the focus back to equality of opportunity and away from equity.

As political landscapes change, so do the definitions of fairness in AI, moving between those focused on equity and those focused on opportunity. We are proponents of ML models that treat everyone equally without needing to put a thumb on the scale. And ZKPs can serve as an airtight way to verify that ML models are doing this without revealing private data.

While ZKPs have faced plenty of scalability challenges over the years, the technology is finally becoming more affordable for mainstream use cases. We can use ZKPs to verify training data integrity, protect privacy, and ensure the models we’re using are what they say they are.
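The circuits that prove fairness properties are involved, but a common building block for committing to training-data integrity is a Merkle root over the dataset: the provider publishes a single hash, and can later prove individual records were included without revealing the rest. A minimal sketch, with our own illustrative record format (not the specific construction from the study):

```python
import hashlib

def merkle_root(records):
    """Merkle root (hex digest) over a list of training-data records.

    The published root commits to the whole dataset; membership of any
    single record can later be proven with a logarithmic-size path,
    without disclosing the other records.
    """
    level = [hashlib.sha256(r.encode()).hexdigest() for r in records]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256((level[i] + level[i + 1]).encode()).hexdigest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Changing any single record changes the root, so tampering is detectable.
root = merkle_root(["alice,approved", "bob,denied", "carol,approved"])
```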
