Cryptocurrency Prices (24h change)

bitcoin        $71990.68 USD    +0.33%
ethereum       $2694.92 USD     +2.53%
tether         $0.999687 USD    +0.01%
bnb            $597.44 USD      -1.48%
solana         $174.14 USD      -3.03%
usd-coin       $0.999937 USD    -0.01%
xrp            $0.522528 USD    -0.34%
dogecoin       $0.166008 USD    -1.51%
tron           $0.168618 USD    +1.79%
toncoin        $4.99 USD        -0.81%
cardano        $0.353730 USD    +1.76%
shiba-inu      $0.000018 USD    -2.10%
avalanche      $26.15 USD       -1.22%
chainlink      $12.20 USD       +6.23%
bitcoin-cash   $371.36 USD      -2.91%

Cryptocurrency News Articles

Nvidia CEO Predicts Human-Level AI in Five Years, Unveils Solution to AI Hallucinations

Mar 23, 2024 at 12:12 am

Nvidia CEO Jensen Huang predicts that human-level artificial intelligence (AI) could be achieved within the next five years. He suggests that defining AGI through specific tests, where software outperforms humans by a significant margin, could lead to its realization. Huang also addresses the issue of "hallucinations" in AI, proposing a solution that requires AI to verify answers by conducting research before providing responses. Resolving this problem could have profound implications for industries like finance and cryptocurrency, where accuracy is paramount and generative AI systems are currently limited.

Nvidia CEO Predicts Human-Level AI Within Five Years, Unveils Solution to AI Hallucinations

San Jose, California - March 20, 2024 - In a speech at the Nvidia GTC developers conference, Nvidia CEO Jensen Huang forecast that human-level artificial intelligence (AI) could be attainable within the next five years. Huang's announcement comes amid growing optimism in the AI community, driven by rapid advances in large language models (LLMs).

One of the key challenges facing the realization of human-level AI is the phenomenon of "hallucination," where AI systems generate inaccurate or fictitious information. The issue stems from limitations of current LLM training techniques, which leave models unable to reliably distinguish real-world facts from fabricated ones.

However, Huang presented a straightforward solution to this problem during his keynote address: requiring AI systems to verify their answers by conducting research before providing responses. This approach aims to address the tendency of AI systems to generate responses based on limited or inaccurate data, leading to the production of false or misleading information.

"We can solve the hallucination problem by requiring AI to do research," Huang stated. "By forcing AI to check its answers before providing them, we can significantly reduce the risk of hallucinations."
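The verify-before-answering idea Huang describes can be sketched as a simple control loop: generate a candidate answer, retrieve supporting evidence, and only return the answer if the evidence backs it up. The sketch below is purely illustrative, not Nvidia's implementation; the `generate_answer` and `retrieve_evidence` stand-ins are hypothetical placeholders for an LLM call and a search/retrieval step.

```python
def generate_answer(question: str) -> str:
    # Hypothetical stand-in for an LLM call.
    canned = {"What year was Bitcoin launched?": "2009"}
    return canned.get(question, "unknown")

def retrieve_evidence(question: str) -> list[str]:
    # Hypothetical stand-in for a retrieval/search step.
    corpus = {"What year was Bitcoin launched?": ["Bitcoin launched in 2009."]}
    return corpus.get(question, [])

def verified_answer(question: str) -> str:
    """Return the model's answer only if retrieved evidence supports it."""
    answer = generate_answer(question)
    evidence = retrieve_evidence(question)
    supported = any(answer in doc for doc in evidence)
    return answer if supported else "I could not verify an answer."
```

When no evidence corroborates the candidate answer, the loop refuses rather than guesses, which is the behavior change Huang is proposing.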

While several AI models already offer features that cite sources for their outputs, including Microsoft's Copilot, Google's Gemini, OpenAI's ChatGPT, and Anthropic's Claude 3, a complete resolution of the hallucination problem could have profound implications for industries such as finance and cryptocurrency.

Currently, the use of generative AI systems in contexts where accuracy is essential requires caution. For instance, ChatGPT's user interface warns users about potential errors and advises cross-checking crucial information.

In finance and cryptocurrency, accuracy is paramount, as it directly affects profits and losses. Consequently, the current reliance on generative AI systems is limited in these areas.

However, Huang's proposed solution could change this dynamic. If the issue of AI hallucinations were fully resolved, these AI models could potentially operate independently, executing trades and making financial decisions without human intervention.

"Solving the problem of AI hallucinations would open the door to fully automated trading systems," Huang said. "The implications for the finance and cryptocurrency industries would be immense."
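One way such a trading system could remain safe before hallucinations are fully solved is to gate execution on verification: act autonomously only when independent checks agree with the model's signal, and escalate to a human otherwise. This is a hypothetical design sketch, not a description of any existing product; the threshold and check mechanism are assumptions for illustration.

```python
def verification_confidence(signal: str, checks: list[str]) -> float:
    """Fraction of independent verification checks that agree with the signal."""
    if not checks:
        return 0.0
    return sum(1 for c in checks if c == signal) / len(checks)

def execute_trade(signal: str, checks: list[str], threshold: float = 0.9) -> str:
    """Act autonomously only when verification confidence clears the bar."""
    if verification_confidence(signal, checks) >= threshold:
        return f"EXECUTE {signal}"
    return "ESCALATE to human review"
```

A high threshold keeps the human in the loop whenever the checks disagree, which is the current state of practice the article describes; a fully solved hallucination problem would justify lowering it.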

Huang also emphasized the importance of benchmarking in the development of AI. He suggested that defining AGI through specific tests, where software outperforms humans by a substantial margin, could accelerate its realization.

"We need to set clear benchmarks for AGI," Huang noted. "By establishing measurable goals, we can track our progress and ensure that we are on the right path."
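The benchmark-driven definition of AGI that Huang suggests can be expressed as a simple pass/fail check: the software counts as meeting the bar only if it outperforms the human baseline on every test by the required margin. The margin value here is an illustrative assumption, not a figure from the article.

```python
def meets_agi_benchmark(model_scores: list[float],
                        human_baselines: list[float],
                        margin: float = 0.08) -> bool:
    """True only if the model beats every human baseline by the given margin."""
    return all(m >= h + margin
               for m, h in zip(model_scores, human_baselines))
```

Defining AGI this way makes progress measurable: each test either clears the margin or it does not, so the remaining gap is always explicit.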

The potential for human-level AI within the next five years represents a significant milestone in the evolution of technology. Huang's proposed solution to AI hallucinations provides a promising path forward, addressing a major obstacle that has hindered the development of truly intelligent systems.

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no responsibility for any investments made based on the information in this article. Cryptocurrencies are highly volatile; invest with caution and only after thorough research.

If you believe that the content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will delete it promptly.
