Artificially intelligent chatbots are making more mistakes over time as newer models are released, a recent research study revealed
Oct 05, 2024 at 06:04 am
Artificially intelligent chatbots are making more mistakes over time, according to a recent study published in the scientific journal Nature under the title "Larger and more instructable language models become less reliable."
The study, conducted by a team of researchers from the Universitat Politècnica de València and the University of Cambridge, evaluated the performance of several chatbot models on a range of natural language processing tasks. It found that the newer, larger models performed worse on many of the tasks than the older, smaller models.
One of the study's authors, Lexin Zhou, theorized that this decline in performance stems from the way AI models are optimized. Because these models are designed to always provide believable answers, he explained, they tend to serve up seemingly correct responses to the end user regardless of whether those responses are actually accurate.
"The models are getting better at generating hallucinated text that sounds plausible and consistent with the context, but they are not necessarily getting better at generating true and factual text," Zhou said in a statement.
These AI hallucinations are self-reinforcing and tend to compound over time, a tendency that is further exacerbated by the common practice of training newer large language models on text generated by older ones, which can lead to a degradation known as "model collapse."
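To see why training on model output can erode quality, consider a toy simulation (an illustrative assumption, not a method from the study): a "model" that simply fits a Gaussian to its training data, where each generation is trained only on samples drawn from the previous generation's fit. With small training sets, the fitted spread tends to drift downward over the generations, a miniature version of model collapse.

```python
import random
import statistics

# Toy model-collapse simulation (illustrative, not the study's method):
# each "generation" fits a Gaussian to data sampled entirely from the
# previous generation's fitted model.
random.seed(7)

SAMPLES = 50  # small training sets make the effect visible sooner

# Generation 0 trains on "real" data: mean 0, standard deviation 1.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLES)]

for generation in range(1, 21):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # maximum-likelihood fit, biased low
    # The next generation sees only synthetic output of the current fit.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLES)]
    if generation % 5 == 0:
        print(f"generation {generation:2d}: stdev of fit = {sigma:.3f}")
```

Real language models are vastly more complex, but the same feedback loop applies: statistical noise and bias in each generation's fit accumulate when the only training signal is the previous generation's output.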
"The worrying part is that these hallucinations are often difficult to detect, even for humans," Zhou added. "This could lead to people relying on and trusting the output of these models too much, which could have dangerous consequences."
Mathieu Roy, an editor and writer who covers artificial intelligence for Interesting Engineering, cautioned users not to rely too heavily on these tools and to always check AI-generated search results for inconsistencies, especially if the information being presented seems surprising or too good to be true.
"To make matters worse, there’s often no way to check the information except by asking the chatbot itself," Roy asserted in an article about the study's findings.
Related: OpenAI raises an additional $6.6B at a $157B valuation
The stubborn problem of AI hallucinations
The issue of AI hallucinations has been a persistent problem in the development of large language models, despite efforts by researchers and industry leaders to mitigate this tendency.
In February 2024, Google's Gemini artificial intelligence platform drew ridicule after it began producing historically inaccurate images. Among other things, the AI depicted people of color in Nazi-era German military uniforms and generated wildly inaccurate images of well-known historical figures.
Unfortunately, incidents like this are far too common with the current iteration of artificial intelligence and large language models. Several industry executives, including Nvidia CEO Jensen Huang, have proposed possible solutions to this problem, such as forcing AI models to conduct research and provide sources for every single answer that is given to a user.
However, these measures are already featured in the most popular AI and large language models, yet the problem of AI hallucinations still persists.
More recently, in September, HyperWrite AI CEO Matt Shumer announced that the company's new 70B model uses a method called "Reflection-Tuning," which purportedly gives the AI bot a way of learning from its own mistakes by analyzing its outputs and adjusting its responses over time.
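Shumer has not published the implementation details behind Reflection-Tuning, but the general generate-critique-revise pattern the description suggests can be sketched as follows. Everything here is an assumption for illustration: `ask_model` is a hypothetical stand-in for whatever chat-completion API a developer is using.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call."""
    raise NotImplementedError("connect this to your LLM endpoint")

def reflect_and_answer(question: str, rounds: int = 2) -> str:
    # First pass: draft an answer directly.
    answer = ask_model(f"Answer the question:\n{question}")
    for _ in range(rounds):
        # Ask the model to critique its own draft for errors.
        critique = ask_model(
            f"Question: {question}\nDraft answer: {answer}\n"
            "List any factual errors or unsupported claims in the draft."
        )
        # Revise the draft using that critique.
        answer = ask_model(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Critique: {critique}\nRewrite the answer, fixing the issues."
        )
    return answer
```

Whether such a loop actually reduces hallucinations depends on the model being a better critic than it is a generator, an assumption the Nature study's findings suggest is not guaranteed to hold.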