Cryptocurrency Prices

bitcoin: $98836.469829 USD (0.25%)
ethereum: $3473.377251 USD (-0.75%)
tether: $0.999134 USD (-0.04%)
xrp: $2.293843 USD (-1.57%)
bnb: $701.956671 USD (0.88%)
solana: $198.450307 USD (1.17%)
dogecoin: $0.331507 USD (-1.22%)
usd-coin: $1.000162 USD (0.00%)
cardano: $0.913368 USD (-2.55%)
tron: $0.257513 USD (-0.12%)
avalanche: $40.384597 USD (-2.15%)
chainlink: $24.548243 USD (-0.91%)
toncoin: $5.985215 USD (3.84%)
shiba-inu: $0.000023 USD (-0.96%)
sui: $4.530543 USD (-2.52%)

Cryptocurrency News Articles

Past Perspectives Provide Future Glimmers: Novel Technique Enhances ChatGPT's Prediction Accuracy

Apr 19, 2024 at 01:15 am

Researchers have found that prompting ChatGPT to tell a story set in the future, rather than asking for a direct prediction, yields more accurate results. Reddit users experimenting with the technique elicited predictions of an interest rate hike in June and a financial crisis in 2030. However, ChatGPT's limited training data and apparent restrictions imposed by OpenAI raise doubts about the method's reliability.

Predicting the Future with a Glimpse into the Past

A novel prompting technique has emerged to elicit predictions from ChatGPT, a language model renowned for its aversion to forecasting. Researchers have discovered that prompting ChatGPT to narrate future events, as if reflecting upon them from a bygone era, yields more accurate predictions.

To validate this technique, researchers evaluated 100 prompts, contrasting direct predictions (e.g., identifying the Oscar winner for Best Actor in 2022) with narrative prompts (e.g., asking ChatGPT to recount a family watching the 2022 Oscars and describe the moment the Best Actor winner is announced). The narrative prompts produced more accurate results.

Similarly, to forecast interest rate movements, researchers instructed ChatGPT to imagine Fed Chair Jerome Powell reflecting on past events. Redditors have experimented with this technique, suggesting a possible interest rate hike in June and a financial crisis in 2030.

Theoretically, prompting ChatGPT to create a Cointelegraph news article set in 2025, recounting Bitcoin's significant price fluctuations, would result in a more accurate price forecast than a direct prediction query.
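As a sketch, the contrast between the two prompt styles can be expressed as simple templates. The wording below is illustrative, not the researchers' exact prompts:

```python
def direct_prompt(event: str) -> str:
    """A conventional forecasting prompt, which ChatGPT often declines to answer."""
    return f"Predict the following: {event}"

def narrative_prompt(event: str, vantage_year: int) -> str:
    """Ask the model to narrate the event as if recalling it from a later date.

    The framing nudges the model to 'report' an outcome as part of a story
    rather than issue a forecast.
    """
    return (
        f"It is the year {vantage_year}. Write a short scene in which a family "
        f"looks back on and vividly recounts this event: {event}. "
        "Include the specific outcome as part of the story."
    )

# Example: the Oscars comparison described above (hypothetical wording).
print(direct_prompt("the winner of Best Actor at the 2022 Oscars"))
print(narrative_prompt("the winner of Best Actor at the 2022 Oscars", 2023))
```

Either string would then be sent to the model as an ordinary chat message; only the framing differs between the two conditions.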

However, the research has potential limitations. The researchers chose the 2022 Oscars as a test event on the assumption that, with a training cutoff of September 2021, ChatGPT could not have known the outcome. Yet ChatGPT has been known to surface information from beyond its stated cutoff, which would undermine the comparison.

Additionally, OpenAI appears to have intentionally hindered ChatGPT's predictive capabilities, suggesting that this technique may be a temporary workaround rather than a reliable solution.

Related Findings in the Realm of AI

A parallel study revealed that the best approach to solving 50 math problems with LLama2, a cousin of ChatGPT, was to present it as a mission to guide the Enterprise spaceship through turbulence in Star Trek.

However, this method proved inconsistent. Solving 100 math problems required telling the AI that the President's advisor would perish if it failed to provide correct answers.

Boston Dynamics' Uncanny Atlas Robot

Boston Dynamics has unveiled its latest Atlas robot, showcasing remarkable agility reminiscent of the possessed child in The Exorcist.

"It's going to be capable of a set of motions that people aren't," said CEO Robert Playter in an interview with TechCrunch. "There will be very practical uses for that."

This iteration of Atlas is more compact, fully electric, and devoid of hydraulics. Hyundai plans to test Atlas as a robotic worker in its factories early next year.

Humane's AI Pin Faces Scathing Reviews

Wearable AI devices, exemplified by the Humane AI pin, have generated buzz but struggled to demonstrate their worth.

The Humane AI pin, worn on the chest, interacts with voice commands and projects text onto the user's hand. However, tech reviewer Marques Brownlee condemned it as "the worst product I've ever reviewed," citing frequent errors, poor interface, limited battery life, and sluggishness compared to Google.

Other reviewers echo Brownlee's dissatisfaction. Wired gave it 4 out of 10, criticizing its slow performance, poor camera, daylight-impaired projector, and overheating issues. However, it praised its real-time translation and phone call capabilities.

The Verge acknowledged the potential of the concept but deemed the actual device "thoroughly unfinished and... broken in so many unacceptable ways."

Another AI wearable, The Rabbit r1, aims to replace multiple phone apps with an AI assistant. However, reviewers have questioned its advantages over smartphones.

"The voice control interface that does away with apps completely is a good starting point, but again, that's something my Pixel 8 could feasibly do in the future," concluded TechRadar's preview of the device.

AI Perpetuates Holocaust Survivors' Legacy

The Sydney Jewish Museum has unveiled an AI-powered interactive exhibition that enables visitors to engage with Holocaust survivors and receive real-time responses.

Before his passing in 2021, death camp survivor Eddie Jaku spent days answering over 1,000 questions about his experiences before a 23-camera rig. The system transforms visitor inquiries into search terms, matching them with appropriate answers from Jaku's recorded footage, creating a conversational experience.
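The museum has not published its matching algorithm, but the behaviour described above, turning a visitor's question into search terms and retrieving the best pre-recorded answer, resembles simple keyword retrieval. Here is a bare-bones sketch with hypothetical clip data; a real system would likely use speech-to-text plus semantic search:

```python
def match_answer(question: str, recorded_answers: dict[str, str]) -> str:
    """Pick the pre-recorded clip whose indexed keywords best overlap the question.

    recorded_answers maps a space-separated keyword index to an answer clip ID.
    This is a keyword-overlap sketch, not the museum's actual implementation.
    """
    question_terms = set(question.lower().split())
    best_clip, best_score = None, 0
    for keywords, clip in recorded_answers.items():
        score = len(question_terms & set(keywords.lower().split()))
        if score > best_score:
            best_clip, best_score = clip, score
    return best_clip

# Hypothetical index of recorded clips.
clips = {
    "camp arrival family": "clip_017",
    "liberation freedom end war": "clip_203",
    "message hope young people": "clip_981",
}
print(match_answer("What is your message to young people today?", clips))  # → clip_981
```

With over 1,000 recorded answers, even this crude overlap scoring can route most visitor questions to a plausible clip, which is what makes the experience feel conversational.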

Amidst rising antisemitism, this AI application serves as a powerful tool to preserve the first-hand accounts of Holocaust survivors for future generations.

AI-Generated Content: A Potential Pandora's Box

Approximately 10% of Google search results now lead to AI-generated spam content. Spammers have long exploited SEO to promote websites filled with low-quality articles, but generative AI has exacerbated the problem.

Besides rendering Google search unreliable, there are concerns that if AI-generated content becomes the dominant form of online material, we may face "model collapse," where AIs trained on subpar AI content produce even lower-quality results.

A related phenomenon, known as "knowledge collapse," has been identified in humans, according to a recent Cornell research paper. The author, Andrew J. Peterson, argues that AIs gravitate toward "mid-curve" ideas, neglecting less common or eccentric perspectives:

"While large language models are trained on vast amounts of diverse data, they naturally generate output towards the 'center of the distribution.'"

Peterson suggests that the diversity of human thought could diminish over time as AIs homogenize ideas. The paper advocates for subsidies to protect the diversity of knowledge, similar to those supporting less popular academic and artistic endeavors.
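Model collapse can be illustrated with a toy simulation: each "generation" is trained on samples drawn from the previous generation, and because generation favours the centre of the learned distribution, the spread (diversity) shrinks with every round. This is a statistical caricature under an assumed shrink factor, not a claim about any real training pipeline:

```python
import random
import statistics

def next_generation(samples: list[float], shrink: float = 0.8) -> list[float]:
    """Resample around the mean with reduced spread, mimicking a model that
    favours the centre of its training distribution (shrink < 1 is assumed)."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    return [random.gauss(mean, stdev * shrink) for _ in samples]

random.seed(0)
population = [random.gauss(0, 1) for _ in range(1000)]  # diverse "human" data
for generation in range(10):
    population = next_generation(population)

# After repeated self-training, the spread of ideas has collapsed well below 1.
print(round(statistics.pstdev(population), 3))
```

Each pass multiplies the standard deviation by roughly the shrink factor, so after ten generations the distribution has narrowed to a fraction of its original diversity, which is the dynamic both "model collapse" and "knowledge collapse" describe.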

Users Should Demand Paid AI Services

Google has been pitching its Gemini 1.5 model to businesses, emphasizing safety features and ideological neutrality that the consumer version lacked.

While the consumer version's image generator was suspended after it produced ahistorical images, including racially diverse Nazi-era soldiers, the enterprise version reportedly remained unaffected.

This raises concerns that an ad-driven "free" model for AI, akin to the disastrous model for the web, could lead to users becoming the product, bombarded with advertisements as services deteriorate.

Non-Coders and AI Collaboration: A Complex Journey

Author and futurist Daniel Jeffries attempted to create a complex app with AI assistance, despite his limited coding skills. He encountered challenges and frustration but ultimately succeeded.

However, Jeffries concluded that AI will not displace coders. Instead, skilled coders with the ability to clearly articulate their requirements will be in even higher demand.

AI News Roundup

  • Token holders have approved the merger of Fetch.ai, SingularityNET, and Ocean Protocol, forming the Artificial Superintelligence Alliance.
  • Google DeepMind CEO Demis Hassabis remains tight-lipped about the rumored $100 billion supercomputer project codenamed "Stargate," but has confirmed significant investments in AI.
  • Baidu's Chinese ChatGPT counterpart, Ernie, has doubled its user base to 200 million since October.
  • Researchers found that AI image generators could create election disinformation four out of 10 times, highlighting the need for stronger safety measures or improved watermarking systems.
  • Instagram is recruiting influencers for a program where their AI-generated avatars can interact with fans.
  • Guardian columnist Alex Hern theorizes that ChatGPT's frequent use of the word "delve" may be a side effect of the input provided by low-cost human workers in Nigeria, where the term is common.
  • OpenAI has released an enhanced version of GPT-4 Turbo through ChatGPT Plus, offering improved problem-solving capabilities, better conversation flow, and reduced verbosity. It also includes a 50% discount for off-peak batch processing tasks.

Disclaimer: info@kdj.com

The information provided is not trading advice. kdj.com assumes no responsibility for any investments made based on the information in this article. Cryptocurrencies are highly volatile; invest cautiously and only after thorough research.

If you believe that the content used on this website infringes your copyright, please contact us immediately (info@kdj.com) and we will delete it promptly.
