Futurology Today
Lugh to Futurology · English · 6 months ago

Multiple LLMs voting together on content validation catch each other’s mistakes to achieve 95.6% accuracy.

arxiv.org

Probabilistic Consensus through Ensemble Validation: A Framework for LLM Reliability
arxiv.org
Large Language Models (LLMs) have shown significant advances in text generation but often lack the reliability needed for autonomous deployment in high-stakes domains like healthcare, law, and finance. Existing approaches rely on external knowledge or human oversight, limiting scalability. We introduce a novel framework that repurposes ensemble methods for content validation through model consensus. In tests across 78 complex cases requiring factual accuracy and causal consistency, our framework improved precision from 73.1% to 93.9% with two models (95% CI: 83.5%–97.9%) and to 95.6% with three models (95% CI: 85.2%–98.8%). Statistical analysis indicates strong inter-model agreement (κ > 0.76) while preserving sufficient independence to catch errors through disagreement. We outline a clear pathway to further enhance precision with additional validators and refinements. Although the current approach is constrained by multiple-choice format requirements and processing latency, it offers immediate value for enabling reliable autonomous AI systems in critical applications.
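The consensus mechanism the abstract describes can be sketched as majority voting across independent validators, escalating to review when no majority exists. This is a minimal illustration of the idea, not the paper's implementation; the lambda "models" below are hypothetical stand-ins for real LLM calls:

```python
from collections import Counter

def majority_vote(answers):
    """Return the answer chosen by a strict majority of validators,
    or None when no answer reaches a majority (flag for review)."""
    answer, votes = Counter(answers).most_common(1)[0]
    if votes > len(answers) / 2:
        return answer
    return None  # disagreement: escalate rather than guess

def validate(claim, validators):
    """Ask each validator to label a claim, then take the consensus.
    A validator is any callable mapping claim -> label."""
    answers = [v(claim) for v in validators]
    return majority_vote(answers)

# Hypothetical stand-ins for three independent LLM validators.
model_a = lambda claim: "valid"
model_b = lambda claim: "valid"
model_c = lambda claim: "invalid"

print(validate("Aspirin irreversibly inhibits COX-1.",
               [model_a, model_b, model_c]))  # prints: valid
```

The point of requiring a strict majority is the error-catching behavior the abstract mentions: correlated agreement boosts precision, while genuine disagreement surfaces cases that a single model would have silently gotten wrong.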
  • Lugh (OP) · English · 6 months ago

    Large language models surpass human experts in predicting neuroscience results

    A small study found ChatGPT outdid human physicians when assessing medical case histories, even when those doctors were using a chatbot.

    • massive_bereavement@fedia.io · 6 months ago

      Are you kidding me? How did the NYT reach those conclusions when the chair-flipping conclusion of said study quite clearly states [sic]: “The use of an LLM did not significantly enhance diagnostic reasoning performance compared with the availability of only conventional resources.”

      https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2825395

      I mean, c’mon!

      On the Nature one:

      “we constructed a new forward-looking (Fig. 2) benchmark, BrainBench.”

      and

      “Instead, our analyses suggested that LLMs discovered the fundamental patterns that underlie neuroscience studies, which enabled LLMs to predict the outcomes of studies that were novel to them.”

      and

      “We found that LLMs outperform human experts on BrainBench”

      Is in reality saying: we made this benchmark, and LLMs know how to cheat around it better than experts do. Nothing more, nothing less.
