Large language models (LLMs) can synthesize vast amounts of information. Luo et al. show that LLMs—especially BrainGPT, an LLM the authors tuned on the neuroscience literature—outperform experts in predicting neuroscience results and could assist scientists in making future discoveries.
I don't think this is really about the predictions themselves; they are just a means to benchmark the AI. You can either ask a model questions to probe its knowledge, or test whether it can look forward, reason, and jump to a conclusion, in other words, predict something. The authors tested how well it performed at the latter, not because these predictions are useful in themselves, but because they measure the AI's capabilities at tasks of that kind.