In a landmark study, OpenAI researchers argue that large language models will inevitably produce plausible but false outputs (hallucinations), even when trained on perfect data, due to fundamental statistical and computational limits.
I think it’s saying that LLMs won’t crack public-key cryptography no matter how many times you ask them to please do it; they’ll sooner make something up instead.