[R] Interested in recent research into recall vs recognition in LLMs
I've casually seen LLMs correctly verify exact quotations that they either couldn't or wouldn't quote directly for me. I'm aware that they're trained to avoid quoting potentially copyrighted content, and of the implications of that, but it made me wonder a few things:

1. Can LLMs verify knowledge more (or less) accurately than they can recall knowledge?
1b. Can LLMs verify more (or less) knowledge accurately than they can recall accurately?
2. What research exists into LLM accuracy in recalling facts ...