We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision.

https://arxiv.org/abs/2311.07590

  • I fucking love when my students bring “chat” in as their tutor and show me the logic they followed… Bro, ChatGPT knows the correct answer, but you asked a bad question and it gave you its best guess dressed up as a factual statement.

    To be fair, I spend a lot of time teaching my students how to use LLMs to get the best results while avoiding “leading the witness.”
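    A minimal sketch of the contrast, assuming the official OpenAI Python SDK; the model name, the prompts, and the bogus O(n) claim are all illustrative, not from this thread:

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # A leading prompt smuggles a (false) conclusion into the question,
    # so the model tends to confirm it instead of checking it.
    leading = "Why does sorting a list in Python take O(n) time?"

    # A neutral prompt asks for the fact without presupposing it.
    neutral = "What is the time complexity of sorting a list in Python?"

    for prompt in (leading, neutral):
        resp = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(prompt, "->", resp.choices[0].message.content[:120])
    ```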

    • merc@sh.itjust.works · 11 months ago

      > ChatGPT knows the correct answer

      It doesn’t “know” the correct answer. It may have been trained on text that contains the answer, and you may be able to coax it into generating a version of that text. But it will just as happily generate something that sounds somewhat like what it was trained on, with words that are almost as probable as the originals, but with completely different meanings.
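
      A toy illustration of that last point (made-up numbers, no real model involved): sampling the next token from a probability distribution, where a token almost as probable as the right one carries a completely different meaning.

      ```python
      import numpy as np

      # Hypothetical next-token logits after "Water boils at 100 degrees"
      tokens = ["Celsius", "Fahrenheit", "Kelvin"]
      logits = np.array([3.0, 2.6, 0.5])  # "Fahrenheit" is nearly as likely

      def sample(logits, temperature=1.0, rng=None):
          """Softmax the logits and draw one token, as LLM decoders do."""
          rng = rng or np.random.default_rng()
          probs = np.exp(logits / temperature)
          probs /= probs.sum()
          return rng.choice(len(logits), p=probs), probs

      rng = np.random.default_rng(0)
      idx, probs = sample(logits, rng=rng)
      print(dict(zip(tokens, probs.round(2))))  # ~{'Celsius': 0.57, 'Fahrenheit': 0.38, 'Kelvin': 0.05}
      print("sampled:", tokens[idx])  # sometimes "Fahrenheit": fluent, plausible, wrong
      ```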