
AI alignment

How to train your large language model

A new technique is speeding up the process
IT IS NO secret that building a large language model (LLM) requires vast amounts of data. In conventional training, an LLM is fed mountains of text and encouraged to guess each word before it appears. With each prediction, the LLM makes small adjustments to improve its chances of guessing right. The end result is something that has a certain statistical “understanding” of what is proper language and what isn’t.
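
In code, that training loop amounts to little more than the sketch below, written in PyTorch. The toy model, random “corpus” and hyper-parameters are illustrative stand-ins, not any lab’s actual setup; the point is only the shape of the procedure: predict the next word, measure the error, adjust slightly, repeat.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, embed_dim = 100, 32

# Toy "LLM": an embedding layer followed by a linear layer that scores
# every word in the vocabulary as a candidate next word.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Pretend corpus: a batch of token-ID sequences standing in for real text.
tokens = torch.randint(0, vocab_size, (8, 33))
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # guess each next word

for step in range(100):
    logits = model(inputs)  # (batch, seq, vocab) scores for the next word
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()    # compute how each weight contributed to the error...
    optimizer.step()   # ...and nudge it slightly toward a better guess
```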

But an LLM that has only undergone this so-called “pretraining” is not yet particularly useful. When asked for a joke to cheer your correspondent up, for instance, the pretrained model GPT-2 just repeated the question back three times. When asked who the American president was, it responded: “The answer is no. The president is not the president.” Clearly, teaching an LLM to do what humans want requires something more.
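
Readers can try this for themselves: the public “gpt2” checkpoint, which is pretrained only, can be queried with a few lines of Python, assuming the Hugging Face transformers library is installed. The prompt wording here is our own, not the exact question put to the model.

```python
from transformers import pipeline

# Load the original pretrained-only GPT-2 checkpoint.
generator = pipeline("text-generation", model="gpt2")

# Sample a continuation; expect rambling or repetition, not a helpful answer.
out = generator("Tell me a joke to cheer me up.", max_new_tokens=40)
print(out[0]["generated_text"])
```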
