• Transformer models are programmable: you supply them with enough context (their working memory), possibly pulled from a data store (a kind of virtual memory), and they perform some processing and return a response; see the sketch after this list. For more on prompt engineering, see this

• Transformers are so effective because learning from human language is a cheat. Human language encodes human thinking, and transformers learn to exploit this.
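
A minimal sketch of this "context as memory" pattern, assuming a toy in-memory document store and a placeholder `call_llm` function (both hypothetical, for illustration): the surrounding program retrieves relevant text and splices it into the prompt, effectively programming the model through its context window.

```python
# Sketch of retrieval-augmented prompting: the context window acts as the
# model's working memory, and a data store acts as virtual memory that the
# program pages text in from. DOCS and call_llm are hypothetical stand-ins,
# not a real API.

DOCS = {
    "transformers": "Transformers process a context window of tokens and attend over it.",
    "prompting": "Prompt engineering shapes model behavior via the input text.",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Naive keyword retrieval: rank docs by word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(
        DOCS.values(),
        key=lambda doc: len(words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Splice retrieved context (memory) into the prompt (the 'program')."""
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real model call here.
    return f"[model response to a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    print(call_llm(build_prompt("How does prompt engineering work?")))
```

In a real system the keyword scoring would be replaced by embedding similarity search, but the control flow is the same: retrieve, assemble context, call the model.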
