DeepMind’s new 280 billion-parameter language model kicks GPT-3’s butt in accuracy
Move over GPT-3, there’s a scrappy new contender for the crown of world’s greatest language model, and it’s from our old pals over at DeepMind. Up front: The Alphabet-owned UK outfit that settled the question of whether humans or machines are better at Go once and for all – the machines won – has now set its sights on the world of large language models (LLMs). To that end, today it announced “Gopher,” a language model that’s about 60% larger, parameter-wise, than GPT-3 and a little over a quarter of the size of Google’s massive trillion-parameter LLM. Per a press…

This story continues at The Next Web