‘No one knows what makes humans so much more efficient’: small language models based on Homo Sapiens could help explain how we learn and improve AI efficiency — for better or for worse


Tech companies are shifting focus from building the largest language models (LLMs) to developing smaller ones (SLMs) that can match or even outperform them. 

Meta’s Llama 3 (400 billion parameters), OpenAI’s GPT-3.5 (175 billion parameters), and GPT-4 (an estimated 1.8 trillion parameters) are among the best-known large models. By contrast, Microsoft’s Phi-3 family ranges from 3.8 billion to 14 billion parameters, and Apple Intelligence runs on “only” around 3 billion.

It may seem like a downgrade to have models with far fewer parameters, but the appeal of SLMs is understandable. They consume less energy, can run locally on devices like smartphones and laptops, and are a good fit for smaller businesses and labs that cannot afford expensive hardware setups.


David vs. Goliath

As IEEE Spectrum reports, “The rise of SLMs comes at a time when the performance gap between LLMs is quickly narrowing, and tech companies look to deviate from standard scaling laws and explore other avenues for performance upgrades.”

In a recent round of tests conducted by Microsoft, Phi-3-mini, the tech giant’s smallest model at 3.8 billion parameters, rivaled Mixtral 8x7B and GPT-3.5 in some areas, despite being small enough to fit on a phone. Its success was down to the dataset used for training, which was composed of “heavily filtered publicly available web data and synthetic data.”

While SLMs can achieve a level of language understanding and reasoning comparable to much larger models, their size still limits them on certain tasks: they simply cannot store as much factual knowledge. One way to address this is to pair the SLM with an online search engine, so the model answers from freshly retrieved context rather than from facts memorized in its weights.
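The retrieval idea is straightforward in outline: fetch relevant snippets for the user’s question and prepend them to the prompt before handing it to the small model. The sketch below illustrates the prompt-assembly step only; the `augment_prompt` helper and the hard-coded result list are illustrative stand-ins (a real system would call an actual search API and then pass the prompt to the model):

```python
def augment_prompt(question, search_results, max_snippets=3):
    """Prepend retrieved snippets to the question so a small model
    can answer from fresh context instead of stored knowledge."""
    context = "\n".join(f"- {s}" for s in search_results[:max_snippets])
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Stand-in for real search-engine results (hypothetical data).
results = [
    "Phi-3-mini has 3.8 billion parameters.",
    "Phi-3-mini was trained on filtered web and synthetic data.",
]
prompt = augment_prompt("How many parameters does Phi-3-mini have?", results)
print(prompt)
```

The point of the pattern is that the factual burden shifts to the retrieval step, so the model itself only needs enough language competence to read the context and compose an answer.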


IEEE Spectrum’s Shubham Agarwal compares SLMs with how children learn language and says, “By the time children turn 13, they’re exposed to about 100 million words and are better than chatbots at language, with access to only 0.01 percent of the data.” Although, as Agarwal points out, “No one knows what makes humans so much more efficient,” Alex Warstadt, a computer science researcher at ETH Zurich, suggests “reverse engineering efficient humanlike learning at small scales could lead to huge improvements when scaled up to LLM scales.”





