New Delhi, December 17: Microsoft has released its newest compact "small language model", Phi-2, which performs on par with or better than certain larger open-source Llama 2 models with fewer than 13 billion parameters. Over the past few months, the Machine Learning Foundations team at Microsoft Research has released a suite of small language models (SLMs) called "Phi" that achieve remarkable performance on a variety of benchmarks.

The first model, the 1.3 billion-parameter Phi-1, achieved state-of-the-art performance on Python coding among existing SLMs (specifically on the HumanEval and MBPP benchmarks).

"We are now releasing Phi-2, a 2.7 billion-parameter language model that demonstrates outstanding reasoning and language understanding capabilities, showcasing state-of-the-art performance among base language models with fewer than 13 billion parameters," the company said in an update.

Phi-2 is an ideal playground for researchers, including for exploration of mechanistic interpretability, safety improvements, and fine-tuning experimentation on a variety of tasks. "We have made Phi-2 available in the Azure AI Studio model catalog to foster research and development on language models," said Microsoft.

The massive increase in the size of language models to hundreds of billions of parameters has unlocked a host of emergent capabilities that have redefined the landscape of natural language processing.

However, a question remains whether such emergent abilities can be achieved at a smaller scale using strategic choices for training, e.g., data selection. “Our line of work with the Phi models aims to answer this question by training SLMs that achieve performance on par with models of much higher scale (yet still far from the frontier models),” said Microsoft.
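The "strategic data selection" Microsoft alludes to is typically implemented as a filtering pass that scores candidate training samples and keeps only high-quality ones. The sketch below is a toy illustration of that general idea, assuming simple heuristic signals (sample length, vocabulary richness); the function names, scoring rules, and thresholds are hypothetical and are not Microsoft's actual Phi training pipeline.

```python
# Toy sketch of heuristic training-data selection for an SLM.
# All scoring rules and thresholds here are illustrative assumptions,
# not the actual filtering used to train the Phi models.

def quality_score(text: str) -> float:
    """Score a candidate training sample with simple heuristics."""
    words = text.split()
    if not words:
        return 0.0
    avg_word_len = sum(len(w) for w in words) / len(words)
    # Gate on a reasonable sample length (assumed bounds).
    length_ok = 1.0 if 20 <= len(words) <= 2000 else 0.0
    # Reward vocabulary richness (unique-word ratio).
    richness = len(set(words)) / len(words)
    return length_ok * (0.5 * min(avg_word_len / 6.0, 1.0) + 0.5 * richness)

def select_training_data(samples: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only samples whose heuristic score clears the threshold."""
    return [s for s in samples if quality_score(s) >= threshold]
```

In practice, published data-curation pipelines replace hand-written heuristics like these with learned classifiers or model-based quality judgments, but the selection loop has the same shape: score every sample, keep the ones above a cutoff.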

The company has also performed extensive testing on commonly used prompts from the research community. “We observed a behaviour in accordance with the expectation we had given the benchmark results,” said the tech giant.

(The above story first appeared on LatestLY on Dec 17, 2023 02:29 PM IST. For more news and updates on politics, world, sports, entertainment and lifestyle, log on to our website latestly.com).