Mark Zuckerberg's Meta has released new quantized versions of its Llama 3.2 1B and 3B models. The company says the new Meta Llama 3.2 1B and Meta Llama 3.2 3B models deliver up to a 2-4x increase in inference speed while reducing model size by an average of 56% and memory footprint by an average of 41%. Built using Quantization-Aware Training with LoRA adapters, the new Meta AI models balance performance, accuracy and portability, making them suitable for resource-constrained devices. Developers can now download the latest models from Meta and Hugging Face.
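For developers who want to try the models, below is a minimal sketch of pulling a Llama 3.2 1B checkpoint from Hugging Face with the transformers library. The repository id and prompt are illustrative assumptions (access to Meta's Llama repositories is gated behind a license acceptance), and the quantized on-device checkpoints Meta announced may ship in formats intended for other runtimes rather than this API.

```python
# Hypothetical sketch: loading a Llama 3.2 1B checkpoint from Hugging Face
# with the transformers library. The repo id below is an assumption for
# illustration; the quantized (QAT + LoRA) checkpoints may be packaged for
# on-device runtimes and might not load through this path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # assumed repo id; gated, requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion to confirm the model loads and runs.
inputs = tokenizer("Quantization-aware training lets small models", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```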
Meta Launches Meta Llama 3.2 1B, Meta Llama 3.2 3B Quantized AI Models
We want to make it easier for more people to build with Llama — so today we’re releasing new quantized versions of Llama 3.2 1B & 3B that deliver up to 2-4x increases in inference speed and, on average, 56% reduction in model size, and 41% reduction in memory footprint.
Details… pic.twitter.com/GWETOfhCTD
— AI at Meta (@AIatMeta) October 24, 2024
(SocialLY brings you all the latest breaking news, viral trends and information from the social media world, including Twitter (X), Instagram and YouTube. The above post is embedded directly from the user's social media account, and LatestLY Staff may not have modified or edited the content body. The views and facts appearing in the social media post do not reflect the opinions of LatestLY, and LatestLY does not assume any responsibility or liability for the same.)