Meta
"Meta partners with Reuters while NYT litigates, showing data access divide: only AI companies with 'deep pockets' can afford premium content deals."
"Quantised Llama 3.2 achieves 56% size reduction using QLoRA and SpinQuant, with 4-bit weights and 8-bit activations for mobile deployment."
"SAM 2 enables real-time object segmentation, processing 60M+ polygons across 350M images for manufacturing QA and scientific research."
Meta's Llama model uses split inference: the first transformer layer runs on user devices and the remaining 31 run in the cloud, preserving user privacy while avoiding the need to quantise the model for on-device use.
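A hedged sketch of such a device/cloud split, assuming a standard decoder stack; the class names and toy stand-in modules below are illustrative, not Meta's code. The point is that only hidden states, never raw tokens, cross the network, and the bulk of the model stays server-side at full precision:

```python
# Hypothetical split-inference sketch: embeddings + first layer on-device,
# remaining 31 layers plus the output head in the cloud.
import torch
import torch.nn as nn

class DeviceSide(nn.Module):
    """On-device part: token embedding and the first transformer layer."""
    def __init__(self, embed, first_layer):
        super().__init__()
        self.embed = embed
        self.first_layer = first_layer

    def forward(self, input_ids):
        hidden = self.embed(input_ids)
        return self.first_layer(hidden)   # only hidden states leave the device

class CloudSide(nn.Module):
    """Cloud part: the remaining transformer layers, final norm, LM head."""
    def __init__(self, layers, norm, lm_head):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.norm = norm
        self.lm_head = lm_head

    def forward(self, hidden):
        for layer in self.layers:
            hidden = layer(hidden)
        return self.lm_head(self.norm(hidden))

# Toy stand-ins (a real deployment would load Llama weights and route the
# hidden states over the network instead of a local function call).
d, vocab = 64, 32000
make_layer = lambda: nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
device_part = DeviceSide(nn.Embedding(vocab, d), make_layer())
cloud_part = CloudSide([make_layer() for _ in range(31)],
                       nn.LayerNorm(d), nn.Linear(d, vocab))

hidden = device_part(torch.randint(0, vocab, (1, 8)))  # runs on the user's device
logits = cloud_part(hidden)                            # runs server-side
print(logits.shape)                                    # torch.Size([1, 8, 32000])
```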
Meta FAIR releases SAM 2.1 for improved image segmentation, Spirit LM for multimodal speech/text, Layer Skip for faster LLM inference, and tools for validating post-quantum cryptography.
Meta's AI video tool, Movie Gen, is being tested with Blumhouse and select filmmakers. It generates custom videos from text, edits existing ones, and transforms images into videos.