News
Meta AI has published a guide to fine-tuning LLMs that emphasises dataset quality, noting that small high-quality datasets often outperform larger low-quality ones, and compares full fine-tuning with parameter-efficient fine-tuning (PEFT) techniques.
The guide recommends fine-tuning, particularly PEFT, as a more viable approach than pre-training for smaller teams with limited resources.
It identifies five scenarios where fine-tuning excels: customising tone and format, improving accuracy, addressing niche domains, reducing costs via distillation, and developing new abilities.
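To make the PEFT idea concrete, here is a minimal, dependency-free sketch of a LoRA-style low-rank update, one common PEFT technique (the specific technique and all names here are illustrative assumptions, not taken from the guide): the large base weight matrix W stays frozen, and only two small factors A and B are trained.

```python
# Illustrative LoRA-style PEFT sketch (hypothetical example, not from the guide).
# Matrices are plain lists of lists so the example runs without dependencies.

def matmul(a, b):
    """Multiply two matrices given as lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_forward(x, W, A, B, alpha=1.0):
    """Compute y = x @ (W + alpha * A @ B).

    W (d x d) is the frozen pretrained weight; only the low-rank
    factors A (d x r) and B (r x d) would receive gradient updates.
    """
    delta = matmul(A, B)  # rank-r update to the frozen weight
    W_eff = [[W[i][j] + alpha * delta[i][j] for j in range(len(W[0]))]
             for i in range(len(W))]
    return matmul(x, W_eff)

# A 2x2 frozen weight with a rank-1 update: for a d x d layer this
# trains only 2*d*r parameters instead of d*d.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen identity weight
A = [[0.5], [0.0]]             # d x r, r = 1 (trainable)
B = [[0.0, 0.2]]               # r x d (trainable)
x = [[1.0, 1.0]]               # one input row
y = lora_forward(x, W, A, B)
print(y)
```

The cost saving is the point: the trainable parameter count scales with the rank r rather than with the full layer size, which is why PEFT suits the resource-constrained teams the guide targets.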
Google has launched its AI Academy for American Infrastructure, selecting 15 startups to transform critical infrastructure using AI. The 12-week programme offers mentorship, AI training, and resources, and the startups address issues such as water management, transportation safety, and sustainable manufacturing.
Ornek has developed IC4U, an AI robot guide dog built on NVIDIA Jetson. It features sound sensors and a 3D camera, detects objects, and assists with shopping, aiming to help visually impaired people navigate.
A Stanford study reveals Western bias in AI chatbot alignment, compromising chatbots' effectiveness globally. The study examined nine languages and regional dialects, finding that cultural nuances lead to misunderstandings, and the researchers explore causes of the bias and solutions for more inclusive AI.