A new round of joint safety testing from UK-based nonprofit Apollo Research and OpenAI set out to reduce covert behaviors such as scheming in AI models. What the researchers found could complicate promising ...
OpenAI is shuffling the team that shapes its AI models' behavior, and its leader is moving on to another project within the ...
Tech Xplore (on MSN): AI scaling laws: Universal guide estimates how LLMs will perform based on smaller models in the same family
When researchers are building large language models (LLMs), they aim to maximize performance under a particular computational ...
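The scaling-law idea in the snippet above can be sketched numerically: fit a power law to (model size, loss) points from small models in a family, then extrapolate to a larger size. The figures below are hypothetical illustration data, not results from the study, and the pure power-law form `loss ≈ a * N**(-b)` is a simplifying assumption.

```python
import numpy as np

# Hypothetical (parameter count, held-out loss) pairs for small models
# in one model family. Assumes loss ≈ a * N**(-b), which becomes a
# straight line in log-log space.
params = np.array([1e8, 3e8, 1e9, 3e9])      # model sizes N
losses = np.array([3.10, 2.75, 2.45, 2.18])  # held-out loss at each size

# Fit log(loss) = log(a) - b * log(N) with a degree-1 least-squares fit.
slope, log_a = np.polyfit(np.log(params), np.log(losses), 1)
a, b = np.exp(log_a), -slope

def predict_loss(n_params: float) -> float:
    """Extrapolate the fitted power law to a larger model size."""
    return a * n_params ** (-b)

# Extrapolate to a 30B-parameter model in the same family.
print(f"predicted loss at 30B params: {predict_loss(3e10):.2f}")
```

The fit is deliberately minimal; published scaling laws typically add an irreducible-loss term and account for data and compute budgets, not just parameter count.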
Naomi Saphra thinks that most research into language models focuses too much on the finished product. She’s mining the ...
Research shows advanced models like ChatGPT, Claude and Gemini can act deceptively in lab tests. OpenAI insists it's a rarity ...
How Meta's Code World Model approaches coding with a neural debugger and semantic understanding of program execution. CWM-32B ...
New OpenAI study reveals AI deception risks, highlighting “scheming” where systems knowingly mask actions to succeed.
The Parallel-R1 framework uses reinforcement learning to teach models how to explore multiple reasoning paths at once, ...
Artificial Intelligence (AI) has moved from research labs into our daily lives. It powers search engines, filters content on social media, diagnoses diseases, and guides self-driving cars. These ...
Atlas, Boston Dynamics’ dancing humanoid, can now use a single model for walking and grasping—a significant step toward general-purpose robot algorithms.
OpenAI says its AI models are prone to secretly breaking the rules, and it is testing ways to prevent this before AI becomes more ...
Learn how AI log analysis enhances security. Reduce the load on SOC teams so they can focus on judgment, context, and ...