Apple has released the public beta of watchOS 26, giving users a first look at the features and enhancements coming to the Apple Watch. Among the most anticipated updates is a more refined, subtle integration of artificial intelligence into the operating system, a shift from the more overt AI features of recent versions. The beta lets developers and early adopters test the upcoming software and provide feedback ahead of the official release.

The latest version of watchOS introduces a range of improvements focused on health and fitness tracking, with more personalized insights powered by AI. Users can expect smarter notifications and contextual suggestions based on their daily routines, making the experience more intuitive. The update also brings enhanced voice recognition and improved app performance, delivering a smoother and more responsive interface.

While the AI features in watchOS 26 are designed to be less intrusive, they still provide meaningful assistance, such as better support for health monitoring and more accurate workout tracking. Apple is reportedly working to integrate AI more seamlessly into the watch’s core functions, emphasizing privacy and on-device processing to protect user data.

The beta is still at an early stage, however, and not all features may be fully functional. Users are advised to install it on a secondary device rather than their primary Apple Watch to avoid potential issues. The public beta is available through Apple’s beta software program, and user feedback will be crucial in shaping the final version of the software. watchOS 26 is expected to launch later this year, likely alongside the next generation of Apple Watch. The update underscores Apple’s continued investment in AI, with a focus on user experience and discretion rather than overt technological showmanship.
A U.S. federal judge has ruled in favor of AI company Anthropic, finding that training its AI models on legally purchased physical books without authors’ permission qualifies as fair use under copyright law. The landmark decision marks the first time a court has backed the AI industry’s position that copyrighted works can be used to train large language models (LLMs) without explicit authorization, provided the use is transformative and limited to training.

The case was brought by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of using their works without permission to train its Claude AI chatbot. Judge William Alsup of the Northern District of California ruled that Anthropic’s process of buying physical books, dismantling them, and scanning the pages into a digital training library is fair use, comparing it to a student who learns to write by reading many works rather than copying any single author’s style. However, Alsup also found that Anthropic may have broken the law when it separately downloaded millions of pirated books, and said the company will face a separate trial in December on that issue.

The ruling is a major development for AI companies such as OpenAI, Meta, and Google, which face numerous lawsuits alleging unauthorized use of copyrighted material for AI training. The fair use doctrine, codified in the Copyright Act of 1976, has never been updated to address AI or digital content, leaving courts to interpret how it applies to new technology. Alsup emphasized the transformative nature of AI training, stating that the AI’s output does not replicate specific authors’ creative expression but creates something new from what it has learned across many sources. The decision did not, however, address whether AI-generated content itself infringes copyright, an issue under consideration in other cases.

Anthropic expressed satisfaction with the ruling, saying it aligns with copyright’s goal of fostering creativity and scientific progress.
The company stressed that its models aim not to replicate or replace existing works but to “turn a hard corner and create something different.” Despite the partial victory, Anthropic’s legal challenges continue, particularly over its use of pirated books; the upcoming trial will determine damages for those alleged infringements.

Founded in 2021 by former OpenAI executives and valued at over $60 billion with backing from Amazon, Anthropic promotes AI safety and responsible innovation. Its Claude chatbot, launched in 2023, is trained on millions of books and other materials, including some contested in this lawsuit. The case highlights the ongoing tension between the AI industry’s data needs and copyright holders’ rights, a debate likely to shape the future of AI development and content licensing. Meanwhile, some publishers are exploring licensing agreements with AI firms to monetize their works legally. As AI continues to evolve rapidly, courts worldwide face the challenge of balancing innovation with protecting intellectual property, making this ruling a crucial precedent in AI copyright law.