
X's New Policy Bans Using Posts to Train AI Models

2 days ago

X, formerly known as Twitter, has updated its developer agreement to impose a significant new restriction on AI training. The change, first reported by TechCrunch, stipulates that developers are no longer permitted to use content from X or its API to "fine-tune or train a foundation or frontier model." The move follows xAI's acquisition of X in March 2025, a deal that reportedly valued the platform at $33 billion, and appears to align with xAI's strategy of keeping its proprietary data out of the hands of rival AI companies.

The timing of the update is noteworthy, especially in the context of a similar legal battle involving Reddit. Just days before X's change, Reddit filed a lawsuit against Anthropic, alleging that the AI firm had accessed its site more than 100,000 times since July 2024 in violation of its policies. Reddit's action underscores the growing concern among social media platforms about the ethical and legal implications of AI data scraping.

There is, however, a notable discrepancy in X's policies. While the developer agreement now prohibits the use of X's content for AI training, the platform's privacy policy still allows third-party "collaborators" to train AI models on user data unless users opt out. In practice, outside developers cannot scrape X for AI training, but certain trusted partners may still have access to the data. X itself also continues to feed user data into Grok, xAI's own model, for training purposes.

X's policies on AI have been evolving for some time. In 2023, the platform amended its privacy policy to permit the use of public data to train AI models, and in October 2024 it relaxed the rules further to allow third parties to train their models on X's data. The latest update therefore represents a significant pivot, likely driven by xAI's strategic interests and the broader industry trend toward tighter control over data usage.

The tech community has reacted with mixed opinions. Critics argue that the restriction stifles innovation and limits the availability of diverse datasets for AI development; developers and researchers often rely on open, accessible data to improve their models, and new restrictions could hinder progress. Proponents counter that the change protects user privacy and ensures data is used ethically and responsibly, pointing out that the content generated on X is not just posts but a valuable resource that should be managed carefully.

Industry insiders suggest the move could position X to negotiate lucrative AI training deals with third-party companies. Much as Reddit did in its licensing deal with Google, X may be seeking to monetize its vast troves of data through selective partnerships, generating additional revenue and maintaining a competitive edge in the rapidly evolving AI landscape.

The legal and ethical implications of AI data scraping have been intensely debated in recent months. Platforms such as Reddit and The Browser Company have taken proactive steps to safeguard their data, with The Browser Company adding a similar clause to the terms of use of Dia, its AI-focused browser. These actions reflect a growing trend among tech companies to assert greater control over how user-generated content is used by AI systems.
For X, the updated developer agreement is a strategic attempt to balance the interests of its parent company, xAI, against the needs of its user base and broader ethical considerations in the AI community. By prohibiting general data scraping while retaining options for selective collaborations, X aims to safeguard its data while remaining relevant in the AI market.

The acquisition by xAI has brought significant changes to the platform's policies and operations. xAI, led by Elon Musk, has been vocal about its commitment to developing AI that is both powerful and ethical, and Musk's involvement has added a layer of public scrutiny, with supporters and detractors alike watching closely how the company manages user data and interacts with the rest of the industry.

X's privacy policy, even as it allows AI training by third-party collaborators, emphasizes transparency: users can opt out of having their data used for AI training, which addresses some privacy concerns. The default setting, however, still permits such use, which some users and privacy advocates may find problematic.

Despite the new restrictions, X continues to use user data to enhance its own AI capabilities through Grok. The company argues that this internal use is necessary for improving the user experience and keeping the platform relevant in the age of AI. Grok, which is designed to understand and engage with conversations on X, benefits from a rich dataset spanning a wide range of user interactions and content.

The full impact of the policy change on the AI community has yet to be seen. What is clear is that platforms like X are increasingly aware of the value of their data and are taking steps to protect it. These moves could lead to a more gated and regulated environment in which companies must pay for data access, potentially altering the landscape of AI development and research. In the wake of the update, developers and researchers will have to explore alternative sources of training data or negotiate agreements with platforms like X; for companies training large language models, the cost and accessibility of data will become increasingly critical factors.

Major AI players such as Alphabet and Meta are likely to monitor these developments closely and may follow suit with similar policies, given the strategic advantages and potential revenue opportunities. The industry's response will shape future norms and practices in AI data usage, with implications for both innovation and user privacy.

In conclusion, X's updated developer agreement reflects a broader industry trend toward tighter control over AI data scraping. It signals a shift in how social media platforms value and manage their data, potentially opening the door to new business models and partnerships. While the change may draw criticism from some quarters, many are likely to see it as a necessary step given the rapid advance of AI and the ethical and commercial considerations that come with it.

Evaluation and Company Profiles: Industry experts view X's move as a strategic effort to protect its intellectual property and user data, in line with the broader trend of platforms asserting control over their content. It could lead to increased revenue through selective data-sharing deals, similar to Reddit's arrangement with Google.
xAI, under Elon Musk's leadership, has positioned itself around ethical AI development, and this policy update supports that positioning. Musk's vision for AI attracts significant attention, and his moves are closely scrutinized by both critics and supporters in the tech community. Reddit's legal action against Anthropic, meanwhile, highlights the serious consequences of violating platform data policies and underscores the need for AI companies to navigate this changing landscape carefully.
