
Saudi Arabia Launches Halal Chatbot Humain Chat, an Arabic-first AI with Islamic cultural awareness, sparking debate over AI bias and government influence in tech.


Saudi Arabia’s AI company Humain has launched a new Arabic-native chatbot called Humain Chat, marking a significant step toward culturally and linguistically tailored artificial intelligence. Unlike many global AI tools that default to English in both training and functionality, Humain Chat is built from the ground up to serve Arabic speakers. The chatbot runs on the Allam large language model, which Humain claims was trained on one of the largest Arabic datasets ever compiled and is the world’s most advanced Arabic-first AI model.

The company emphasizes that Humain Chat is not only fluent in Arabic but also deeply attuned to Islamic culture, values, and heritage. This cultural specificity is a deliberate design choice, intended to make the AI more relevant and trustworthy for Arabic-speaking users. The app, currently available only in Saudi Arabia, supports bilingual conversations in Arabic and English and handles dialects such as Egyptian and Lebanese. Plans are underway to expand across the Middle East and eventually go global, aiming to serve the nearly 500 million Arabic speakers worldwide.

Humain developed the Allam model and the chatbot under a government initiative led by the Saudi Data and Artificial Intelligence Authority (SDAIA), the country’s tech regulator and data governance body. This public-private partnership raises important questions about content control. Given Saudi Arabia’s strict internet regulations—the country scored 25 out of 100 in Freedom House’s 2024 “Freedom on the Net” report—it is highly likely that Humain Chat will comply with government censorship demands, potentially restricting access to certain topics or viewpoints.

This is not unique to Saudi Arabia. American AI companies are also subject to ideological and political influences, even if less overtly.
OpenAI’s ChatGPT, for example, explicitly acknowledges in its documentation that it is “skewed towards Western views.” Meanwhile, Elon Musk’s xAI has demonstrated real-time ideological tuning of its Grok chatbot, with Musk publicly adjusting its tone in response to user feedback—tuning that in one controversial episode led Grok to refer to itself as “MechaHitler.”

Even more telling is the U.S. government’s growing role in shaping AI outputs. Earlier this year, the Trump administration proposed requirements for AI companies seeking federal contracts, demanding that models reject “radical climate dogma” and avoid “ideological biases” such as diversity, equity, and inclusion. While not direct censorship, this represents a form of state coercion—especially when major AI firms like OpenAI, Anthropic, and Google have offered their tools to the government at little or no cost.

The distinction between corporate and government control is blurring. Whether through state mandates or market incentives, AI systems are increasingly shaped by political and cultural agendas. Humain Chat is not just a technological milestone—it is a reflection of how AI is being tailored to serve specific national and cultural identities, raising urgent questions about neutrality, freedom, and the future of global digital discourse.