Behind April Fools’ Day: Technology Fights Fraud While Creating It

In response to the increasingly serious problem of fake news, various research teams are using AI to detect and identify it more accurately. But technology cuts both ways: in the shadows, another group of people keeps using the same AI techniques to churn out fake news and fake comments.
Have you received fake news today? According to statistics, the usage of the term "fake news" has increased by 365% since 2016.
Zuckerberg once said that building a comprehensive fake news detection system takes a long time, because the traditional approach is to understand the content of a message and judge it by its release time and source, which demands a great deal of manual work and engineering.
But what if we change our thinking? AI may not need to mimic human reasoning to solve this problem. In fact, new AI methods that help people judge fake news on the Internet already exist.
Fake news: not just for April Fools' Day
Just a few days ago, Microsoft announced that it would not celebrate April Fools' Day this year. Perhaps this is not surprising: Google once had to publicly apologize to its users for an April Fools' prank that went too far.

Since the Internet age began, April Fools' Day has gradually evolved from small pranks into major events that spread online. Seemingly harmless pranks have on some occasions caused public panic, because they spread so widely and looked too "real."
What was supposed to be a lighthearted holiday has become a day some people dread, because so much fake news is produced on it.
So-called fake news is false content produced by some outlets to boost readership or online sharing. Like clickbait producers, fake news producers ignore the truth of the content in order to attract attention and traffic.

Fake news tends to have catchy headlines, sensational stories, or a hook into trending topics, all of which make it easier to earn advertising revenue and attention.
Beyond the gimmicks crafted specifically for April Fools' Day, the convenience of the Internet and the lower barrier to publishing mean that on ordinary days, too, fake news spreads faster and wider than real news. The best remedy for this headache would be an intelligent filter that screens content for us.
Fighting fake news: MIT uses AI to identify fake news from language patterns
MIT researchers have devised a way to identify fake news from its language patterns.

In a paper titled The Language of Fake News: Opening the Black-Box of Deep Learning Based Detectors, an MIT research team used a machine learning model to capture the subtle differences in the language of real news and fake news to determine whether the news is true or false.
They trained convolutional neural networks on datasets of fake and real news. For fake news, they used a popular research dataset hosted on Kaggle that contains about 12,000 sample articles from 244 different websites. The real news came from more than 2,000 New York Times articles and more than 9,000 Guardian articles.

The trained model captures the language of an article as "word embeddings": words are represented as vectors, essentially arrays of numbers, with semantically similar words clustered closer together. This lets the model identify language patterns common to real and to fake news. For a new article, the model scans the text for similar patterns and sends them through a series of layers; the final output layer determines the probability that the article is true or false.
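To make this concrete, here is a minimal sketch of such a word-embedding CNN classifier in PyTorch. It illustrates the general technique rather than the MIT team's actual model; the vocabulary size, filter widths, and random example inputs are all illustrative assumptions.

```python
# Minimal sketch of a word-embedding CNN text classifier, in the spirit
# of the MIT approach (not their actual code). Hyperparameters and the
# random inputs below are illustrative assumptions.
import torch
import torch.nn as nn

class FakeNewsCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, num_filters=128, kernel_sizes=(3, 4, 5)):
        super().__init__()
        # Words become dense vectors; semantically similar words end up close together.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Each convolution scans the text for n-gram patterns of a given width.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes]
        )
        # Final layer maps pooled pattern activations to real/fake logits.
        self.fc = nn.Linear(num_filters * len(kernel_sizes), 2)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Max-pool each filter over the sequence: "did this pattern occur anywhere?"
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))       # logits: [real, fake]

model = FakeNewsCNN(vocab_size=50_000)
logits = model(torch.randint(0, 50_000, (8, 400)))     # 8 articles, 400 tokens each
probs = logits.softmax(dim=1)                          # per-article P(real), P(fake)
```

Each convolution filter learns to respond to one short word pattern, and the max-pooling step asks whether that pattern appears anywhere in the article, which is what lets the final layer weigh patterns typical of real writing against patterns typical of fake writing.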
The model summarizes the characteristics of words that appear frequently in real or fake news. For example, fake news likes to use exaggerated or superlative adjectives, while real news tends to use relatively conservative words.

The MIT researchers say part of their work also opens the black box of deep learning: by identifying the words and phrases the model latches onto and analyzing its predictions, they can see the basis on which the network makes its judgments.
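One common way to perform that kind of inspection is gradient saliency: score each input token by how strongly the "fake" output responds to its embedding. The sketch below reuses the hypothetical FakeNewsCNN from the earlier sketch and illustrates the general technique, not the paper's specific analysis.

```python
# Gradient saliency on the FakeNewsCNN sketch above: which tokens most
# influence the "fake" logit? Illustrative technique, not the paper's method.
token_ids = torch.randint(0, 50_000, (1, 400))   # one (random) article
emb = model.embedding(token_ids)
emb.retain_grad()                     # keep gradients on a non-leaf tensor
x = emb.transpose(1, 2)
pooled = [conv(x).relu().max(dim=2).values for conv in model.convs]
logits = model.fc(torch.cat(pooled, dim=1))
logits[0, 1].backward()               # backprop from the "fake" logit
saliency = emb.grad.norm(dim=2).squeeze(0)   # one score per token position
top_positions = saliency.topk(10).indices    # the 10 most influential words
```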
Paper: https://cbmm.mit.edu/sites/default/files/publications/fake-news-paper-NIPS.pdf
Fighting fake news: Fabula AI identifies fake news based on how it is spread
Fabula AI, a British technology company, says it identifies fake news by analyzing how it spreads.

Fabula AI uses a method called Geometric Deep Learning to detect fake news. Instead of looking at the content of the news, this method looks at how such information spreads on social networks and who is spreading it. They have applied for a patent for this technology.
Michael Bronstein, co-founder and chief scientist of Fabula AI, said: "We have observed for a long time how news spreads on social networks, and our analysis shows that fake news and real news spread differently. The essence of geometric deep learning is that it can process network-structured data, so we can combine heterogeneous data such as user characteristics, the social interactions between users, and the spread of the news itself, and make a judgment from that combination."
Finally, the AI classifies content by credibility and assigns it a rating score. The spread patterns of true and false news can also be visualized: users who mainly share false news show up red, while users who share no false news at all show up blue. Fabula AI says this reveals clear group separation and immediately recognizable differences in how the two kinds of news propagate.
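Fabula AI's actual model and features are proprietary, but the core idea, classifying a whole spread cascade with a graph neural network, can be sketched. The version below uses PyTorch Geometric; the user features, graph shape, and layer sizes are invented for illustration.

```python
# Illustrative sketch of graph classification on a news-spread cascade,
# in the spirit of geometric deep learning. Fabula AI's real model and
# features are proprietary; the feature choices here are assumptions.
import torch
from torch_geometric.nn import GCNConv, global_mean_pool

class SpreadClassifier(torch.nn.Module):
    def __init__(self, num_user_features=16, hidden=64):
        super().__init__()
        # Message passing over the "who shared from whom" graph mixes each
        # user's features with those of their neighbours.
        self.conv1 = GCNConv(num_user_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        # One credibility score per cascade (whole graph), not per user.
        self.readout = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        g = global_mean_pool(h, batch)         # pool node states into one graph vector
        return torch.sigmoid(self.readout(g))  # credibility score in [0, 1]

# Toy cascade: 4 users, edges follow the direction of sharing.
x = torch.randn(4, 16)                             # per-user features (assumed)
edge_index = torch.tensor([[0, 0, 1], [1, 2, 3]])  # 0->1, 0->2, 1->3
batch = torch.zeros(4, dtype=torch.long)           # all nodes belong to graph 0
score = SpreadClassifier()(x, edge_index, batch)
```

The appeal of pooling over the whole cascade is that no single user's features decide the score; it is the collective spread pattern that gets classified, which is exactly the property Fabula cites below.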

In addition, the company claims the main advantage of its technology is that it is independent of the content and language of the news, and that, because it relies on the collective behavior of a platform's users, it is harder to defeat with adversarial techniques. The system can also detect fake news quickly, within a short time after publication, with high accuracy (ROC AUC > 93%).
Blog post: https://fabula.ai/news-index/2019/2/7/fabula-ai-announces-breakthrough-in-algorithmic-fake-news-detection
Of course, AI methods for judging fake news have not yet reached the point where they can be used fully and publicly. They are still in the research stage or require human assistance, and some details remain to be worked out.
But on the other hand,AI's ability to generate text and images has also become more mature with the development of technology. In many places, it has reached the point where the fake is indistinguishable from the real thing. Perhaps a piece of fake news you saw today was written by AI.In a way, this is actually urging the improvement of detection technology.
Fake news: OpenAI fabricates fake news to make it look real
GPT-2, announced by OpenAI at the beginning of this year, relies on a massive parameter count and training corpus. After unsupervised training on a large amount of web text, the system generates and continues text remarkably well. The training text is said to cover 45 million web pages screened via Reddit.
Given a guiding sentence, the model can generate plausible-sounding text, and it shows no hesitation at all when asked to write news.
For example:
Prompt (human-written): Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.
AI-generated continuation: Russia said it had "identified the missile's trajectory and will take necessary measures to ensure the security of the Russian population and the country's strategic nuclear forces." The White House said it was "extremely concerned by the Russian violation" of a treaty banning intermediate-range ballistic missiles.
The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine's Crimea region and backed separatists in eastern Ukraine.
You read that right. The story above was fabricated entirely by artificial intelligence; the only material provided to it was the single prompt sentence shown above.
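OpenAI withheld the full-size model, but smaller GPT-2 checkpoints were released publicly, so the technique itself is easy to reproduce with today's Hugging Face transformers library. A minimal sketch (the model size, sampling settings, and output length here are illustrative choices, not OpenAI's setup):

```python
# Minimal sketch of prompt-based continuation with the publicly released
# small GPT-2 via Hugging Face transformers. Sampling settings are
# illustrative assumptions.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("Russia has declared war on the United States after Donald Trump "
          "accidentally fired a missile in the air.")
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling (rather than greedy decoding) yields the fluent, varied prose
# that makes generated "news" hard to spot.
output = model.generate(
    **inputs,
    max_length=120,          # prompt plus roughly a paragraph of continuation
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run repeatedly, sampling produces a different but equally fluent "report" each time, which is what makes mass-producing fake news so cheap.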

It is true that the OpenAI team has no intention of generating fake news itself, but that cannot stop bad actors from misusing the technology. Citing the model's power, OpenAI also chose not to release the key data and code.
Fake news: AI is also good at generating video content
In addition, people may also lose the ability to distinguish AI-generated video content.

At the beginning of last year, someone uploaded to a video-sharing site what appeared to be footage of the famous French musician Françoise Hardy.
In the video, a voiceover asked her why Trump had White House press secretary Sean Spicer lie about the number of people watching his presidential inauguration.
Hardy responded that Mr. Spicer had simply "presented an alternative set of facts."
However, the video is full of flaws, and Hardy's voice is obviously that of Trump's adviser Kellyanne Conway.
Even more telling is that Hardy, who is supposed to be 73, looks only about 20 years old.
It turns out that this video, titled "Alternative Face v1.1," is an artwork by artist Mario Klingemann. The words Hardy says in this artwork are actually Conway's answers to questions from NBC reporters.
Klingemann used a machine learning technique called a generative adversarial network (GAN). He fed the program a large number of music videos of Hardy in her youth, extracted 68 facial landmarks per frame to obtain 2,000 training samples, and fed them into the pix2pix model. After three days of training, he fed Conway's facial features into the system and obtained this video work.
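The landmark-extraction step Klingemann describes is reproducible with off-the-shelf tools. Here is a rough sketch using OpenCV and dlib's standard 68-point predictor; the file names and output layout are hypothetical.

```python
# Sketch of the landmark-extraction step: pull 68 facial landmarks from
# video frames to build pix2pix training pairs. Uses dlib's standard
# 68-point predictor; file paths here are hypothetical.
import os
import cv2
import dlib

os.makedirs("pairs", exist_ok=True)
detector = dlib.get_frontal_face_detector()
# Pre-trained 68-point model, distributed separately by dlib.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

video = cv2.VideoCapture("hardy_music_video.mp4")  # hypothetical input file
frame_id = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        # Draw the landmarks on a black canvas; (landmark image, real frame)
        # pairs become the input/target examples for pix2pix training.
        canvas = frame.copy() * 0
        for (px, py) in points:
            cv2.circle(canvas, (px, py), 2, (255, 255, 255), -1)
        cv2.imwrite(f"pairs/{frame_id:05d}_input.png", canvas)
        cv2.imwrite(f"pairs/{frame_id:05d}_target.png", frame)
    frame_id += 1
video.release()
```

The (landmark image, real frame) pairs are exactly the input/target format pix2pix trains on; at generation time, landmarks extracted from Conway's footage take the place of the input images.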
Beyond that, GAN-based and similar techniques for generating images, sound, and even face swaps are becoming ever more realistic as algorithms and hardware advance. Technology itself is neither right nor wrong, but as Google Brain researcher Ian Goodfellow put it, "AI will completely change our view of what we can trust."
AI methods are becoming increasingly powerful at distinguishing and identifying fake news, but the same technology makes fake content ever more realistic. The outcome of this spear-versus-shield contest will only be settled with time. Still, we should hold on to the hope that powerful technologies end up used in the right places.
Fighting fakes with AI and creating them are both human choices
Gustave Le Bon explained the origin of fake news in "The Crowd": crowds never desire the truth. Faced with obvious facts they dislike, they will turn away, preferring to worship a fallacy as a god, so long as the fallacy attracts them.
When some media outlets exploit these weaknesses of group consciousness and use AI to manufacture rumors and fake news, the responsibility does not lie with the technology itself. AI has no will of its own to produce, or to eliminate, fake news; behind it stand the outlets' own operations and human intervention.
If we really want to eliminate fake news, what we need to eliminate is actually people’s obsession.
Not a happy April Fools' Day.