HyperAI

Four Facebook Office Buildings Were Evacuated Urgently. Was It a Terrorist Revenge or a Mistake?

6 years ago
神经小兮

Social media has become a major propaganda platform for terrorist organizations, a persistent headache for companies like Facebook and Twitter. To combat terrorists and extremists, these platforms use AI technology to detect and delete related posts promptly. In doing so, however, they have also made enemies of the terrorists...

According to foreign media reports, at around 11 a.m. Pacific Time on July 1, the Facebook office in Menlo Park, California, received a suspicious package. Routine screening detected sarin (a deadly nerve agent), raising fears of a terrorist attack and causing panic.

Currently, the Federal Bureau of Investigation (FBI) has been requested to assist in this case.

Sarin gas attack suspected at Facebook headquarters

The New York Police Department received an anonymous tip about a bomb threat at Facebook's Menlo Park office campus and alerted local authorities around 4:30 p.m., according to a Menlo Park police spokesman.

According to the report, the suspicious package was a large mail bag containing a great deal of mail; it tested positive for sarin after passing through a mail scanner at the Menlo Park facility. Although the result may be a false positive, the danger cannot be ruled out.

After Facebook evacuated four buildings, the FBI has intervened in the investigation

Four buildings on Facebook's Silicon Valley campus were evacuated Tuesday over concerns that packages at a mail facility contained the nerve agent sarin.

Sarin is an extremely volatile nerve agent that readily evaporates from a liquid into a gas. It is clear, colorless, odorless, and tasteless. A pinprick-sized drop of sarin can kill an adult quickly if absorbed through the skin, eyes, or lungs.

This isn't the first time Facebook has received a terrorist threat!

On the afternoon of December 11, 2018, local time, U.S. police evacuated a building on Facebook's headquarters campus due to an explosion threat, but a search that lasted several hours found no signs of an explosive device and the alarm was subsequently lifted.

The incidents Facebook has encountered have much to do with its fight against violent and extremist speech online. As one of the largest social media platforms, Facebook has invested enormous effort in cleaning up its network environment, sometimes at real risk to itself.

Facebook deletes content from any extremist group that praises or supports violence, or that spreads hatred and racism. As a result, Facebook, seen as standing on the government's side, has become a target of these groups.

A short story: The formation of extreme ideas

Around 2014, ISIS was on the rise, and social media and mobile Internet information were exploding.

Although the ideology it preached was extremely conservative, ISIS used a variety of modern channels, such as Twitter, Facebook, and YouTube, to spread extremist ideas and attract supporters from all over the world.

Facebook and Twitter were among the platforms hardest hit by ISIS propaganda

ISIS even knew how to craft its own persona. In addition to posting brutal punishment videos, it also tried to appeal to young netizens.

It even created "Islamic State Cats" accounts that posted photos and videos of kittens in fighters' daily lives, until these accounts were shut down one by one by Twitter and Facebook.

Before being banned, this was one of ISIS's most popular accounts on Twitter.

In the first quarter of 2018, Facebook deleted a total of 28.8837 million posts and closed approximately 583 million fake accounts, mainly information and accounts related to terrorism and hate speech.

The power of giants: quietly erasing traces of disharmony

Facebook says it relies less and less on human intervention when blocking and deleting posts: 99.5% of terrorism-related posts are found through its artificial intelligence technology.

One of these techniques is image recognition and matching. Once a picture posted by a user looks suspicious, Facebook automatically matches it with an algorithm to determine whether it is related to ISIS propaganda videos, or whether it can be linked to previously deleted extremist pictures or videos, and then bans it accordingly.
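The matching idea can be illustrated with perceptual hashing: a compact fingerprint is computed for each known banned image, and new uploads are compared against that database. The 8x8 average hash below is a deliberately minimal sketch; Facebook's production matcher is far more robust, and the function names and threshold here are illustrative assumptions, not its actual API.

```python
# Minimal perceptual-hash matching sketch (illustrative only).
# A real system would use a robust hash and a large fingerprint database.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int:
    one bit per pixel, set when the pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_banned(candidate_hash, banned_hashes, threshold=5):
    """Flag the image if it is within `threshold` bits of any known hash,
    so lightly edited re-uploads still match."""
    return any(hamming(candidate_hash, h) <= threshold for h in banned_hashes)
```

Because near-duplicates produce hashes only a few bits apart, re-uploads that have been slightly cropped, recompressed, or recolored can still be caught without an exact byte-for-byte match.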

Facebook's technical team once described how its image recognition tool works in a blog post, "Rosetta: Understanding text in images and videos with machine learning".

The image recognition tool Rosetta includes special handling for Arabic text recognition

Rosetta uses Faster R-CNN to detect text regions, then performs recognition with a fully convolutional ResNet-18 model trained with CTC (Connectionist Temporal Classification) loss, using an LSTM to improve accuracy.

The final generated text recognition model structure
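A CTC-trained recognizer like the one described above emits, for each horizontal slice of the detected text region, a score for every character plus a special blank symbol; decoding collapses that frame sequence into a string. The sketch below shows greedy CTC decoding with a toy alphabet and made-up scores, purely to illustrate the collapse-repeats-and-drop-blanks rule (the real model's alphabet covers many scripts, including Arabic).

```python
# Greedy CTC decoding sketch: per-frame best symbol, then collapse
# consecutive repeats and remove blanks. Toy alphabet and scores.

BLANK = "-"  # the CTC blank symbol

def ctc_greedy_decode(frame_probs, alphabet):
    """frame_probs: list of per-timestep score lists aligned with `alphabet`.
    Returns the decoded string."""
    # best symbol at each timestep
    best = [alphabet[max(range(len(p)), key=p.__getitem__)] for p in frame_probs]
    out = []
    prev = None
    for ch in best:
        if ch != prev and ch != BLANK:
            out.append(ch)
        prev = ch
    return "".join(out)
```

Note that a blank between two identical symbols is what lets CTC output doubled letters: the frame sequence a, a, b, blank, b decodes to "abb", not "ab".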

In addition, Facebook is also conducting text analysis research, analyzing the language terrorists may use on the site and taking countermeasures immediately once published content involves terrorism.
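At its simplest, such text analysis can be pictured as scoring posts against weighted terms and routing high-scoring ones for action. The term list, weights, and threshold below are made-up placeholders for illustration; real moderation systems use trained classifiers over far richer features, not a keyword table.

```python
# Toy keyword-weighted flagging sketch (placeholder terms and weights,
# not Facebook's actual system).

FLAG_TERMS = {"attack": 2.0, "recruit": 1.5, "martyr": 2.5}  # hypothetical

def risk_score(post):
    """Sum the weights of flagged terms appearing in the post."""
    words = post.lower().split()
    return sum(FLAG_TERMS.get(w, 0.0) for w in words)

def needs_review(post, threshold=3.0):
    """Route a post onward (e.g. to human review) when its score
    crosses the threshold."""
    return risk_score(post) >= threshold
```

A real pipeline would replace the keyword table with a trained text classifier, but the overall shape, score then threshold then act, is the same.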

In addition to increasingly powerful AI review tools, Facebook also maintains a large manual review operation. Its security organization is split into two teams, community operations and community integrity; the community integrity team is mainly responsible for building automated tools for the report-and-response mechanism.

Currently, this manual review team has grown to more than 20,000 people. With Facebook's roughly 2 billion active users, each reviewer must cover about 100,000 users.

With great power comes great responsibility

All the information we encounter has been filtered through layers of algorithms. Platforms such as Facebook, Twitter, Weibo, and TikTok block and remove large amounts of content involving terrorism, subversion, pornography, and more every day, which inevitably harms the interests of some organizations.

What's more, in this world, forces from different governments, organizations, nationalities, and cultures are interfering with the presentation of information.

In the past, ISIS attacked Charlie Hebdo in Paris; now Facebook has been hit by a string of unexpected incidents. Technologies and platforms alike often proclaim their neutrality, but they cannot fully stand apart. In today's world, the good guys can only keep fighting terrorist forces while striving to protect themselves.

In the Marvel world, more than one superhero has said this:

With great power comes great responsibility.
