
The Inner Secrets Are Written on the Face. Scientists Use Facial Recognition to Predict Sexual Orientation and Criminal Tendencies


By Super Neuro

There is an old Chinese saying that "appearance reflects the heart," and Stanford scientists recently set out to verify it using facial recognition technology.

In their research, they put forward a bold conjecture: a person's sexual orientation and criminal tendencies can be inferred from their face. It sounds incredible. Naturally, some people are curious and hope it can be put into practice as soon as possible, while others criticize it fiercely, arguing that it amounts to blatant discrimination.

Facial recognition is most widely used in security, surveillance, financial risk control, and similar fields, mainly for identity verification. Now, led by a Stanford University research team, it has found a new direction: using facial recognition to study people's sexual orientation and criminal tendencies.

The backdrop to studying sexual orientation is that social progress has allowed marginal cultures to grow, integrate further into the mainstream, and be accepted by more people. Diverse gender identities are increasingly acknowledged, and different sexual orientations are recognized by society and even by law.

This is especially visible in social software: not only are there many apps designed specifically for people of minority sexual orientations, but as early as 2014 Facebook began to distinguish gender from sexual orientation, expanding the original two choices in the user gender field to 58, including Androgyne, Male to Female, Transgender Female, Two-spirit, and more.

The Facebook registration page offered 71 gender options (left), while the Google I/O 2019 registration page offered 5 gender options plus a custom field (right)

However, that long list of options is no longer available on the current website. Besides male and female, there is now a custom option, which means you can fill in any gender you like.

But if someone told you they could tell how likely you are to be gay from your photo, how would you react? And what if they could also infer from your photo that you have a high chance of committing a crime?

When AI becomes a gaydar

In 2017, a team from Stanford University published a paper, "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images," linking AI, facial images, and homosexuality for the first time.

Facial images and composite facial features synthesized from multiple photos, grouped by sexual orientation

Their approach collects facial photos from social networking sites, uses a deep neural network (DNN) to extract facial features, and then applies conventional machine learning classifiers to those features to distinguish sexual orientation.
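
As a rough illustration of that kind of pipeline, here is a minimal sketch (not the study's code: the off-the-shelf torchvision ResNet stands in for the face-specific network the paper used, and the images and labels are random placeholders):

```python
# Minimal sketch of the pipeline described above: a pretrained DNN extracts
# image features, and a conventional classifier is trained on top of them.
# The backbone, images, and labels are placeholders, not the study's model or data.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.linear_model import LogisticRegression
from PIL import Image

# Pretrained CNN used purely as a fixed feature extractor (classifier head removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(images):
    """Return one feature vector per PIL image."""
    with torch.no_grad():
        feats = [backbone(preprocess(img).unsqueeze(0)).squeeze(0).numpy() for img in images]
    return np.stack(feats)

# Placeholder data: random "face" images with made-up binary labels.
rng = np.random.default_rng(0)
images = [Image.fromarray(rng.integers(0, 255, (256, 256, 3), dtype=np.uint8)) for _ in range(8)]
labels = [0, 1] * 4

# A simple classifier trained on the DNN features.
X = extract_features(images)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(X[:2]))
```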

The end result: the AI model's accuracy was better than that of human judges, and the algorithm could infer a person's sexual orientation from certain facial features.

In less than a year, a master's student named John Leuner reproduced and improved on this research in his paper "A Replication Study: Machine Learning Models Are Capable of Predicting Sexual Orientation From Facial Images."

Leuner used essentially the same method but made several improvements, such as using different datasets and making the models more robust to interference in the photos.

What did this study find? 

The Stanford method used two models, one based on a deep neural network (DNN) and one based on facial morphology (FM); Leuner added a third, a classifier for blurred photos.

In his paper, he trained the three models and evaluated their predictive performance on a dataset of 20,910 photos from an online dating website.

Because the models draw on data from different countries and ethnicities, they generalize better. The study also looked at blurred photos: even for heavily blurred images, the AI can make predictions from the combined information of the face and the background.

Moreover, the tests found that even if a person deliberately wears makeup or glasses, hides their facial hair, or changes the angle of their face, the model's predictions do not change. In other words, even if a straight man dresses in drag, the DNN can still tell that he is straight.

Test photos of the same person in different outfits and angles
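
A rough illustration of how such a perturbation check might look in code (a hypothetical sketch: the predict function is only a stand-in for a trained classifier, and the image is randomly generated):

```python
# Sketch: checking whether predictions stay stable when the input image is
# perturbed (blurred, rotated). predict() is a placeholder for a trained
# classifier; the image is a random stand-in.
import numpy as np
from PIL import Image, ImageFilter

def predict(image):
    """Placeholder for a trained model; returns a score in [0, 1]."""
    return float(np.asarray(image, dtype=np.float32).mean() / 255.0)

def prediction_under_perturbation(image, blur_radius=8, angle=15):
    """Compare the score on the original image with blurred and rotated versions."""
    return {
        "original": predict(image),
        "blurred": predict(image.filter(ImageFilter.GaussianBlur(radius=blur_radius))),
        "rotated": predict(image.rotate(angle)),
    }

img = Image.fromarray(
    np.random.default_rng(0).integers(0, 255, (224, 224, 3), dtype=np.uint8)
)
print(prediction_under_perturbation(img))
```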

Overall, Leuner's work is a replication and extension of the Stanford study, and both use data to argue that facial information can reveal a person's sexual orientation.

Why use AI to reveal other people’s secrets?

So why does a study like this, which shows AI outperforming humans and could in some respects deepen our understanding of ourselves, or even help the law prevent tragedies, cause such an uproar?

On the one hand, it touches the sensitive topic of homosexuality. When the Stanford paper appeared on arXiv, it caused an uproar, and some gay groups pushed back hard, feeling their rights were being violated by the technology.

In 2016, a similar study triggered widespread social discussion.

Two researchers from Shanghai Jiao Tong University submitted a paper to arXiv, "Automated Inference on Criminality using Face Images," which used facial recognition to identify criminals.

Several sample photos used in the study, the top row shows the perpetrators

Using facial information to reveal a person's sexual orientation or criminal tendencies is worrying enough. If we go a step further and extend it to predicting emotions, IQ, or even political stance, will it eventually lead to serious prejudice and discrimination?

For example, in places where homosexuality is illegal, using such technology for law enforcement could end in bloodshed. And predicting a person's criminal tendencies and arresting them in advance is a scene straight out of science fiction films.

Some people compare this line of research to "physiognomy."

Another point many people question and criticize is whether this type of research is really just "pseudoscience" dressed up as AI.

To some, judging human behavior from facial features is reminiscent of physiognomy; the difference is that this research leans on far more data, which makes the AI's predictions look more "scientific."

Interestingly, the crime-prediction paper listed a classic book on physiognomy, "Complete Collection of Divine Physiognomy," among its references.

Many people therefore question whether these studies, even with high reported accuracy, really reveal a genuine connection between faces and the traits being predicted.

Some of the features that were of interest in the study

When the Stanford research paper was released, many people challenged its conclusions, arguing that the dataset was too small and the conclusions too confident. The same criticism applies to the crime-prediction work.

Dr Richard Tynan of Privacy International puts it this way: “As an individual, you can’t possibly know how a machine makes a judgment about you.

On small data sets, algorithms, AI, and machine learning can create arbitrary and absurd correlations. It’s not the machines’ fault, but it’s dangerous to apply complex systems in inappropriate ways.” 

Technology is not what is scary; prejudice and ill intent are

We might as well make a bold conjecture: perhaps what people subconsciously fear is not what AI will discover, but what someone will make of the results of AI analysis.

AI builds its case by relying on ever more data to be persuasive. But don't forget that it is humans, who are inherently biased, who design and train the AI.

What AI derives from data is nothing more than numbers; how those numbers are interpreted, and to what end, is left to people.

A study from Princeton University's Center for Information Technology Policy shows that machine learning and AI algorithms can inadvertently reinforce and amplify biases that already exist in society or in users' subconscious. For example, in many contexts "doctor" ends up associated with "he," while "nurse" ends up associated with "she."
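
This kind of association can be measured directly in pretrained word embeddings. A minimal sketch (the embedding model and word pairs below are illustrative choices, not the Princeton study's code):

```python
# Sketch: measuring gendered word associations in pretrained word embeddings.
import gensim.downloader as api

# Small pretrained GloVe vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-50")

pairs = [("doctor", "he"), ("doctor", "she"),
         ("nurse", "he"), ("nurse", "she")]

for word, pronoun in pairs:
    # Cosine similarity: higher values mean the two words occur in more similar contexts.
    print(f"similarity({word}, {pronoun}) = {vectors.similarity(word, pronoun):.3f}")
```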

Coming back to the research on AI detecting homosexuality: if the target of prediction were changed, say to a particular disease, then with the same method and similar conclusions, many of the people who now object might instead praise it as great work.

If the technology simply reveals underlying patterns, there is nothing to fear. But if prejudice and ill intent are injected into it, the more powerful the technology becomes, the greater the damage it will do.

In response to the controversy, the Stanford researchers tweeted: "If you discover that a popular technology harbors a threat, what do you do? Keep it secret, or study it, let your peers review it, and issue a warning?"