The First AI War of Words of 2019 Is Playing Out on Twitter

By Super Neuro
As 2019 begins, the first war of words in artificial intelligence has broken out on Twitter, triggered by a mistake made by the media site VentureBeat.
Yann LeCun, Andrew Ng, and Nobel laureate James D. Watson were all name-checked in the dispute, which ultimately centered on gender bias in the AI workplace.
The protagonist who personally waded into the fight this time is Anima Anandkumar. Though not as famous as the field's first-tier celebrities, she is an important figure in machine learning.
Anima is currently the director of machine learning research at NVIDIA and a tenured professor in Caltech's Computing and Mathematical Sciences department.

Before NVIDIA, Anima was chief scientist at AWS, and earlier worked at MIT and Microsoft Research.
Her research spans large-scale learning, deep learning, probabilistic models, and non-convex optimization. She also reviews for NeurIPS and did a great deal to push for the conference's name change.
Today alone, Anima posted or replied to nearly 50 tweets, joining other users in a storm of protest against VentureBeat for running an inappropriately cropped photo with a news article.
Beyond the photo itself, Anima also voiced her indignation at the gender discrimination that has long existed in AI research and at the treatment she herself has experienced.
The source of the dispute: "headless" female AI scientists
On January 2, VentureBeat, a well-known American technology media outlet, published an article in which four AI leaders gave their predictions for artificial intelligence in 2019, presenting the views of Yann LeCun, Andrew Ng, Hilary Mason, and Rumman Chowdhury on the year ahead.
Nothing was wrong with the article's headline or content, but its header image contained a glaring mistake.

In the image accompanying the article on VentureBeat's site, the heads of the two female scientists are cropped out.
The two women, shown only from the neck down, are Hilary Mason, head of machine learning at cloud service giant Cloudera, and Rumman Chowdhury, a senior lead of Accenture's responsible artificial intelligence practice.
A Twitter user was the first to spot the problem and posted a complaint:

——Twitter user @Mraginsky
Many users agreed that, whatever the reason, it was inappropriate for a media site to run such an image. As of this writing, however, the article on VentureBeat's official website (https://venturebeat.com/) has not been corrected.
Are female scientists too sensitive?
The discussion has been running since yesterday afternoon, China time, and continues now. Anima shared her view of the article on Twitter: VentureBeat's handling was inappropriate and should be corrected as soon as possible.
One stone stirred up a thousand waves, and another AI heavyweight jumped in to argue that Twitter was the one to blame, and that the images it captured were simply incomplete.

The person who jumped in was Thomas G. Dietterich, whose Twitter followers include Andrew Ng, Fei-Fei Li, and LeCun. Dietterich was also an active participant in the recent "AI ethics" war of words sparked by LeCun.
Anima replied that VentureBeat's blunder also reflects how little women have been valued in the technology industry. She cited several examples she had encountered in the workplace, such as:

Anima has posted many tweets about her career
In an interview with Nature, she discussed the #protestNIPS campaign (which led NIPS to rename itself NeurIPS) and @InclusionInML (an organization advocating greater inclusion and the elimination of bias in machine learning). Yet when the article appeared in the journal Nature Machine Intelligence, it did not highlight the contributions of the female scientists behind this work, instead devoting much of its space to the internet-famous robot Sophia.
She believes that, in chasing attention, the media also filters news through a patriarchal lens, which is one reason women are not taken seriously in the tech world.
NeurIPS's instinct for self-preservation: adding female chairs
Compared with VentureBeat's small editorial mistake, NeurIPS showed a much stronger instinct for self-preservation last year.
At the conference that had just ended, NeurIPS not only hurried to change the name it had used for many years, but also, beyond the general chair, placed women in two of the four program positions.

NeurIPS has one chair, one program chair, and three co-program chairs.
The program chair is the position second only to the general chair, and the program committee consists of one lead chair and three co-chairs. This year's program chair is Hanna Wallach (38) of Microsoft Research.

Beyond her many achievements in the field of AI, Hanna is also one of the founders of the Women in Machine Learning (WiML) conference.
The event dates back more than a decade: attending NIPS, Hanna found only four women researchers present, and from that the idea of Women in Machine Learning was born.
Program leaders like Hanna Wallach, with the right values and an outstanding track record, are the best spokespeople NeurIPS could hope for as it scrambles to clean up its image.
Another important program co-chair is Kristen Grauman (39) of Facebook AI. Winner of the 2011 Marr Prize, she did research at MIT's famous CSAIL (Computer Science and Artificial Intelligence Laboratory) and has published as many as 7 CVPR and 4 ICCV papers in a single year.

Her main research areas are computer vision and machine learning, and more specifically, visual search and object recognition.
Her most influential result, the pyramid match kernel, is an algorithm for matching sets of image features. The 2005 paper she co-authored, "The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features," is a classic of the field.
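For readers curious how it works: the kernel bins each image's local features into grids of increasing coarseness and counts, at each level, how many feature pairs newly fall into the same cell, discounting coarser (weaker) matches. Below is a minimal sketch of that idea, assuming 2-D features in the unit square and the simplified 1/2^i level weighting; it is an illustration of the technique, not the authors' reference implementation.

```python
import numpy as np

def pyramid_match(X, Y, num_levels=4):
    """Simplified pyramid match kernel between two feature sets.

    X, Y: (n, 2) arrays with coordinates in [0, 1). Grid cells double
    in size at each level; matches first found at coarser levels are
    discounted by 1 / 2^i because they are weaker correspondences.
    """
    score = 0.0
    prev_matches = 0.0
    for i in range(num_levels):
        cell = 2.0 ** (i - num_levels + 1)  # finest grid first
        hist_x, hist_y = {}, {}
        for p in X:
            k = tuple(np.floor(p / cell).astype(int))
            hist_x[k] = hist_x.get(k, 0) + 1
        for p in Y:
            k = tuple(np.floor(p / cell).astype(int))
            hist_y[k] = hist_y.get(k, 0) + 1
        # Histogram intersection: pairs matched at this resolution.
        matches = sum(min(c, hist_y.get(k, 0)) for k, c in hist_x.items())
        # Count only newly matched pairs, weighted by match quality.
        score += (matches - prev_matches) / (2.0 ** i)
        prev_matches = matches
    return score

# A set matched against a slightly perturbed copy of itself scores far
# higher than against an unrelated random set.
rng = np.random.default_rng(42)
A = rng.uniform(0, 1, (50, 2))
B = np.clip(A + rng.normal(0, 0.01, A.shape), 0, 0.999)
C = rng.uniform(0, 1, (50, 2))
print(pyramid_match(A, B), pyramid_match(A, C))
```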
Both her productivity and the quality of her work are beyond the reach of many male researchers.
Eliminating AI bias is not just about eliminating social bias
In the engineering world the gender ratio is already heavily skewed, and female engineers and scientists who manage to stand out are rare.
Not long ago, Amazon's AI resume-screening tool came under suspicion of gender discrimination and was hastily taken offline. Although the problem was later attributed to a training error, it showed, objectively, that women remain a minority and an overlooked group in this field.
AI systems learn from data, and that data comes from real society. If the field is dominated by male perspectives, then even when those perspectives are transmitted unconsciously, AI will absorb the public's biases.
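How such bias gets absorbed is easy to demonstrate. The sketch below is purely hypothetical (the data and setup are invented for illustration and have nothing to do with Amazon's actual system): a classifier trained on historically biased decisions reproduces the bias even though no one programmed it in.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" hiring data in which women faced a higher
# skill bar than men for the same positive outcome.
rng = np.random.default_rng(0)
n = 2000
skill = rng.uniform(0, 1, n)
gender = rng.integers(0, 2, n)          # 0 = male, 1 = female
hired = (skill > np.where(gender == 1, 0.8, 0.5)).astype(int)

# Train an ordinary classifier on the biased historical labels.
model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two candidates with identical skill but different gender: the model
# faithfully reproduces the historical gap.
candidates = np.array([[0.7, 0.0], [0.7, 1.0]])
print(model.predict_proba(candidates)[:, 1])
```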
Still, as society develops, more and more female scientists are emerging in AI, and growing public attention is helping women take a more important place in the field.