CVPR 2020 Results Released! 1,470 Papers Accepted, and One Huawei Team Had 7 of Its 11 Submissions Accepted

Today, the top computer vision conference CVPR released its list of accepted papers, 1,470 in total. Some researchers have already posted their results; let's take a look.
The top computer vision conference CVPR has released its list of accepted papers! This afternoon, CVPR announced the IDs of the accepted papers: 1,470 in total.
This year there were 6,656 valid submissions, so the acceptance rate is about 22%, lower than 25% in 2019 and 29% in 2018.
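As a quick sanity check, the 22% figure follows directly from the reported counts (accepted papers divided by valid submissions); a minimal sketch:

```python
# Acceptance rate implied by the numbers reported in the article:
# 1,470 accepted papers out of 6,656 valid submissions.
accepted = 1470
valid_submissions = 6656

rate = accepted / valid_submissions
print(f"CVPR 2020 acceptance rate: {rate:.1%}")  # about 22.1%, reported as 22%
```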
Accepted paper ID list address:
http://cvpr2020.thecvf.com/sites/default/files/2020-02/accepted_list.txt
It is reported that CVPR 2020 enlisted 3,664 reviewers and 198 area chairs to review submissions and control paper quality. Despite the growing number of submissions, quality control remains strict.
Researchers are showing off their report cards
After the results were announced, they sparked extensive discussion among scholars at home and abroad.
On Zhihu, one author had 7 papers accepted and posted his report card, with joy overflowing the screen.

Researchers on Twitter are also happily showing off their results:

Every year when the top conferences announce their results, some people are happy and some are sad. I hope everyone can face the outcome with a good attitude.
Submissions keep rising year after year, but the acceptance rate keeps falling
As the top conference in computer vision, CVPR has set new records in the number of valid submissions each year in recent years.
Last year's CVPR 2019 conference released the following data: before 2005, valid submissions numbered under 1,000 and accepted papers under 500.
By 2017, however, valid submissions exceeded 2,500; they rose to about 3,500 in 2018, and in 2019 they surpassed 5,000.

Although submissions have increased year by year, the rate at which reviewers accept papers has declined year by year.
A reviewer commented on the release of the CVPR results.

He once said on Weibo that competition in the CV field is too fierce: the appendix of one CVPR paper he reviewed ran 20 pages, while the conference requires only 8 pages of main text (excluding references). In such a competitive environment, if a method is not innovative enough, no amount of supplementary material will help.
CVPR 2019 Classic Review
Although the IDs of the accepted papers have been officially announced, detailed information about the papers has not yet been released. On the occasion of the announcement, let's review the classic award-winning works of CVPR 2019.
Best Paper

Summary: The researchers proposed a new theory of Fermat paths of light, traveling between a known visible scene and an unknown object outside the line of sight of a transient camera. These light paths either reflect specularly or reflect off object boundaries, and thus encode the shape of the hidden object.
The researchers showed that Fermat paths correspond to discontinuities in the transient measurements. They then derived a new constraint relating the spatial derivative of the path length at these discontinuities to the surface normal.
Based on this theory, the researchers proposed an algorithm called Fermat Flow to estimate the shape of non-line-of-sight objects. The method, for the first time, accurately recovers the shapes of complex objects hidden around corners or behind diffusers, ranging from diffuse to specular surfaces.
Finally, the method is independent of the specific technology used for transient imaging. Thus, the researchers demonstrated millimeter-scale shape recovery from picosecond-scale transients using SPADs and ultrafast lasers, as well as micrometer-scale reconstruction from femtosecond-scale transients using interferometry.
Best Student Paper

Summary: Vision-Language Navigation (VLN) is the task of having an embodied agent navigate a real 3D environment by following natural language instructions.
In this paper, the researchers study how to address three key challenges of this task: cross-modal grounding, ill-posed feedback, and the generalization problem.
First, they proposed a novel Reinforced Cross-Modal Matching (RCM) method that enforces cross-modal grounding locally and globally via reinforcement learning (RL). In particular, a matching critic is used to provide intrinsic rewards to encourage global matching between instructions and trajectories, and a reasoning navigator is used to perform cross-modal grounding in local visual scenes.
Evaluation on the VLN benchmark dataset shows that their RCM model significantly outperforms previous methods by 10% on SPL and achieves new state-of-the-art performance.
To improve the generalizability of the learned policy, they further introduce a Self-Supervised Imitation Learning (SIL) method that explores unseen environments by imitating the agent's own past good decisions.
They ultimately demonstrate that SIL can approximate a better and more efficient policy, greatly reducing the success-rate gap between seen and unseen environments (from 30.7% to 11.7%).
Longuet-Higgins Award
In addition, it is worth mentioning that at CVPR 2019, the Longuet-Higgins Award, an honor even weightier than the Best Paper Award, was given to Jia Deng, Fei-Fei Li, Jia Li, and others for their ImageNet work "ImageNet: A Large-Scale Hierarchical Image Database".

This paper was published at CVPR 2009 and has received 11,508 citations to date.
The year after this paper was published, the ImageNet Challenge, a grand event in the field of computer vision, kicked off. ImageNet has since become a benchmark for visual recognition and has driven great breakthroughs in the field.
-- over--