arXiv Selection: Five of the Hottest Papers in June

Founded in 1991, arXiv.org has collected more than 1 million preprints to date, and in recent years its monthly submissions have exceeded 10,000, making it a huge treasure trove for learning. This article lists the hottest artificial intelligence papers on arXiv.org from the past month for your reference.
As researchers' dedicated system for "staking a claim" on new results, arXiv.org hosts a large number of research papers across fields such as physics, mathematics, and computer science, contributed by researchers from all over the world.
Since 2016, its monthly submissions have exceeded 10,000. This huge body of papers constitutes a real treasure trove of learning material that you can draw on to solve data science problems. But the sheer volume also makes it difficult to sift through.

To this end, we have screened the latest research papers on arXiv.org and compiled a list of the hottest papers in artificial intelligence, machine learning, and deep learning, spanning related subjects such as statistics, mathematics, and computer science.
We hope to save you some time by selecting articles that are representative of what data scientists care about. The papers listed below are only a small portion of everything that appears on arXiv. They are listed in no particular order, and each paper is accompanied by a link and a brief summary.
Since these are academic research papers, they are usually aimed at graduate students, postdocs, and experienced professionals, and they often involve advanced mathematics, so be prepared. Now, enjoy!
"Monte Carlo Gradient Estimation in Machine Learning"
Paper link: https://arxiv.org/pdf/1906.10652.pdf
Recommended level: ★★★★★
This paper is a broad and accessible survey of the methods available in machine learning and the statistical sciences for Monte Carlo gradient estimation: the problem of computing the gradient of an expectation of a function with respect to the parameters of the distribution being integrated, also known as sensitivity analysis.
In machine learning research, this gradient problem lies at the heart of many learning tasks (supervised, unsupervised, and reinforcement learning alike). As the Google researchers behind the survey note, such gradients are usually rewritten in a form that permits Monte Carlo estimation, so that they can be computed and analyzed conveniently and efficiently.
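To make the problem concrete, here is a minimal NumPy sketch (our illustration, not code from the paper) of two classic estimators the survey covers: the score-function (REINFORCE) estimator and the pathwise (reparameterization) estimator, both approximating the gradient of E[f(x)] with respect to theta for x ~ N(theta, 1) and f(x) = x^2, where the analytic answer is 2*theta.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return x ** 2  # the function whose expectation we differentiate

theta, n = 1.5, 200_000

# Score-function (REINFORCE) estimator:
# grad = E[f(x) * d/dtheta log N(x; theta, 1)]; for unit variance the
# score is simply (x - theta)
x = rng.normal(theta, 1.0, size=n)
score_grad = np.mean(f(x) * (x - theta))

# Pathwise (reparameterization) estimator: write x = theta + eps with
# eps ~ N(0, 1), then differentiate through f: d f(x) / d theta = 2 * x
eps = rng.normal(0.0, 1.0, size=n)
path_grad = np.mean(2 * (theta + eps))

print(score_grad, path_grad)  # both should be close to 2 * theta = 3.0
```

The score-function estimator only requires the gradient of the log-density, whereas the pathwise estimator exploits the differentiability of f and typically has lower variance, a trade-off the survey examines in depth.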

"An Introduction to Variational Autoencoders"
Paper link: https://arxiv.org/pdf/1906.02691v1.pdf
Recommended level: ★★★★★
Variational autoencoders provide a principled framework for learning deep latent variable models and the corresponding inference models. This paper introduces variational autoencoders and some important extensions.
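As a rough illustration of that framework (a minimal sketch with our own choice of dimensions, not the paper's reference implementation), the following PyTorch snippet wires together a Gaussian encoder, the reparameterization trick, and the negative ELBO objective:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE sketch: Gaussian encoder q(z|x), Bernoulli decoder p(x|z)."""
    def __init__(self, x_dim=784, z_dim=20, h_dim=400):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def neg_elbo(x, logits, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I))
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Smoke test on random "images" in [0, 1]
x = torch.rand(8, 784)
model = VAE()
logits, mu, logvar = model(x)
print(neg_elbo(x, logits, mu, logvar).item())
```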

"Generative Adversarial Networks: A Survey and Taxonomy"
Paper link: https://arxiv.org/pdf/1906.01529v1.pdf
Recommended level: ★★★★★
In the past few years, Generative Adversarial Networks (GANs) have driven some of the most revolutionary advances in computer vision, such as image generation, image-to-image translation, and facial attribute editing.
Although GAN research has made breakthroughs, applying GANs to practical problems still faces three main challenges: (1) high-quality image generation; (2) diverse image generation; (3) stable training.
The authors classify the most popular GANs into architecture variants (architecture-variants) and loss variants (loss-variants), and then tackle the three challenges from these two perspectives.
The paper reviews and discusses 7 architecture-variant GANs and 9 loss-variant GANs, aiming to provide an in-depth analysis of current research on improving GAN performance.
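To give a flavor of what a "loss variant" means in practice, here is a small PyTorch sketch (our example, not taken from the survey) contrasting the original minimax generator loss with the widely used non-saturating variant, which targets the vanishing-gradient side of the stability problem:

```python
import torch
import torch.nn.functional as F

def g_loss_minimax(fake_logits):
    """Original minimax generator loss: minimize E[log(1 - D(G(z)))].
    Saturates (tiny gradients) when the discriminator confidently rejects fakes."""
    return -F.softplus(fake_logits).mean()  # = E[log(1 - sigmoid(logits))]

def g_loss_nonsaturating(fake_logits):
    """Non-saturating variant: minimize E[-log D(G(z))].
    Same fixed point, but much stronger gradients early in training."""
    return F.softplus(-fake_logits).mean()  # = E[-log sigmoid(logits)]

# When D confidently rejects fakes (very negative logits), the minimax loss
# is nearly flat while the non-saturating loss still provides a useful gradient.
logits = torch.tensor([-5.0], requires_grad=True)
for loss_fn in (g_loss_minimax, g_loss_nonsaturating):
    (grad,) = torch.autograd.grad(loss_fn(logits), logits)
    print(loss_fn.__name__, grad.item())
```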

"Learning Causal State Representations of Partially Observable Environments"
Paper link: https://arxiv.org/pdf/1906.10437.pdf
Recommended level: ★★★★
Intelligent agents can cope with complex, sensory-rich environments by learning task-agnostic state abstractions. In this paper, the authors propose a mechanism for approximating causal states, which optimally compress the joint history of actions and observations in a partially observable Markov decision process. The proposed algorithm extracts causal state representations from an RNN that is trained to predict subsequent observations given the history. The authors show that these learned state abstractions can be used to efficiently learn policies for reinforcement learning problems.
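The core training signal is next-observation prediction. Below is a hypothetical PyTorch sketch of that idea: a GRU consumes the action-observation history, and its hidden state doubles as the learned state representation (the module name and dimensions are our own illustrative choices, not the paper's):

```python
import torch
import torch.nn as nn

class NextObsPredictor(nn.Module):
    """Hypothetical sketch: a GRU consumes the (observation, action) history;
    its hidden state serves as a learned state representation, trained by
    predicting the next observation."""
    def __init__(self, obs_dim=16, act_dim=4, hid_dim=64):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hid_dim, batch_first=True)
        self.head = nn.Linear(hid_dim, obs_dim)

    def forward(self, obs, act):
        h, _ = self.rnn(torch.cat([obs, act], dim=-1))  # (B, T, hid_dim)
        return self.head(h), h  # predicted next obs, state representations

# One training step on random trajectories as a smoke test
B, T, obs_dim, act_dim = 8, 20, 16, 4
obs = torch.randn(B, T, obs_dim)
act = torch.randn(B, T, act_dim)
model = NextObsPredictor(obs_dim, act_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

pred, states = model(obs[:, :-1], act[:, :-1])
loss = nn.functional.mse_loss(pred, obs[:, 1:])  # predict o_{t+1} from history
loss.backward()
opt.step()
print(loss.item(), states.shape)  # states could feed a downstream RL policy
```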

"The Functional Neural Process"
Paper link: https://arxiv.org/pdf/1906.08324.pdf
Recommended level: ★★★★
This paper proposes a new family of exchangeable stochastic processes called Functional Neural Processes (FNPs). An FNP is trained on a given dataset to model distributions over functions through a graph of dependencies on the latent representations of the data points.
In doing so, FNPs define a Bayesian model without explicitly positing a prior distribution over latent global parameters; instead, they adopt priors over the relational structure of the given dataset, which is a much simpler task.
The authors show how these models can be learned from data with mini-batch optimization, meaning they scale to large datasets, and describe how predictions for new points can be made through the posterior predictive distribution.
To evaluate FNPs, experiments were conducted on a toy regression task and on image classification. The results show that FNPs provide competitive predictions and more robust uncertainty estimates compared to parametric baselines.
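As a loose illustration of that last step, prediction through the posterior predictive distribution, here is a toy PyTorch sketch that Monte Carlo averages a likelihood head over latent samples. The sampler and the head are stand-ins we invented for illustration; they are not the FNP architecture itself.

```python
import torch

S, z_dim, x_dim, n_classes = 50, 8, 16, 10

# Stand-in likelihood head p(y* | x*, z): a linear map over [x*, z]
# (illustrative only, not the paper's model)
head = torch.nn.Linear(z_dim + x_dim, n_classes)

def posterior_predictive(x_star, num_samples=S):
    """p(y* | x*, D) ~= (1/S) * sum_s p(y* | x*, z_s), with z_s ~ q(z | D)."""
    probs = []
    for _ in range(num_samples):
        z = torch.randn(z_dim)                    # stand-in posterior sample
        logits = head(torch.cat([x_star, z]))
        probs.append(torch.softmax(logits, dim=-1))
    return torch.stack(probs).mean(0)             # MC average over latents

x_star = torch.randn(x_dim)
p = posterior_predictive(x_star)
print(p.sum().item(), p.argmax().item())          # averaged predictive dist.
```

Averaging probabilities over latent samples, rather than committing to a single point estimate, is what yields the more robust uncertainty estimates the paper reports.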
