CR-LSO: Convex Neural Architecture Optimization in the Latent Space of Graph Variational Autoencoder with Input Convex Neural Networks

In neural architecture search (NAS) methods based on latent space optimization (LSO), a deep generative model is trained to embed discrete neural architectures into a continuous latent space, where optimization algorithms that operate on continuous variables can be applied to search for architectures. However, optimizing the latent variables is challenging for gradient-based LSO because the mapping from the latent space to architecture performance is generally non-convex. To tackle this problem, this paper develops a convexity-regularized latent space optimization (CR-LSO) method, which regularizes the learning of the latent space so as to obtain a convex architecture performance mapping. Specifically, CR-LSO trains a graph variational autoencoder (G-VAE) to learn continuous representations of discrete architectures, while the learning of the latent space is simultaneously regularized by the guaranteed convexity of input convex neural networks (ICNNs). In this way, the G-VAE is forced to learn a convex mapping from architecture representations to architecture performance. CR-LSO then approximates the performance mapping with the ICNN and leverages the estimated gradient to optimize neural architecture representations. Experimental results on three popular NAS benchmarks show that CR-LSO achieves competitive results in terms of both computational complexity and architecture performance.
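As a concrete illustration of the convexity constraint the abstract relies on, the sketch below shows a minimal PyTorch-style ICNN: the layer-to-layer weights are kept non-negative (via a softplus reparameterization) and the activation is convex and non-decreasing, which together guarantee that the output is convex in the input. This is a generic ICNN in the spirit of the architecture the paper cites, not the authors' implementation; the names (`ICNN`, `dim_in`, `dim_hidden`), the softplus reparameterization, and the final latent-descent snippet are illustrative assumptions, and whether one ascends or descends the surrogate depends on the sign convention of the performance metric.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ICNN(nn.Module):
    """Input convex neural network: the output is convex in x because
    the z-path weights are constrained non-negative and softplus is a
    convex, non-decreasing activation."""

    def __init__(self, dim_in, dim_hidden, n_layers=2):
        super().__init__()
        # Unconstrained "skip" connections from the input to every layer.
        self.Wx = nn.ModuleList(
            [nn.Linear(dim_in, dim_hidden) for _ in range(n_layers)]
            + [nn.Linear(dim_in, 1)])
        # Layer-to-layer weights, made non-negative inside forward().
        self.Wz = nn.ModuleList(
            [nn.Linear(dim_hidden, dim_hidden, bias=False)
             for _ in range(n_layers - 1)]
            + [nn.Linear(dim_hidden, 1, bias=False)])

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            # F.softplus(Wz.weight) >= 0 preserves convexity in x.
            z = F.softplus(Wx(x) + F.linear(z, F.softplus(Wz.weight)))
        return z


# Hypothetical usage: one gradient step in the latent space, treating the
# ICNN output as a convex loss-style surrogate to be minimized. In CR-LSO
# the latent code would come from the G-VAE encoder and z_next would be
# decoded back into a discrete architecture.
icnn = ICNN(dim_in=16, dim_hidden=64)
z = torch.randn(1, 16, requires_grad=True)  # placeholder for an encoded architecture
icnn(z).sum().backward()
z_next = z - 1e-2 * z.grad                  # descent step on the convex surrogate
```

Because the surrogate is convex by construction, gradient steps of this form face no spurious local minima in the latent space, which is the property the regularized G-VAE training is meant to induce.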