🕸 Edge#165: AutoRegressive Networks
In this issue:
we discuss AutoRegressive Networks;
we explore DeepMind's PixelRNN and PixelCNN, two of the most important autoregressive models for image generation;
we overview MMGeneration, a new toolkit that simplifies the implementation of generative models.
Happy Valentine's Day! These are the last TWO days to subscribe with 50% OFF. We 🫀 and 🧠 you
💡 ML Concept of the Day: AutoRegressive Networks
Continuing our series about generative models, we would like to discuss one of their simplest forms. AutoRegressive (AR) models are a class of generative methods that have recently seen significant adoption, driven mostly by AI powerhouses like DeepMind and Meta. As their name indicates, AR models draw inspiration from time-series modeling: they generate each element of a sequence conditioned on the elements produced before it →learn more about AR
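To make the time-series intuition concrete, here is a minimal sketch of a classical AR(2) process, where each value is a weighted sum of the two previous values plus noise. The coefficients and noise scale below are illustrative choices, not taken from any specific model; neural AR networks apply the same one-step-at-a-time conditioning, just with a learned network instead of fixed linear weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ar2(n_steps, phi1=0.6, phi2=0.3, noise_std=0.1):
    """Sample from a toy AR(2) process: x_t = phi1*x_{t-1} + phi2*x_{t-2} + eps.

    Each step depends only on previously generated values, which is the
    core autoregressive idea that neural AR models generalize.
    (phi1, phi2, noise_std are hypothetical illustrative values.)
    """
    x = np.zeros(n_steps)
    for t in range(2, n_steps):
        x[t] = phi1 * x[t - 1] + phi2 * x[t - 2] + rng.normal(0.0, noise_std)
    return x

seq = sample_ar2(200)
```

Generation is inherently sequential: to produce step t you must first produce steps 0..t-1, which is also why sampling from neural AR models like PixelRNN is slow relative to GANs.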
🔎 ML Research You Should Know: DeepMind's PixelRNN and PixelCNN, two of the Most Important Autoregressive Models for Image Generation
One of the most important challenges for unsupervised generative techniques such as GANs or VAEs is building large models that are both tractable and scalable. RNNs are particularly well suited to this problem. In their 2016 paper "Pixel Recurrent Neural Networks," DeepMind introduced two architectures that apply autoregressive techniques to image generation. PixelRNN is a model →read our explanation
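The key trick that makes PixelCNN autoregressive is the masked convolution: the kernel is zeroed so that each output pixel can only see pixels above it and to its left in raster-scan order. Below is a minimal numpy sketch of that masking idea, not DeepMind's implementation; the `causal_mask` helper name is ours. A type "A" mask (used in the first layer) also hides the current pixel, while a type "B" mask (used in deeper layers) allows it.

```python
import numpy as np

def causal_mask(kernel_size, mask_type="A"):
    """Build a PixelCNN-style mask for a square conv kernel.

    Pixels strictly before the center (raster-scan order) are kept.
    Type "A" zeroes the center position too; type "B" keeps it.
    """
    k = kernel_size
    mask = np.ones((k, k))
    center = k // 2
    start = center if mask_type == "A" else center + 1
    mask[center, start:] = 0.0   # hide current pixel (A) and everything to its right
    mask[center + 1:, :] = 0.0   # hide all rows below the current one
    return mask

# In a real model the mask multiplies the conv weights before each forward pass:
# masked_weights = weights * causal_mask(3, "B")
```

With these masks stacked layer after layer, the network's output at pixel (i, j) is a function only of pixels generated earlier, so the product of the per-pixel conditionals is a valid autoregressive factorization of the image distribution.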
🤖 ML Technology to Follow: MMGeneration is a New Toolkit for Simplifying the Implementation of Generative Models
OpenMMLab is one of the most important open-source contributors in computer vision. Recently, they released MMGeneration, a new framework for implementing generative models in PyTorch. MMGeneration is a brand-new release, but it has the makings of an important framework in a market that lacks options for implementing generative models →subscribe to read the full article