Arsenii (Senya) Ashukha

I am a Research Scientist at the Samsung AI Center Moscow, supervised by Dmitriy Vetrov & Max Welling. My current work focuses on reinforcement learning and probabilistic deep learning. I am also a teaching assistant for several machine learning courses run by our research group at Moscow State University, the National Research University Higher School of Economics (HSE), and the Yandex School of Data Analysis.

Email  /  CV  /  Google Scholar  /  GitHub  /  Twitter  /  LinkedIn

Research

I'm interested in probabilistic inference, reinforcement learning and deep learning.

The Deep Weight Prior
Andrei Atanov*, Arsenii Ashukha*, Kirill Struminsky, Dmitriy Vetrov, Max Welling
International Conference on Learning Representations (ICLR), 2019
bibtex  /  code  /  poster

A generative model over the kernels of convolutional neural networks that serves as a prior distribution when training on new datasets.

Variance Networks: When Expectation Does Not Meet Your Expectations
Kirill Neklyudov*, Dmitry Molchanov*, Arsenii Ashukha*, Dmitriy Vetrov
International Conference on Learning Representations (ICLR), 2019
bibtex  /  code  /  poster

It is possible to learn a zero-centered Gaussian distribution over the weights of a neural network by learning only variances, and it works surprisingly well.
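A minimal sketch of the idea behind variance networks, assuming a single fully connected layer (the function and variable names here are illustrative, not from the paper's released code): the weights are zero-mean Gaussian, so only their log-variances are learned, and every forward pass draws a fresh sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def variance_layer(x, log_sigma2):
    """Zero-centered Gaussian weights: each forward pass samples
    w = sigma * eps with eps ~ N(0, 1), so the only learned
    parameters are the per-weight log-variances."""
    sigma = np.exp(0.5 * log_sigma2)       # (in_dim, out_dim)
    eps = rng.standard_normal(sigma.shape)
    w = sigma * eps                        # E[w] = 0 by construction
    return x @ w

x = np.ones((1, 4))
log_sigma2 = np.zeros((4, 3))              # start at unit variance
y = variance_layer(x, log_sigma2)
print(y.shape)                             # (1, 3)
```

Predictions are then obtained by averaging the stochastic outputs over several samples, as in other stochastic variational methods.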

Structured Bayesian Pruning via Log-Normal Multiplicative Noise
Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, Dmitriy Vetrov
Neural Information Processing Systems (NeurIPS), 2017
bibtex  /  code  /  poster  /  spotlight video

Structured sparsification of deep neural networks via stochastic variational inference.

Variational Dropout Sparsifies Deep Neural Networks
Dmitry Molchanov*, Arsenii Ashukha*, Dmitriy Vetrov
International Conference on Machine Learning (ICML), 2017
bibtex  /  code  /  poster  /  slides

Sparsification of deep neural networks via stochastic variational inference.
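A hedged sketch of the resulting pruning rule (assuming per-weight dropout rates have already been learned; the log-alpha threshold of 3 follows the paper's common choice, and all names here are illustrative): weights whose learned dropout rate is large carry almost no signal and can be zeroed out.

```python
import numpy as np

def sparsify(w, log_alpha, threshold=3.0):
    """Keep weights with small learned dropout rate (log alpha below the
    threshold); zero out the rest. Returns the sparse weights and mask."""
    mask = log_alpha < threshold
    return w * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4))            # toy trained weights
log_alpha = rng.uniform(0, 6, size=(4, 4)) # toy learned log dropout rates
w_sparse, mask = sparsify(w, log_alpha)
print(f"{1.0 - mask.mean():.0%} of weights pruned")
```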


This guy makes a nice webpage.