Deep Parametric Style Transfer

Published:

I spent the summer of 2018 with Adobe Research working on a deep network that transfers parametrised styles between images. Our end-to-end trainable system embeds a non-differentiable style renderer inside the network, which lets us supervise training with an image loss rather than a regression loss on the style parameters, as previous approaches did. We also adapted our method to be webly supervised by exploiting the large number of movie trailers available in the public domain.
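To make the image-loss idea concrete, here is a minimal PyTorch sketch of the general setup: a small network predicts style parameters, those parameters are pushed through a renderer, and the loss is computed on the rendered image rather than on the parameters themselves. Everything here is hypothetical and simplified; in particular, `render` is a toy *differentiable* stand-in (brightness/contrast/saturation), whereas the actual renderer in our system was non-differentiable and handled differently. `StyleParamPredictor` and the dummy data are illustrative only.

```python
import torch
import torch.nn as nn

class StyleParamPredictor(nn.Module):
    """Hypothetical CNN that predicts a small vector of style parameters
    (e.g. brightness, contrast, saturation) from an input image."""
    def __init__(self, num_params=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_params)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def render(image, params):
    """Toy differentiable stand-in for the style renderer: per-image
    brightness, contrast and saturation adjustments (not the real renderer)."""
    brightness, contrast, saturation = params.unbind(dim=1)
    b = brightness.view(-1, 1, 1, 1)
    c = contrast.view(-1, 1, 1, 1)
    s = saturation.view(-1, 1, 1, 1)
    out = (image - 0.5) * (1.0 + c) + 0.5 + b          # contrast + brightness
    gray = out.mean(dim=1, keepdim=True)               # per-pixel luminance
    out = gray + (out - gray) * (1.0 + s)              # saturation
    return out.clamp(0.0, 1.0)

# One training step: supervise with an image loss instead of a parameter loss.
predictor = StyleParamPredictor()
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-4)

content = torch.rand(4, 3, 64, 64)   # unstyled inputs (dummy batch)
target = torch.rand(4, 3, 64, 64)    # stylised ground-truth images (dummy)

params = predictor(content)          # predicted style parameters
stylised = render(content, params)   # apply them through the renderer
loss = nn.functional.l1_loss(stylised, target)  # loss on pixels, not on params

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The point of the sketch is the loss placement: because the gradient flows back through the renderer, the predictor is judged on how the final image looks, not on how closely it matches some ground-truth parameter vector.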

I created a small poster to present my work during the summer to a general audience; you can have a look at it here: