Wolfram Neural Net Repository
Immediate Computable Access to Neural Net Models
Turn a Monet-style painting into a photo
Released in 2017, this model uses a novel technique for image-to-image translation in which two models, one translating from domain A to domain B and one from B to A, are trained jointly in an adversarial setting. In addition to the adversarial losses, cycle consistency is enforced in the loss function: when the output of the first translator is fed into the second, the final result is encouraged to match the input of the first translator. This allows successful training for image translation tasks in which only unpaired training data can be collected. This model was trained to translate Monet-style paintings into photos.
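Concretely, if G denotes the translator from domain A to domain B and F the translator back, the cycle-consistency term described above is typically written as follows (a sketch of the standard formulation; the symbols G, F, the discriminators D_A, D_B and the weight λ are notation introduced here, not taken from this page):

\mathcal{L}_{\mathrm{cyc}}(G,F) = \mathbb{E}_{x}\!\left[\lVert F(G(x)) - x \rVert_1\right] + \mathbb{E}_{y}\!\left[\lVert G(F(y)) - y \rVert_1\right]

and the full training objective adds this term to the two adversarial losses:

\mathcal{L}(G,F,D_A,D_B) = \mathcal{L}_{\mathrm{GAN}}(G,D_B) + \mathcal{L}_{\mathrm{GAN}}(F,D_A) + \lambda\,\mathcal{L}_{\mathrm{cyc}}(G,F)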
Number of layers: 94 | Parameter count: 2,855,811 | Trained size: 12 MB
Wolfram Language 11.3 (March 2018) or above
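A minimal usage sketch in the Wolfram Language, assuming the model is published under the resource name "CycleGAN Monet-to-Photo Translation" (the exact name string and the image path are assumptions for illustration):

(* retrieve the trained net from the Wolfram Neural Net Repository; resource name assumed *)
net = NetModel["CycleGAN Monet-to-Photo Translation"];

(* apply the net to a Monet-style painting; replace the path with your own image *)
painting = Import["path/to/monet-painting.jpg"];
photo = net[painting]

The net takes an image as input and returns the translated photo-style image directly, so no additional pre- or post-processing code is needed beyond loading the input image.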