CycleGAN Monet-to-Photo Translation

Turn a Monet-style painting into a photo

Released in 2017, this model exploits a novel technique for image translation, in which two models translating from A to B and vice versa are trained jointly with adversarial training. In addition to the adversarial loss, cycle consistency is also enforced in the loss function: when the output of the first translator is fed into the second, the final result is encouraged to match the input of the first translator. This allows successful training for image translation tasks in which only unpaired training data can be collected. This model was trained to translate Monet-style paintings into photos.
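
For reference, the cycle-consistency term described above can be written as in the original CycleGAN paper (Zhu et al., 2017), with translators G: X -> Y and F: Y -> X, discriminators D_X and D_Y, and a weight λ balancing the cycle term against the two adversarial terms:

\mathcal{L}_{\mathrm{cyc}}(G,F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\lVert F(G(x)) - x \rVert_1\right] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}\left[\lVert G(F(y)) - y \rVert_1\right]

\mathcal{L}(G,F,D_X,D_Y) = \mathcal{L}_{\mathrm{GAN}}(G,D_Y,X,Y) + \mathcal{L}_{\mathrm{GAN}}(F,D_X,Y,X) + \lambda\,\mathcal{L}_{\mathrm{cyc}}(G,F)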

Number of layers: 94 | Parameter count: 2,855,811 | Trained size: 12 MB

Training Set Information

Examples

Resource retrieval

Get the pre-trained net:

In[1]:=
NetModel["CycleGAN Monet-to-Photo Translation"]
Out[1]=
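
As an illustrative usage sketch (img is a placeholder for a Monet-style input painting and is not defined on this page), the net can be applied directly to an image to produce the translated photo:

(* img stands for any Monet-style input painting; illustrative only *)
photo = NetModel["CycleGAN Monet-to-Photo Translation"][img]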

Adapt to any size

Automatic image resizing can be avoided by replacing the net encoder and decoder. First get the net:

In[3]:=
net = NetModel["CycleGAN Monet-to-Photo Translation"]
Out[3]=

Create a new net encoder with the dimensions of the input image img, then attach it, together with an image decoder, to the net:

In[5]:=
netEnc = NetEncoder[{"Image", ImageDimensions[img]}]
Out[5]=
In[6]:=
resizedNet = NetReplacePart[
  net, {"Input" -> netEnc, "Output" -> NetDecoder[{"Image"}]}]
Out[6]=
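
The resized net can then be evaluated at the image's native resolution; a minimal sketch using the same placeholder img:

(* translate the painting without any automatic resizing *)
resizedNet[img]
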
Net information

Inspect the sizes of all arrays in the net:

In[8]:=
NetInformation[
 NetModel["CycleGAN Monet-to-Photo Translation"], "ArraysSizes"]
Out[8]=

Obtain the total number of parameters:

In[9]:=
NetInformation[
 NetModel["CycleGAN Monet-to-Photo Translation"], "ArraysTotalElementCount"]
Out[9]=

Obtain the layer type counts:

In[10]:=
NetInformation[
 NetModel["CycleGAN Monet-to-Photo Translation"], "LayerTypeCounts"]
Out[10]=

Display the summary graphic:

In[11]:=
NetInformation[
 NetModel["CycleGAN Monet-to-Photo Translation"], "SummaryGraphic"]
Out[11]=

Export to MXNet

The pre-trained net can be exported to the MXNet format:
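
The jsonPath used in the next cell refers to the exported MXNet JSON file; a minimal sketch of the export step, in which the destination net.json under $TemporaryDirectory is an assumption:

(* export the net in the MXNet format; the destination path is illustrative *)
jsonPath = Export[
  FileNameJoin[{$TemporaryDirectory, "net.json"}],
  NetModel["CycleGAN Monet-to-Photo Translation"], "MXNet"]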

Represent the MXNet net as a graph:

In[16]:=
Import[jsonPath, {"MXNet", "NodeGraphPlot"}]
Out[16]=

Requirements

Wolfram Language 11.3 (March 2018) or above

Resource History

Reference

J.-Y. Zhu, T. Park, P. Isola and A. A. Efros, "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks," arXiv:1703.10593 (2017)
