Wappu

Participate in training of Wappu AI!

Have you always wanted to see Wappu through the eyes of Artificial Intelligence? Now you have the chance.

During Wappu, we are training artificial intelligence to produce Wappu-like pictures in two categories.

One AI is trained with pictures of Teekkari dipping and another with pictures from the Doges of Wappu event. The AIs slowly learn to produce pictures that resemble the training pictures fed to them. Pictures produced by these AIs are added to this page.

It is interesting to see how these AIs evolve and what kind of pictures they produce in different phases of training. At first the generated pictures are just noise, but eventually identifiable shapes and figures should emerge. More about the technologies below.

WHEN?

The AIs are trained, training material is collected, and the generated pictures are published during 15.4.-1.5.2019.

How?

You can upload your own pictures of dogs on this page.

Uploaded pictures are used as training material for the AI, so they affect the generated pictures.

Why?

Because it is easy and fun!

You can participate in training the AI and see how the pictures you sent have affected the pictures it produces.

UPLOADING PICTURES

The Doges of Wappu category is now being trained and new pictures can no longer be sent. Thank you to everyone who sent pictures!

Just select the pictures you want to send and then press the Upload button. Easy!

It is also possible to take a picture with a mobile device. This way you can easily take a new photo in case there happens to be a dog nearby.

You should only send pictures of dogs; a frontal view works best for this purpose. All other pictures will be filtered out! JPG, JPEG and PNG formats are supported, and the maximum file size for a single picture is 10 MB.
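
For the curious, here is a minimal sketch of the kind of server-side check these rules translate to; the function and constant names are hypothetical and not the actual upload code:

```python
import os

# Hypothetical check mirroring the upload rules above:
# only JPG/JPEG/PNG files up to 10 MB are accepted.
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png"}
MAX_FILE_SIZE_BYTES = 10 * 1024 * 1024  # 10 MB

def is_acceptable_upload(path: str) -> bool:
    """Return True if the file matches the allowed formats and the size limit."""
    extension = os.path.splitext(path)[1].lower()
    return extension in ALLOWED_EXTENSIONS and os.path.getsize(path) <= MAX_FILE_SIZE_BYTES
```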

WAPPU AI GALLERY

Pictures generated by the Wappu AIs can be found below. The latest pictures are shown first, and pictures can be enlarged by clicking them.

New pictures are published daily during Wappu! Pictures generated by the Wappu AIs are free for non-commercial use! (License: Creative Commons CC BY-NC 4.0)

Teekkari dipping

Training progress 100%: Happy May Day, from the Wappu AI! It took over 3 days of training to get this far. With longer training the pictures might have gotten clear faces
Training progress 93%: Again! Water is gushing out of the dipping basket like a... stream! TIY also seems to be involved somehow. The Wappu-AI didn't say what it means because neural networks don't explain their decisions
Training progress 89%: A Teekkari cap can also float in the air! The picture is a little less clear than the previous one which is normal because training a neural network doesn't progress completely linearly towards better quality
Training progress 88%: Holding Teekkari caps in the air is a phenomenon that Wappu-AI has noticed
Training progress 83%: The water is beginning to look quite realistic with its tiny waves and reflections
Training progress 82%: Back in the water, it's not that cold! The Wappu-AI is doing its best to also generate a sponsor ad but neural networks often have problems with text
Training progress 74%: The background is starting to form
Training progress 64%: The dipping basket has gotten edges and lifted off! Even people are now separated from each other
Training progress 20%: It is a dipping basket, outlines are still missing
Training progress 10%: It's definitely starting to look like a Teekkari dipping basket in water
Training progress 6%: Wow, Teekkari caps are starting to form
Training progress 3%: The colors are beginning to form clusters with better definition
Training progress 2%: Where are these pixels coming from?! There seems to be some blue water or an advertisement in the corner?
Training progress 1%: By squinting your eyes a bit, you can clearly see yellow dipping shirts and red-white-striped barrels!
Training progress: under 1%. There are more pixels and their color has dimmed a little. Maybe tomorrow the picture will have something vaguely recognizable 😉
Training progress: under 1%. The PGGAN network grows the resolution used in training and generation from 16 pixels at the beginning up to the full size of 256x256.
Training progress: 0%. As promised, without training the generated images are just noise produced from random numbers.

Doges of Wappu

Wuff! - Doge AI
Perhaps it wasn't a bear
I can see a rabbit and a bear with a long tongue
Let's start the Doges of Wappu series a bit more swiftly so we can get more doges posted! Every pic has four doggos.

Technical facts

If you are interested in what happens under the hood, here is a short technical brief.

The base training material has been collected from TT-kamerat's Wappu pictures. Thanks to TT-kamerat and TTYY! The pictures can be viewed at https://ttyy.kuvat.fi/kuvat. The Teekkari dipping pictures are from 2018 and the Doges of Wappu pictures from 2017 and 2018.

The Wappu AIs are trained in a Google Cloud environment with four NVIDIA Tesla P100 GPUs. Training an AI requires a lot of processing power, and depending on the task at least one high-end GPU is needed; especially when the training material consists of pictures, CPU-only systems should not be used. Using these P100 GPUs costs around $1.5 / hour / GPU. One Wappu AI requires a few days of training even with these four powerful GPUs. After that we are able to generate good 256×256 pictures. Higher resolution extends the training time considerably.
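
As a rough back-of-the-envelope calculation based on the figures above (the price and duration are approximations, not exact billing numbers):

```python
# Approximate GPU cost for one Wappu AI, using the figures mentioned above:
# four Tesla P100 GPUs at roughly $1.5 per GPU-hour, training for about 3 days.
gpu_count = 4
price_per_gpu_hour = 1.5   # USD, approximate
training_days = 3          # the Teekkari dipping model trained a bit over 3 days

training_hours = training_days * 24
total_cost = gpu_count * price_per_gpu_hour * training_hours
print(f"{training_hours} hours of training, roughly ${total_cost:.0f} in GPU time")
# -> 72 hours of training, roughly $432 in GPU time
```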

The pictures used as training material are fed to a GAN (Generative Adversarial Network). A GAN is a combination of two neural networks that train each other (more information: https://en.wikipedia.org/wiki/Generative_adversarial_network). In contrast to many other types of neural networks, it is possible to achieve good results with GANs even when the amount of training material is small. One downside is that training a GAN is computationally very heavy. As the base implementation of the Wappu AIs, a PGGAN network is used: Progressive Growing of GANs for Improved Quality, Stability, and Variation (Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen), git.
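
The "progressive growing" in PGGAN means that training starts at a very low resolution and the resolution is doubled in stages until the full size is reached; for the Wappu AIs this means growing from 16 pixels up to 256×256, as mentioned in the gallery captions. A small sketch of that schedule:

```python
# Sketch of the progressive growing schedule used by PGGAN-style training:
# the image resolution is doubled in stages from the starting size to the target.
def progressive_resolutions(start: int = 16, target: int = 256):
    resolution = start
    while resolution <= target:
        yield resolution
        resolution *= 2

print(list(progressive_resolutions()))  # [16, 32, 64, 128, 256]
```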

A GAN is trained only with a positive dataset and does not need negative examples (i.e. pictures that do not represent Wappu). One of the neural networks tries to produce new Wappu-like pictures and the other identifies whether given pictures are real or generated. By competing with each other, both neural networks evolve at the same time to get better at their respective tasks.
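
Below is a minimal, purely illustrative sketch of this adversarial training loop in PyTorch; the actual Wappu AIs use the much heavier PGGAN implementation linked above, but the competition between the two networks works the same way.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator: the generator maps random noise to a flat
# 16x16 "image", the discriminator outputs a real/fake score (a logit).
latent_dim, image_pixels = 64, 16 * 16
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_pixels), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_pixels, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

# Stand-in for a batch of real training pictures (values scaled to [-1, 1]).
real_images = torch.rand(32, image_pixels) * 2 - 1

for step in range(100):
    # 1) Train the discriminator: real pictures should score 1, generated ones 0.
    fake_images = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_images), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: try to fool the discriminator into scoring fakes as 1.
    g_loss = loss_fn(discriminator(generator(torch.randn(32, latent_dim))), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```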

Images generated by a GAN are usually quite small in resolution because the training process requires a lot of processing power and resolution heavily affects this. That is why we have additionally used an RDN (Residual Dense Network). It makes it possible to increase image resolution, just like in crime scene investigation series where additional pixels are magically added to pictures without hurting image quality (more information: https://en.wikipedia.org/wiki/Super-resolution_imaging). The RDN network is trained with high-resolution pictures and automatically downscaled copies of them, and it does not require separate negative example pictures either. In reality, adding pixels is not as fancy as described above: the more the resolution is increased, the more weird artifacts the picture will have. For example, pictures start looking more cartoonish as details smooth out and contours start blending into each other.
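
A sketch of how such a super-resolution training pair can be built from a single high-resolution picture; the filename is a placeholder and the real RDN training pipeline is naturally more involved:

```python
from PIL import Image

# The low-resolution network input is simply an automatically downscaled copy
# of the high-resolution target picture, so no manual labeling is needed.
def make_training_pair(path: str, scale: int = 4):
    hr = Image.open(path).convert("RGB")
    lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
    return lr, hr  # network input, network target

lr_image, hr_image = make_training_pair("wappu_photo.jpg")  # placeholder filename
```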

It is worth noting that these neural networks do not require manual labeling or annotation of the datasets, which is very common when training a model with supervised learning (https://en.wikipedia.org/wiki/Supervised_learning). This makes using and updating training datasets much faster. GAN networks fall directly under unsupervised learning (https://en.wikipedia.org/wiki/Unsupervised_learning). The Wappu AIs utilize a pretrained RDN network, so there is no need to train this network again and it can be used directly as a magnifying tool. It should be mentioned that the training material for the RDN network should be similar to the pictures it is used to magnify. If the enhanced pictures are completely different from the ones used in training, magnification will add more of those weird-looking artifacts to the enhanced picture.