Participate in training of Wappu AI!
Have you always wanted to see Wappu through the eyes of artificial intelligence? Now you have the chance.
During Wappu, we are training artificial intelligences to produce Wappu-themed pictures in two categories.
One AI is trained with pictures of Teekkari dipping and the other with pictures from the Doges of Wappu event. The AIs gradually learn to produce pictures that resemble the training material fed to them. Pictures produced by the AIs are added to this page.
It is interesting to see how these AIs evolve and what kind of pictures they produce at different phases of training. The generated pictures are at first pure noise, but eventually identifiable shapes and figures should emerge. More about the technologies below.
The AIs are trained, the material collected and the generated pictures published during 15.4.–1.5.2019.
When and how?
You can upload your own pictures of dogs on this page.
Uploaded pictures are used as training material for the AI, so they affect the generated pictures.
Because it is easy and fun!
You can participate in training the AI and see how the pictures you sent have affected the pictures the AI produces.
The Doges of Wappu category is now being trained and new pictures can no longer be sent. Thank you to everyone who sent pictures!
Just select the pictures you want to send and press the Upload button. Easy!
It is also possible to take a picture with a mobile device. This way you can easily take a new photo if there happens to be a dog nearby.
You should only send pictures of dogs. A front-facing angle works best for this purpose. All other pictures will be filtered out! JPG, JPEG and PNG formats are supported, and the maximum file size for a single picture is 10 MB.
WAPPU AI GALLERY
Pictures generated by the Wappu AIs can be found below. The latest pictures are shown first. Click a picture to magnify it.
New pictures are published daily during Wappu! Pictures generated by the Wappu AIs are free for non-commercial use (license: Creative Commons CC BY-NC 4.0).
If you are interested in what happens under the hood, here is a short technical brief.
The base training material has been collected from TT-kamerat's Wappu pictures. Thanks to TT-kamerat and TTYY! The pictures can be viewed at https://ttyy.kuvat.fi/kuvat. The Teekkari dipping pictures are from 2018 and the Doges of Wappu pictures from 2017 and 2018.
The Wappu AIs are trained in a Google Cloud environment with four NVIDIA Tesla P100 GPUs. Training an AI requires a lot of processing power, and depending on the task at least one high-end GPU is needed; especially when the training material consists of pictures, CPU-only systems are impractical. Using these P100 GPUs costs around $1.5 per hour per GPU. Training one Wappu AI takes a few days even with these four powerful GPUs. After that we are able to generate decent 256×256 pictures. Higher resolutions extend the training time remarkably.
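To put the figures above together, here is a rough back-of-the-envelope cost estimate. The three-day duration is our assumption for "a few days"; the $1.5/hour/GPU price and the four GPUs come from the text.

```python
# Rough training-cost estimate for one Wappu AI.
gpus = 4                 # NVIDIA Tesla P100s, as described above
price_per_gpu_hour = 1.5 # USD, approximate Google Cloud price quoted above
training_days = 3        # assumption for "a few days"

hours = training_days * 24
cost = gpus * price_per_gpu_hour * hours
print(f"~${cost:.0f} for one {training_days}-day training run")  # ~$432
```

Doubling the target resolution would multiply this further, which is why the training stops at 256×256.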
The pictures used as training material are fed to a GAN (Generative Adversarial Network), a combination of two neural networks that train each other (more information: https://en.wikipedia.org/wiki/Generative_adversarial_network). In contrast to many other types of neural networks, GANs can achieve good results even when the amount of training material is small. One downside is that training a GAN is computationally very heavy. The Wappu AIs are based on the PGGAN network: Progressive Growing of GANs for Improved Quality, Stability, and Variation (Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen), git.
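The "progressive growing" in PGGAN means the networks start at a very low resolution and double it in stages until the target is reached (the 4×4 starting point follows the PGGAN paper). A small sketch of that resolution schedule for our 256×256 target:

```python
# PGGAN-style resolution schedule: start small and double until the
# target resolution is reached (4x4 start as in the PGGAN paper).
def pggan_schedule(target=256, start=4):
    stages = []
    res = start
    while res <= target:
        stages.append(res)
        res *= 2
    return stages

print(pggan_schedule(256))  # [4, 8, 16, 32, 64, 128, 256]
```

Each stage trains at its resolution before new layers are faded in for the next one, which is a large part of why the full run takes days.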
The GAN is trained only with a positive dataset; it does not need negative examples (pictures that do not represent Wappu). One of the two networks tries to produce new Wappu-like pictures while the other tries to tell whether a given picture is real or generated. By competing with each other, both networks improve at their respective tasks at the same time.
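The alternating competition can be illustrated with a deliberately tiny stand-in: here the "real" data are just numbers near 4.0, the generator is a single learnable offset added to noise, and the discriminator is a logistic score. This is an illustration of the update scheme only, not the actual PGGAN networks.

```python
import numpy as np

rng = np.random.default_rng(0)

real_mean = 4.0      # "real" samples cluster here
g_offset = 0.0       # the generator's only parameter
d_w, d_b = 1.0, 0.0  # the discriminator's parameters
lr = 0.05

def d_score(x):
    """Discriminator's estimated probability that x is a real sample."""
    return 1.0 / (1.0 + np.exp(-(d_w * x + d_b)))

for _ in range(500):
    real = rng.normal(real_mean, 0.5)
    fake = rng.normal(0.0, 1.0) + g_offset

    # Discriminator step: push score(real) toward 1, score(fake) toward 0.
    err_real = d_score(real) - 1.0
    err_fake = d_score(fake) - 0.0
    d_w -= lr * (err_real * real + err_fake * fake)
    d_b -= lr * (err_real + err_fake)

    # Generator step: nudge the offset so fakes fool the discriminator
    # (push score(fake) toward 1).
    g_offset -= lr * (d_score(fake) - 1.0) * d_w

print(f"generator offset after training: {g_offset:.2f}")
```

The generator's offset drifts toward the real data's neighbourhood purely because the discriminator keeps telling it apart; neither network ever sees a labeled "negative" dataset.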
Images generated by a GAN are usually quite small in resolution, because the training process requires a lot of processing power and resolution affects this heavily. This is why we have additionally used an RDN (Residual Dense Network). It makes it possible to increase image resolution, just like in crime scene investigation series where pixels are magically added to a picture without degrading it (more information: https://en.wikipedia.org/wiki/Super-resolution_imaging). The RDN is trained with high-resolution pictures and automatically downscaled copies of them; this network does not require separate negative example pictures either. In reality, adding pixels is not as fancy as described above: the more the resolution is increased, the more odd artifacts the picture will have. For example, pictures start to look more cartoonish as details smooth out and contours blend into each other.
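The key trick in that training setup is that the (low-resolution, high-resolution) pairs are produced automatically from high-resolution images alone. A minimal sketch, using average-pooling as one simple choice of downscaling (the actual preprocessing may differ):

```python
import numpy as np

def downscale(img, factor=2):
    """Average-pool downscale: each factor x factor block of pixels
    becomes one pixel. Applied to a high-resolution image, this yields
    a (low-res, high-res) training pair with no manual labeling."""
    h, w = img.shape[:2]
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

high_res = np.random.rand(256, 256, 3)  # stand-in for a real photo
low_res = downscale(high_res, factor=4)
print(high_res.shape, low_res.shape)    # (256, 256, 3) -> (64, 64, 3)
```

The super-resolution network then learns the inverse mapping, from the downscaled copy back toward the original, which is also why it invents detail when asked to magnify beyond what it saw in training.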
It is worth noticing that these neural networks do not require manually labeled or annotated datasets, which is very common when training a model with supervised learning (https://en.wikipedia.org/wiki/Supervised_learning). This makes using and updating training datasets much faster. GAN networks fall directly under unsupervised learning (https://en.wikipedia.org/wiki/Unsupervised_learning). The Wappu AIs use a pretrained RDN, so there is no need to train this network again and it can be used directly as a magnifying tool. It should be mentioned that the training material of an RDN should resemble the pictures it is used to magnify: if the enhanced pictures are completely different from those used in training, magnifying will add more of those weird-looking artifacts to the result.