A little while ago, someone tried using neural-doodle to upscale pixel art. Champandard added some tweaks to improve support for the process, and now we're off to the races. I've been running my own experiments, as you can see above. It doesn't work for every texture, and there are still a few occasional issues, but the combinations that do work are pretty astonishing.

NeuralDoodle generally takes a content image and a style image, and translates the content into the style. You can also give it just the style image and seed data and have it synthesize a new image based on what it sees in the original style image. Here's a blue rock texture I made with Tilemancer getting interpreted as vines:

And here are some more textures that used that same blue rock image as a seed. These were done with just a style image (the photographs, in this case) and a seed image (the pixel art blue rock), but no content image.

Also using a content image results in a final output that more strongly resembles the content image, though with more discontinuities if there's too great a mismatch between the content and style. You can see how this one doesn't quite work:

These, on the other hand, came out much better:

As you can see, if there's a huge mismatch between the content and the style, you may have trouble seeing the content in the result (though playing with the settings can sometimes find a better blend). But they still make for some pretty good images anyway. Now, there's a virtuoso pointillism to the practice of pixel art, so this isn't going to replace all pixel art.
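For context on what "translating content into style" means under the hood: neural-doodle descends from patch-based style transfer (matching local neural patches, optionally guided by semantic maps), but the whole family of tools shares one core idea: "style" is captured as statistics of convolutional feature maps, and the output image is optimized until its statistics match the style image's. The classic Gatys-style formulation measures those statistics with Gram matrices. Here's a minimal NumPy sketch of that style loss; the feature maps are random stand-ins for real network activations, not anything neural-doodle actually computes:

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) activation map from one conv layer
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Channel-by-channel correlations, normalized by the number of spatial positions
    return f @ f.T / (h * w)

def style_loss(gen_features, style_features):
    # Mean squared difference between the two Gram matrices
    g1 = gram_matrix(gen_features)
    g2 = gram_matrix(style_features)
    return float(np.mean((g1 - g2) ** 2))

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16, 16))
print(style_loss(a, a))  # 0.0 — identical features give identical statistics
```

In a real style-transfer loop this loss (summed over several layers, alongside a content loss) is what gradient descent drives down as it updates the output image's pixels.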