Pokemon BW Intro 1 Via Quadtrees+Palettes

by Geotale

Created: Mar 22, 2022 Last modified: May 2, 2022 Shared: Mar 22, 2022

Description

(Continuing from the end of the instructions -- TIL there's a character limit that can realistically be reached in these things; the instructions are currently at the maximum of 5,000 characters.) I assure you this is easier than it sounds, but I wanted enough documentation for anyone who wants to go through the horribly tedious process of making something like this for themselves. I don't feel like going through the decoding process -- just reverse the binary encoding process lol. You could also look inside the project if you need to. This isn't nearly as intense towards the 5 MB limit as my other quadtree projects, but meh.

Beat this, 3D engines >:)

Instructions

https://turbowarp.org/664177303/fullscreen?turbo&hqpen&stuck

Just click the green flag, wait for the project to load, and watch. X/Y intro 1 is my favorite intro, but it's horrible to encode. I guess nostalgia is fine too :P

Wow, that's a funny and intuitive title. Not that the way it works is any less intuitive, but I'll of course go in full depth below. This doesn't really do too much more, but it still does something very useful: adding palettes, and making sure some other things (such as centering the video when encoding) didn't actually save space. Palettes take a bit to encode, but are *super* useful. This project uses a total of only 256 colors (compared to the ~900,000 used in the original video at a 240x180, non-quadtree resolution!), and as you can see, they match quite well, leaving only small artifacts like some grays not being absolutely perfect.

Into how this is encoded, more technically: the video is placed in the top-left of the screen in a 256x256 area (the video is computed at 240x180, and 256x256 is the smallest power-of-two square that fits this rectangle). The following is done to decide how squares should be computed, where "good enough" or "acceptable" means that, given the current "quality" value, the value you're testing is less than or equal to the quality (see the sketch after this list):

- If the current position of the square is off of the screen, return that this node has no children and doesn't draw anything.
- If the current size of the square is 1:
- - If the difference between the previous frame's and current frame's pixel is acceptable, don't draw anything (this is cheaper).
- - Otherwise, take the color of the pixel, add it to the list of colors that have been used (needed later for palettes), and return that this node has no children, with its color being the exact color of the current pixel.
- For every pixel in the square that's actually drawn(!), sum up the difference between the previous and current pixel's color, noting how many pixels are in the actual square.
- If the previous frame is good enough, return that this node has no children and doesn't fill in anything.
- Get the average color of the current square.
- Get the sum of the differences between the current frame at each pixel in the square and that average color.
- If that value divided by the number of pixels (i.e. the average error) isn't good enough:
- - Check if the previous frame's error is acceptable. If it is, return that this node has no children and doesn't draw anything.
- - Otherwise, return that this node has four children, and compute the new top-left, top-right, bottom-left, and bottom-right squares/nodes in that order (this is the order of the encoding), with the size of the squares cut in half and the "quality" multiplied by some constant.
- If the value was good enough:
- - If the previous frame's error is lower than the average color's error, return that this node has no children and doesn't draw anything. (You could just check if the previous frame was acceptable, but that ends up with more artifacts.)
- - Otherwise, add the average color to the list of colors used in the tree, and return that this node has no children and fills with the color that represents the average of the square.

Start this process from the top-left corner, with the full screen as the square size and the initial "quality" you chose.
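Here is a rough, simplified Python sketch of that decision process (the real encoder isn't shown in this project's code, so this is only an illustration). `frame` and `prev` are assumed to be 240x180 row-major lists of RGB tuples; `color_diff`, `QUALITY_FACTOR`, and the dict-based node representation are illustrative names, not taken from the project:

```python
# Illustrative sketch of the per-square decision above (not the project's actual encoder).
VIDEO_W, VIDEO_H = 240, 180
QUALITY_FACTOR = 1.5          # assumed: "the quality multiplied by some constant" on a split

def color_diff(a, b):
    # Simple per-channel absolute difference between two RGB tuples
    return sum(abs(x - y) for x, y in zip(a, b))

def average(colors):
    n = len(colors)
    return tuple(sum(c[i] for c in colors) // n for i in range(3))

def encode_square(frame, prev, x, y, size, quality, used_colors):
    # Entirely off-screen: leaf node, draws nothing
    if x >= VIDEO_W or y >= VIDEO_H:
        return {"children": None, "color": None}

    # Single pixel: keep the previous frame's pixel if it is close enough
    if size == 1:
        if color_diff(frame[y][x], prev[y][x]) <= quality:
            return {"children": None, "color": None}
        used_colors.append(frame[y][x])
        return {"children": None, "color": frame[y][x]}

    # Only the pixels of this square that are actually drawn (on-screen)
    pixels = [(px, py) for py in range(y, min(y + size, VIDEO_H))
                       for px in range(x, min(x + size, VIDEO_W))]
    n = len(pixels)

    # Previous frame already close enough over the whole square: draw nothing
    prev_error = sum(color_diff(frame[py][px], prev[py][px]) for px, py in pixels)
    if prev_error / n <= quality:
        return {"children": None, "color": None}

    # Compare the square against its average color
    avg = average([frame[py][px] for px, py in pixels])
    avg_error = sum(color_diff(frame[py][px], avg) for px, py in pixels)

    if avg_error / n > quality:
        # Not flat enough: split into four children (TL, TR, BL, BR) with a looser quality
        half, q = size // 2, quality * QUALITY_FACTOR
        children = [encode_square(frame, prev, cx, cy, half, q, used_colors)
                    for cx, cy in ((x, y), (x + half, y),
                                   (x, y + half), (x + half, y + half))]
        return {"children": children, "color": None}

    # Flat enough: keep the previous frame if it is even closer, else fill with the average
    if prev_error <= avg_error:
        return {"children": None, "color": None}
    used_colors.append(avg)
    return {"children": None, "color": avg}
```

A frame would then be encoded with something like `encode_square(frame, prev, 0, 0, 256, initial_quality, used_colors)`, with `used_colors` later feeding the palette step described below.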
Not the least complicated process in the world, but man, it works well -- of course, this is just added to the frame data, and the new "previous frame" is set to the result of drawing this new data.

For the palette computation, I use that table of all colors used that was generated earlier, and simply run K-means clustering with 2^(some number of bits) as the total number of clusters, or colors in the palette (here, 8 bits, so 256 colors).

Finally, to encode these nodes to bits, for every frame joined together, starting with the top node (see the sketch after this list):

- If this node has no children:
- - If the square is larger than a 1x1 pixel, add "0" to the result.
- - If the square is off of the rendered section of the video, just continue to the next node in this process.
- - Append whether or not this node is drawn (where "1" is drawn, "0" is not) to the output.
- - If the square is not drawn, continue to the next node in the process.
- - Find the closest color to the current square's in the generated palette.
- - Append the index of that color in the palette to the result.
- - Continue to the next node in the process.
- If the node does have children:
- - Compute the binary of all child squares.
- - If none of the child squares have any children (not needed, but faster to compute) and all of their binary is equal, just return the value of one of the squares (with "0" appended to the front if the child square would be 1x1).
- - If you can't merge them all into one large square, append a "1" to the result, append all child square binary to the result in order, then continue to the next node in the process.
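A hedged Python sketch of that serialization, reusing the illustrative node dicts from the previous sketch; `palette` is assumed to be the 256-entry list of K-means centers, and `PALETTE_BITS`, `closest_palette_index`, and `node_to_bits` are made-up names, not the project's:

```python
# Illustrative sketch of turning one frame's quadtree into a bit string.
PALETTE_BITS = 8
VIDEO_W, VIDEO_H = 240, 180

def closest_palette_index(color, palette):
    # Index of the palette entry with the smallest per-channel difference
    return min(range(len(palette)),
               key=lambda i: sum(abs(a - b) for a, b in zip(palette[i], color)))

def node_to_bits(node, x, y, size, palette):
    if node["children"] is None:
        # Leaf node
        bits = "0" if size > 1 else ""          # "no children" flag (implied for 1x1 squares)
        if x >= VIDEO_W or y >= VIDEO_H:
            return bits                         # off-screen: nothing else is stored
        if node["color"] is None:
            return bits + "0"                   # not drawn
        index = closest_palette_index(node["color"], palette)
        return bits + "1" + format(index, "0{}b".format(PALETTE_BITS))

    # Branch node: serialize the four children in TL, TR, BL, BR order
    half = size // 2
    offsets = ((x, y), (x + half, y), (x, y + half), (x + half, y + half))
    child_bits = [node_to_bits(child, cx, cy, half, palette)
                  for child, (cx, cy) in zip(node["children"], offsets)]

    # If all four children are leaves with identical encodings, collapse them into
    # one larger leaf (prepending the "0" flag a 1x1 child would have omitted)
    if all(c["children"] is None for c in node["children"]) and len(set(child_bits)) == 1:
        return ("0" if half == 1 else "") + child_bits[0]
    return "1" + "".join(child_bits)
```

The full stream would then just be the concatenation of `node_to_bits(root, 0, 0, 256, palette)` over every frame's root node.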
