☁️ Audio Renderer (custom audio upload) ✦ plays any audio YOU upload

by TimMcCool

👁 15,922 ❤️ 1,206 ⭐ 1,081 🔄 16
Created: Jun 11, 2023 Last modified: Oct 6, 2024 Shared: Sep 2, 2024

Instructions

(09/20/2024) Improved stereo quality (fixed a bug)

On my YouTube audio player there were problems with the YouTube downloading API. Therefore I made this project, which can play any audio that YOU upload on an external website.

Try it out: https://turbowarp.org/864001370?clones=Infinity&limitless&fps=120

- Double click the green flag.
- Recommended browser: Firefox (much better bass quality than Chrome for some reason).
- The project displays a link to a website where you can upload any audio file. The file is then converted, sent back to the Scratch project via cloud variables and played in the Scratch project.
- The uploaded audio file is deleted from the backend when you leave the Scratch project.
- The project might not play audio on all devices (reducing the quality settings sometimes fixes this). I highly recommend using TurboWarp for better quality. Run the project at 120 FPS and with infinite clones enabled (done automatically when using the link above).

➥ Notes:
Press [space] to change the visualizer style.
Can be used by multiple people simultaneously (no hard-coded limit).
When using this project on TurboWarp, select the "US East" cloud server (selected by default).
This is NOT a content sharing platform. You are NOT able to see the audio uploaded by other people.
Getting this to work flawlessly and efficiently took longer than expected. The backend code will be published on GitHub soon.
I think there's still room for optimization in terms of sound quality.

➥ Explanations:
A Python bot listens to TurboWarp's cloud variables and detects requests (made with my Python library scratchattach). When a request is received, the audio is converted to WAV (using ffmpeg), analyzed with ARSS (https://arss.sourceforge.net/), and the resulting data is sent back to the project using cloud variables. A rough sketch of this conversion/analysis step is shown below the credits.
Credit to @gilbert_given_189 for the base of the audio renderer. While rendering audio, the project plays 384 frequencies concurrently, ranging from 50 Hz to 12.8 kHz on a logarithmic scale (see the frequency-list example below the credits). ARSS provides the volume for each of these frequencies; they are obtained using complex math (FFTM / Fast Fourier transforms on multipoles - this goes way beyond anything I ever learned at school, luckily ARSS does the analysis almost fully automatically). The volumes are updated 30 or 60 times per second (depending on the selected audio framerate). This creates the sound you hear in the project.
The backend is hosted on a hosting platform provided by @justablock and @Ryan_shamu_YT in a Docker environment. The audio is streamed because the ARSS analysis / synthesis process is also streamed on the backend (the audio is cut into pieces that are analyzed separately); this makes sure you won't have to wait until the whole audio has been analyzed. The process is very resource-intensive, so to prevent the server from exceeding its resource limits, some analysis processes are queued when the server is under high load.

➥ Credits:
The project (the audio rendering and the creation of the freqs list) is based on this amazing Scratch project:
- https://scratch.mit.edu/projects/791629208/
Dependencies used in the backend code:
- ffmpeg
- ARSS (https://arss.sourceforge.net/)
@justablock and @Ryan_shamu_YT for providing the free hosting for the backend server.
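
The 384-band logarithmic scale mentioned above can be reconstructed from the numbers in the description: 12.8 kHz / 50 Hz = 256 = 2^8, so the range spans 8 octaves, i.e. 48 bands per octave. The following Python lines are an illustration only; the project's actual freqs list comes from the credited base project and may be spaced slightly differently.

import math

MIN_FREQ = 50.0       # Hz (from the description above)
MAX_FREQ = 12_800.0   # Hz
BANDS = 384           # concurrently played frequencies

# 12800 / 50 = 256 = 2**8 -> 8 octaves -> 384 / 8 = 48 bands per octave
octaves = math.log2(MAX_FREQ / MIN_FREQ)  # 8.0
freqs = [MIN_FREQ * 2 ** (octaves * i / (BANDS - 1)) for i in range(BANDS)]

print(len(freqs), round(freqs[0], 1), round(freqs[-1], 1))  # 384 50.0 12800.0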

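And a rough sketch of the conversion/analysis step described in the explanations: converting an upload to WAV with ffmpeg, then running an ARSS analysis pass over one chunk. This is not the project's backend code (that will be published on GitHub); the file paths and ARSS option values are assumptions based on the description above and should be checked against the ARSS documentation before use.

import subprocess

def convert_to_wav(src_path: str, wav_path: str) -> None:
    # Decode whatever format was uploaded and write a stereo 44.1 kHz WAV file.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src_path, "-ac", "2", "-ar", "44100", wav_path],
        check=True,
    )

def analyze_chunk(wav_path: str, bmp_path: str) -> None:
    # ARSS analysis pass: produces a spectrogram image whose rows are
    # logarithmically spaced frequency bands. Option names follow the ARSS
    # documentation; the values mirror the description above (50 Hz - 12.8 kHz,
    # 48 bands per octave) but are assumptions, not the project's real settings.
    subprocess.run(
        ["arss", wav_path, bmp_path,
         "--analysis",
         "--min-freq", "50",
         "--max-freq", "12800",
         "--bpo", "48",
         "--pps", "30"],
        check=True,
    )

# The real backend additionally cuts the WAV into pieces, queues analyses when
# the server is under high load, and sends the per-band volumes to the Scratch
# project through cloud variables (using scratchattach).
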
Project Details

Visibility: Visible
Comments: Enabled