rccyx 2 hours ago
Most visualizers rely on raw FFT data and linear scales that don't match how humans actually perceive sound, so the result is usually a twitchy, nervous flicker. I built Lookas to align the visuals with the math of hearing.
Here are some technical details (most of these are configurable):
- It uses a mel-scale filterbank to remap frequency bins, so the visualization aligns with human pitch perception rather than linear frequency spacing.
- Animation is driven by a spring-damper model (zeta = 1.0, i.e. critically damped) rather than by raw amplitude changes. This gives the bars a sense of physical mass.
- Energy diffuses laterally between neighboring bands to produce fluid motion and prevent jittery spikes.
- Input is windowed with a Hann function to reduce spectral leakage. Dynamic range is managed via continuous percentile tracking.
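To make the mel remapping concrete, here's a rough sketch (not Lookas's actual code; the band count and frequency range are values I picked for illustration). Band edges are spaced evenly on the mel scale, so low frequencies get finer resolution than a linear split would give them:

```python
import math

def hz_to_mel(f):
    # O'Shaughnessy formula: mel = 2595 * log10(1 + f / 700)
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(n_bands, f_min, f_max):
    # n_bands bands need n_bands + 2 edges (each band spans three
    # consecutive edges in a triangular filterbank).
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    mels = [lo + (hi - lo) * i / (n_bands + 1) for i in range(n_bands + 2)]
    return [mel_to_hz(m) for m in mels]

edges = mel_band_edges(32, 20.0, 16000.0)
```

Each FFT bin then gets summed into whichever band its center frequency falls in (or weighted by a triangular window between edges).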
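The spring-damper idea, as a minimal sketch (semi-implicit Euler; omega is a stiffness I made up, not a Lookas default). Each bar carries a position and a velocity, and per frame accelerates toward the new band energy instead of jumping to it:

```python
def spring_step(pos, vel, target, dt, omega=30.0, zeta=1.0):
    # Critically damped (zeta = 1.0): settles onto the target as fast
    # as possible without overshooting.
    accel = omega * omega * (target - pos) - 2.0 * zeta * omega * vel
    vel += accel * dt          # semi-implicit Euler: velocity first,
    pos += vel * dt            # then position, for better stability
    return pos, vel

# A bar chasing a sudden jump in band energy:
pos, vel = 0.0, 0.0
for _ in range(120):           # ~2 s at 60 FPS
    pos, vel = spring_step(pos, vel, 1.0, 1.0 / 60.0)
```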
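The lateral diffusion can be sketched as a discrete Laplacian that leaks energy toward each band's neighbors (the rate here is arbitrary, and I'm guessing at the boundary handling):

```python
def diffuse(bands, rate=0.2):
    # Each band moves toward the mean of its neighbors, smoothing
    # isolated spikes into fluid, wave-like motion.
    out = bands[:]
    for i in range(len(bands)):
        left = bands[i - 1] if i > 0 else bands[i]
        right = bands[i + 1] if i < len(bands) - 1 else bands[i]
        out[i] = bands[i] + rate * (0.5 * (left + right) - bands[i])
    return out

spiky = [0.0, 0.0, 1.0, 0.0, 0.0]
smoothed = diffuse(spiky)   # the spike bleeds into its neighbors
```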
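For the windowing and range tracking, a sketch: the Hann window is standard, but the percentile tracker below is just one common streaming-quantile scheme (asymmetric fixed steps, frugal-style) and may differ from what Lookas actually does:

```python
import math

def hann(n):
    # Hann window tapers frame edges to zero, reducing spectral leakage.
    return [0.5 - 0.5 * math.cos(2.0 * math.pi * i / (n - 1))
            for i in range(n)]

class PercentileTracker:
    # Streaming percentile estimate: nudge the estimate up when a
    # sample exceeds it, down otherwise, with rates chosen so the
    # estimate settles near the q-th percentile of recent samples.
    def __init__(self, q=0.95, step=0.01):
        self.q, self.step, self.value = q, step, 0.0

    def update(self, x):
        if x > self.value:
            self.value += self.step * self.q
        else:
            self.value -= self.step * (1.0 - self.q)
        return self.value
```

The tracked high percentile then serves as the "ceiling" for normalization, so a single loud transient doesn't permanently squash the rest of the display.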
It runs at 60+ FPS using Unicode block characters and large contiguous terminal writes to avoid flicker.
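Roughly, the rendering trick: eighth-block characters give 8 vertical levels per cell, and building the whole frame into one string before a single write keeps the terminal from tearing. A simplified sketch (escape handling and sizing are placeholders, not Lookas's actual renderer):

```python
import sys

# Eighth-block characters: space = empty, then 1/8 through 8/8 filled.
BLOCKS = " ▁▂▃▄▅▆▇█"

def render(levels, height=8):
    # levels: per-band values in [0, 1]. Build the full frame as one
    # string so the terminal receives a single contiguous write.
    rows = []
    for row in range(height, 0, -1):          # top row first
        line = []
        for v in levels:
            cells = v * height                # bar height in cells
            fill = cells - (row - 1)          # fraction of this cell filled
            idx = max(0, min(8, int(fill * 8)))
            line.append(BLOCKS[idx])
        rows.append("".join(line))
    return "\x1b[H" + "\n".join(rows)         # home cursor, then frame

frame = render([0.0, 0.5, 1.0])
sys.stdout.write(frame)   # one write() per frame avoids flicker
```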
All of this is configurable, but the defaults work well out of the box.
It captures from mic, system loopback, or both.
Excited to hear what you guys think