AudioSight – a small tool I’m working on for science‑based tuning

AudioSight
Member · Thread Starter · Joined: May 6, 2026 · Posts: 5
Hi everyone,

First of all, thank you all so much. I’ve learned a great deal from this forum over the past few months by quietly following everyone’s discussions. I’ve finally gathered the courage to share a small personal project I’ve been working on in my spare time.

It’s called AudioSight, a simple tool focused on audio measurement. My goal with it is to ground listening impressions in intuitive, quantifiable data.

This project is purely something I developed out of personal interest. I hope it can be of some reference value to fellow audio enthusiasts who approach sound in a scientific way.

Here are its main features for now:

- Real-time spectrum analysis and transfer function measurement
- Basic AutoEQ assistance with several commonly used tuning curves
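For readers curious what a transfer-function measurement involves under the hood, here is a minimal dual-channel sketch using the standard H1 estimator, with SciPy standing in for a real measurement chain. The 2 kHz low-pass "device under test" is simulated; none of this is AudioSight's actual code.

```python
import numpy as np
from scipy import signal

fs = 48_000
rng = np.random.default_rng(0)

# Reference signal (what you send out) and a simulated "measured" return:
# a 2nd-order 2 kHz low-pass filter stands in for the device under test.
ref = rng.standard_normal(2 * fs)
b, a = signal.butter(2, 2_000, fs=fs)
meas = signal.lfilter(b, a, ref)

# H1 transfer-function estimate: H(f) = S_xy(f) / S_xx(f)
f, S_xy = signal.csd(ref, meas, fs=fs, nperseg=4096)
_, S_xx = signal.welch(ref, fs=fs, nperseg=4096)
H = S_xy / S_xx
mag_db = 20 * np.log10(np.abs(H))

# Coherence tells you how trustworthy each bin is (1.0 = linear, noise-free).
_, coh = signal.coherence(ref, meas, fs=fs, nperseg=4096)
```

With a real loudspeaker you would play `ref` through the system and record `meas` with a microphone; bins with low coherence (room noise, reflections) are usually ignored or down-weighted before feeding an AutoEQ stage.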

The current version, 1.0 (codenamed "Origin"), is still in its early stages. I plan to gradually add FIR filtering, room correction, and multichannel support down the line, polishing each part step by step.

I would be very grateful if you’d like to try it out and share your thoughts. I welcome all feedback, whether praise or criticism. I’m simply here to share and exchange ideas, with no intention of selling anything.

Download / more info:

Due to new account restrictions, I cannot post direct links yet. You can search "AudioSight" on YouTube for demo videos, and the Windows version is available by searching our official website in your browser. I will edit this post and add the links once my account has enough posts. Thank you for your understanding.

P.S. English is not my first language. This post was translated with AI help. Please forgive any awkward expressions.

Thank you for taking the time to read this. If this kind of post is not appropriate for the section, please let me know and I will remove it right away.

Best regards


[Screenshot 2026-05-07 122112.png]
[Screenshot 2026-05-08 143904.png]
[Screenshot 2026-05-07 122102.png]

An interesting AutoEQ algorithm with multi‑strategy support.

Precise:
[Screenshot 2026-05-08 143022.png]

Natural:
[Screenshot 2026-05-08 144050.png]

GEQ:
[Screenshot 2026-05-08 143417.png]
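For anyone wondering what a single band of a parametric EQ boils down to mathematically, each band is typically a "peaking" biquad from the well-known RBJ Audio EQ Cookbook. The sketch below is generic DSP, not AudioSight's code, and the -6 dB / 1 kHz / Q = 1.4 values are arbitrary examples.

```python
import math

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ Audio EQ Cookbook peaking filter -> normalized (b, a) coefficients."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A
    a0, a1, a2 = 1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def gain_at(b, a, fs, f):
    """Magnitude response (dB) of a biquad at frequency f."""
    w = 2 * math.pi * f / fs
    z = complex(math.cos(w), math.sin(w))
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return 20 * math.log10(abs(num / den))

b, a = peaking_biquad(48_000, 1_000, -6.0, 1.4)
print(round(gain_at(b, a, 48_000, 1_000), 2))  # -6.0 at the center frequency
```

A nice property of this design is that the response at the center frequency equals the requested gain exactly, and it returns to 0 dB away from the band, so multiple bands can be cascaded fairly independently.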
 
Hello, excellent work. Are you planning to add, in addition to the auto IIR EQ, an automatic alignment feature for FIR filters (amplitude and phase)?
 
> Hello, excellent work. Are you planning to add, in addition to the auto IIR EQ, an automatic alignment feature for FIR filters (amplitude and phase)?
Thank you for your kind words and for the great suggestion!

Yes, FIR filtering and automatic alignment (both amplitude and phase) are definitely on my roadmap. To be honest, though, they won't arrive very soon.

The 1.0 version focuses on building a solid PEQ-based ecosystem: measurement → AutoEQ → verification (visual + listening + real‑time feedback). FIR correction, room acoustics, and multichannel support are planned for future releases, but I need to make sure each feature is implemented properly before adding it.
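The measurement → AutoEQ → verification loop described above can be illustrated with a purely hypothetical sketch. Gaussian bells in log-frequency stand in for real PEQ bands, and the greedy "fix the worst deviation first" strategy is my own illustration, not AudioSight's algorithm.

```python
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20_000), 400)
logf = np.log10(freqs)
target = np.zeros_like(freqs)  # flat target curve, in dB

# Fake "measured" response: a 5 dB bump near 1 kHz and a 3 dB dip near 5 kHz.
measured = (5 * np.exp(-((logf - 3.0) ** 2) / 0.02)
            - 3 * np.exp(-((logf - np.log10(5_000)) ** 2) / 0.02))

def autoeq_greedy(measured, target, bands=4, width=0.02):
    """Repeatedly place an inverse bell at the largest remaining deviation."""
    eq = np.zeros_like(measured)
    for _ in range(bands):
        residual = measured + eq - target
        i = int(np.argmax(np.abs(residual)))        # worst frequency bin
        eq -= residual[i] * np.exp(-((logf - logf[i]) ** 2) / width)
    return eq

eq = autoeq_greedy(measured, target)
before = float(np.sqrt(np.mean((measured - target) ** 2)))
after = float(np.sqrt(np.mean((measured + eq - target) ** 2)))
print(after < before)  # the correction shrinks the RMS error vs. the target
```

The "verification" step is the `before`/`after` comparison: apply the proposed correction to the measurement and check the residual against the target, rather than trusting the filter placement blindly.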

So please stay tuned — it will take some time, but it's on the list. Thank you again for your interest and support!
 
Why do you call it fully free and open when there's already accounts and licenses and no source?

Thanks for your question — very direct, I like that.

Let me share my real thoughts:

About "open source" and accounts
I never said this is open source. By "open" I mean the features are freely available: the software is currently free and open to everyone, with no payment required. The account system, however, is necessary. I’ve spent two years building this alone, and I need to know how many people are actually using it so I can decide whether to keep investing my time. If very few people use it, I’ll just keep it as my personal tool and only add features I need myself. The account is only for counting active users, with no data selling and no spying. For free software, asking for registration is not unreasonable in my opinion.

About the value
The current free version of AudioSight v1.0 already includes almost all the core features of Smaart (real‑time spectrum, transfer function, coherence, etc.), plus my own AutoEQ algorithm, a minimal interaction design, and real‑time visual and auditory closed‑loop feedback. Features like these are paid in other commercial software.

About the future
I have a kind of architectural obsession — I don't allow myself to miss any details. Even possible commercial plans and ad systems (like the tooltip "Click thumbnail to view curve settings") are already designed. But please be assured: all currently announced features will be free forever. Will some advanced features be paid in the future? I don't know yet — it depends on user demand and willingness. Even if that happens, they will be completely separate and optional.

Thank you for raising this — it helps me explain things clearly.

P.S. English is not my first language. This post was translated with AI help. Please forgive any awkward phrasing. Thank you for your understanding.
 
> Why do you call it fully free and open when there's already accounts and licenses and no source?

One more thing — I need to be honest about my English.

I'm not a native English speaker. Everything I write here is translated with AI help. So my wording might not be precise.

In my original Chinese understanding, "Fully Free & Open" means:
"All current features are freely available to everyone, with no payment required right now."
It does NOT mean "I promise they will be free forever."

The word "open" in Chinese here is exactly to avoid making a permanent free promise. I don't know what the future brings. I only commit to what is true today.

If my English caused confusion, I apologize. But this is my real position.

And I have a quick question:
In English, what would be a more accurate and natural way to say:
"The features are currently free and open to everyone, but I'm not making a permanent free promise."
My English is all AI‑translated, so I really don't know the right wording. If you have time to give me a better expression, I'd really appreciate it.

Thanks for bearing with my poor English — all made possible by AI translation.
 