I built something similar for Linux (yapyap, push-to-talk with whisper.cpp). The "local is too slow" argument doesn't hold up anymore if you have any GPU at all. whisper large-v3-turbo with CUDA on an RTX card transcribes a full paragraph in under a second. Even on CPU, Parakeet is near-instant for short utterances.

The "deep context" feature is clever, but screenshotting and sending to a cloud LLM feels like massive overkill for fixing name spelling. The accessibility API approach someone mentioned upthread is the right call: grab the focused field's content, nearby labels, and the window title. That's a tiny text prompt a 3B local model handles in milliseconds. No screenshots, no cloud, no latency. (Rough sketch below.)

The real question with Groq-dependent tools: what happens when the free tier goes away? We've seen this movie before. Building on local models is slower today, but it doesn't have a rug-pull failure mode.
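A minimal sketch of that approach, assuming xdotool for the window title and a small model served via Ollama's HTTP API. The model name and prompt are placeholders; a fuller version would read the focused field over AT-SPI (or the macOS AX APIs) instead of just the title:

```python
# Sketch: cheap text context + small local LLM, no screenshots involved.
import subprocess
import requests

def focused_window_title() -> str:
    # xdotool resolves the active window, then prints its title (X11 only)
    out = subprocess.run(
        ["xdotool", "getactivewindow", "getwindowname"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

def fix_transcript(raw: str) -> str:
    prompt = (
        f"Window title: {focused_window_title()}\n"
        f"Raw transcript: {raw}\n"
        "Fix any misheard names or technical terms using the context above. "
        "Return only the corrected transcript."
    )
    # Ollama's local generate endpoint; swap in whichever small model you run
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2:3b", "prompt": prompt, "stream": False},
        timeout=30,
    )
    return resp.json()["response"]

print(fix_transcript("hey lets loop in jon smyth on the parakeet thread"))
```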
Yeah, local works really well. I tried this other tool: https://github.com/KoljaB/RealtimeVoiceChat which lets you live-chat with a (local) LLM. With local Whisper and a local LLM (8B Llama in my case) it works phenomenally, and it responds so quickly that it feels like it's interrupting me.
Too bad that tool no longer seems to be actively developed; I'm looking for something similar. But it's really nice to see what's possible with local models.
> The "local is too slow" argument doesn't hold up anymore if you have any GPU at all.
By "any GPU" you mean a physical, dedicated GPU card, right?
That's not a small requirement, especially on Macs.
No. Give it a try; I think you'll be surprised.
My M1 16GB Mini and M2 16GB Air both deliver insane local transcription performance without eating up much memory. The M line + Parakeet is remarkably fast, and you get privacy for free.
Yeah, that model is amazing. It even runs reasonably well on my mid-range Android phone with this quite simple but very useful application, as long as you don't speak for too long, or you pause every once in a while to let it transcribe. I do have handy.computer on my Mac too.
https://news.ycombinator.com/item?id=46640855
I find the model works surprisingly well, and in my opinion it surpasses all other models I've tried. Finally a model that can mostly understand my not-so-perfect English and handle language switching mid-sentence (compare that to Gemini's voice input, which is literally THE WORST: it's always trying to transcribe in the wrong language, and even when the language is correct it produces the worst output imaginable).
I've installed Murmure on my 2013 Mac, and it transcribes at 1073 words/minute. I don't know about you, but that's plenty faster than me :D
Was searching for this just this morning and settled on https://handy.computer/
Big fan of Handy, and it's cross-platform as well. Parakeet V3 gives the best experience, with very fast and accurate-enough transcriptions when talking to AIs that can read between the lines. It does have stuttering issues, though. My primary use of these is talking to coding agents.
But a few weeks ago someone on HN pointed me to Hex, which also supports Parakeet V3, and incredibly enough, it's even faster than Handy because it's a native macOS-only app that leverages CoreML and the Neural Engine for extremely quick transcriptions. Long ramblings transcribed in under a second!
It's now my favorite fully local STT for macOS.
I installed a few different STT apps at the same time that used Parakeet, and I think they conflicted with each other. Hex otherwise would've won for me, I think. I wanna reformat the Mac and try again (it's been a while anyway).
My comment on this from a month back: https://news.ycombinator.com/item?id=46637040
Hex is great, and I'm not trying to pull you away from them, but I'd love to get your POV when you give these a spin next time. Email or DM me.
I didn't try Handy, but I've been using Whisper-Key. It's a super simple, get-out-of-your-way, all-local, single-file executable (portable, so zero install too). That's for Windows; I don't know about the Mac version.
The astroturfing here, off topic from the OP's post, is unbearable.
I just learned about Handy in this thread and it looks great!
I think the biggest difference between FreeFlow and Handy is that FreeFlow implements what Monologue calls "deep context", where it post-processes the raw transcription with context from your currently open window.
This fixes misspelled names if you're replying to an email / makes sure technical terms are spelled right / etc.
The original hope for FreeFlow was for it to use all local models like Handy does, but with the post-processing step the pipeline took 5-10 seconds instead of <1 second with Groq.
You can try ottex for this use case - it has both context capture (app screenshots) and native LLM support, meaning it can send the audio AND a screenshot directly to Gemini 3 Flash to produce the bespoke result.
There's an open PR in the repo, which will be merged, that adds this support. Post-processing is an optional feature, and when you use it, end-to-end latency can still easily be under 3 seconds.
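For the curious, a rough sketch of that one-shot audio-plus-screenshot call, assuming the google-genai Python SDK. The model name, file paths, and prompt are illustrative assumptions, not ottex's actual code:

```python
# Sketch: one multimodal call carrying both the audio and the screenshot.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

audio = open("utterance.wav", "rb").read()
screenshot = open("window.png", "rb").read()

resp = client.models.generate_content(
    model="gemini-2.5-flash",  # substitute whichever Flash model you prefer
    contents=[
        types.Part.from_bytes(data=audio, mime_type="audio/wav"),
        types.Part.from_bytes(data=screenshot, mime_type="image/png"),
        "Transcribe the audio. Use the screenshot for context: fix names, "
        "technical terms, and formatting to match what is on screen. "
        "Return only the corrected transcription.",
    ],
)
print(resp.text)
```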
That’s awesome! The specific thing that was causing the long latency was the image LLM call to describe the current context. I’m not sure if you’ve tested Handy’s post-processing with images or if there’s a technique to get image calls to be faster locally.
Thank you for making Handy! It looks amazing, and I wish I'd found it before making FreeFlow.
Could you go into a little more detail about the deep context: what does it grab, and which model is used to process it? Are you also using a Groq model for the transcription?
It takes a screenshot of the current window and sends it to Llama on Groq, asking it to describe what you're doing and pull out any key info, like names with their spelling.
You can go to Settings > Run Logs in FreeFlow to see the full pipeline run for each request, with the exact prompt and LLM response, so you can see exactly what is sent and returned.
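That call is roughly shaped like this. A hedged sketch against Groq's OpenAI-compatible endpoint; the vision model name and the prompt are assumptions, not FreeFlow's actual source:

```python
# Sketch: describe-the-screenshot call via Groq's OpenAI-compatible API.
import base64
import os
import requests

img_b64 = base64.b64encode(open("window.png", "rb").read()).decode()

resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
    json={
        "model": "meta-llama/llama-4-scout-17b-16e-instruct",  # assumed vision model
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe what the user is doing and list any names "
                         "or technical terms visible, with exact spelling."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{img_b64}"}},
            ],
        }],
    },
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```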
As a very happy Handy user: indeed, it doesn't do that. It will be interesting to see if this works better. I'll give FreeFlow a shot, thanks!
Handy's great! I find the latency to be just a bit too much for my taste. Like half the people in this thread, I built my own, but with a bit more emphasis on speed.
Thanks for the recommendation! I picked the smallest model (Moonshine Base @ 58MB), and it works great for transcribing English.
Surprisingly, it produced better output (at least I liked its version better) than the recommended but heavier model (Parakeet V3 @ 478 MB).
Great feedback :) Also, support for the v2 versions of the Moonshine models should be out today!
Handy rocks. I recently had minor surgery on my shoulder that required me to be in a sling for about a month, and I thought I'd give Handy a try for dictating notes and so on. It works phenomenally well for most speech-to-text use cases - homonyms included.
Handy is nothing short of fantastic, really brilliant when combined with Parakeet v2!
Handy is genuinely great and it supports Parakeet V3. It’s starting to change how I "type" on my computer.
Yes, I also use Handy. It supports local transcription via Nvidia Parakeet TDT v2, which is extremely fast and accurate. I also use Gemini 2.5 Flash Lite for post-processing via the free AI Studio API (post-processing is optional and can also use a locally hosted LM).
I use handy as well, and love it.
There's also an offline app called VoiceInk for macOS. No need for Groq or an external AI.
+1, my experience improved quite a bit when I switched to the Parakeet model; they should definitely make it the default.
https://usetalkie.com - Parakeet-based, incredibly fast, and made for devs
My favorite too. I use the parakeet model
To build your own STT (speech-to-text) with a local model and modify it to your liking, just ask Claude Code to build it for you with this workflow (rough code sketch below).
F12 -> sox for recording -> temp.wav -> faster-whisper -> pbcopy -> notify-send to know what’s happening
https://github.com/sathish316/soupawhisper
I found a Linux version with a similar workflow and forked it to build the Mac version. It took less than 15 minutes of asking Claude to modify it to my needs.
F12 Press → arecord (ALSA) → temp.wav → faster-whisper → xclip + xdotool
https://github.com/ksred/soupawhisper
Thanks to faster-whisper and quantized local models, I now use it everywhere I was previously using SuperWhisper: Docs, Terminal, etc.
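A minimal sketch of that loop, assuming sox and faster-whisper are installed. The hotkey wiring is left out, and the paths, model size, and fixed recording length are placeholders:

```python
# Sketch: record -> faster-whisper -> clipboard, the pipeline described above.
import subprocess
from faster_whisper import WhisperModel

WAV = "/tmp/ptt.wav"
model = WhisperModel("base.en", compute_type="int8")  # quantized, CPU-friendly

def record(seconds: float = 10.0) -> None:
    # sox's `rec` captures from the default mic; `trim` caps the duration
    subprocess.run(["rec", "-q", WAV, "trim", "0", str(seconds)], check=True)

def transcribe() -> str:
    segments, _info = model.transcribe(WAV)
    return " ".join(seg.text.strip() for seg in segments)

def to_clipboard(text: str) -> None:
    # pbcopy on macOS; swap for `xclip -selection clipboard` on Linux
    subprocess.run(["pbcopy"], input=text.encode(), check=True)

if __name__ == "__main__":
    record()
    text = transcribe()
    to_clipboard(text)
    print(text)
```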
I'm building in the same space, working on https://ottex.ai - it's a free STT app with local models and BYOK support (OpenRouter, Groq, Mistral, and more).
The top feature is the per-app custom settings: you can pick different models and instructions for different apps and websites.
- I use the fast Parakeet model when working with Claude Code (VS Code app).
- And I use a smarter one when I draft notes in Obsidian. I have a prompt to clean up my rambling and format the result with proper Markdown; very convenient.
One more cool thing is that it lets me use LLMs with audio input modalities directly (not as text post-processing), e.g. it sends the audio to Gemini and prompts it to transcribe, format, etc., in one run. I find it a bit slow to work with CC, but it is the absolute best in terms of accuracy, understanding, and formatting. It is the only model I trust to understand what I meant and produce the correct result, even when I use multiple languages, tech terms, etc.
Since many are asking about apps with similar capabilities: I'm very happy with MacWhisper. It has Parakeet and near-instant transcription of my lengthy monologues. All local.
Edit: Ah, but Parakeet I think isn't available for free. Still a very worthwhile single-purchase app nonetheless!
I actually got MacWhisper originally for speech-to-text, so I could talk to my machine like a crazy person. I realized I didn't like doing that, but the actual killer feature that justified buying it is the fully local transcription of meetings, with a nice little button to start recording that pops up when you launch Zoom, Teams, etc. It means I can safely record meetings, encrypt them locally, and keep internal notes without handing all of that to some nebulous cloud platform.
I had previously used Hyprnote to record meetings this way, and indeed I still use it as a backup (it's a great free option), but the prompt to record when a meeting starts and the better transcription offered by MacWhisper make for a much better experience.
I initially built Talkie to talk to it like a crazy person when ideas would pop into my head on long runs, haha.
I've been a power user of SuperWhisper and Wispr Flow for a long time and eventually decided to unify those flows: memos & dictations, everything is a file, local-first, BYOK.
Do any of these solutions work reliably for non-English languages? I've had a lot of issues trying to transcribe Swedish with all the products I've used so far.
Doesn't Parakeet work? https://huggingface.co/nvidia/parakeet-tdt-0.6b-v3
If you are willing to use a service for transcriptions, Mistral (which is also European) works rather nicely if they support your language https://docs.mistral.ai/capabilities/audio_transcription#tra...
Try ottex with Gemini 3 Flash as the transcription model. I'm bilingual as well and frequently switch between languages; Gemini handles this perfectly, even when I speak two languages in one transcription.
Sounds like there's plenty of interest in these kinds of tools. I'm not a huge fan of API transcription, given how good local models are.
I built https://github.com/bwarzecha/Axii to keep EVERYTHING local and fully open source; it can easily be used at any company. No data is sent anywhere.
Looks good, although Mistral's Voxtral would be a good choice too, wouldn't it?
I've used MacWhisper (paid), SuperWhisper (paid), and Handy (free), but now prefer Hex (free):
https://github.com/kitlangton/Hex
For me it strikes the balance of good, fast, and cheap for everyday transcription. MacWhisper is overkill, SuperWhisper too clever, and Handy too buggy. Hex fits just right for me (so far).
Tried to use it: installed it, enabled permissions, downloaded the Parakeet model for English, and then it crashed every time I released the button after dictating. Completely unusable.
I just vibe-coded my own NaturalReader replacement. The subscription was $110/year... and I just canceled it.
Chatterbox TTS (from Resemble AI) does the voice generation, WhisperX gives word-level timestamps so you can click any word to jump, and FastAPI ties it all together with SSE streaming so audio starts playing before the whole thing is done generating.
There's a ~5s buffer up front while the first chunk generates, but after that each chunk streams in faster than realtime, so playback rarely stalls.
It took about 4 hours today... wild.
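For anyone curious about the streaming piece, a rough sketch of the SSE plumbing with FastAPI. `synthesize_chunk` is a stand-in for the Chatterbox TTS call, and the chunking is deliberately naive:

```python
# Sketch: chunked TTS streamed over SSE so playback starts early.
import base64
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

def split_into_chunks(text: str, size: int = 400) -> list[str]:
    # naive fixed-width chunker; a real one splits on sentence boundaries
    return [text[i:i + size] for i in range(0, len(text), size)]

def synthesize_chunk(chunk: str) -> bytes:
    # stand-in: replace with the Chatterbox TTS call; returns raw audio bytes
    return b"\x00" * 16000

@app.get("/tts")
def tts(text: str):
    def events():
        for chunk in split_into_chunks(text):
            audio = synthesize_chunk(chunk)
            # one SSE event per generated chunk; the client appends it
            # to its playback buffer as soon as it arrives
            yield f"data: {base64.b64encode(audio).decode()}\n\n"
        yield "data: [DONE]\n\n"
    return StreamingResponse(events(), media_type="text/event-stream")
```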
do you have a github?
Does anyone know of any macOS transcription apps that do speech-to-text live? E.g., the text appears as you're talking? Older tech like macOS dictation and Dragon does this, but it seems there's nothing available that uses the new, better models.
This thread is a beautiful intro to our near future: ever more custom-coded software. Takes me back to the late 90s. Loving this!
For those using something like this daily: what key combinations do you use to record and cancel? I'm using my Caps Lock right now, but I was curious about others.
Someone told me the other day that I should use a foot pedal, and then I remembered I already had an Elgato one under my desk, connected to my Stream Deck. I got it very cheap used on eBay. So that's an option too.
Scroll Lock is a really good key for that, in my opinion. If your keyboard doesn't have it exposed, you can use a remapping program like https://github.com/jtroo/kanata
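For example, a minimal (untested) kanata config that turns Caps Lock into a spare key your dictation app can bind as push-to-talk:

```
;; remap Caps Lock to F13, freeing it for a push-to-talk binding
(defsrc caps)
(deflayer base f13)
```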
I have a Stream Deck and made a dedicated button for this. I tap the button, speak, then tap it again, and it pastes into wherever my cursor was.
And then I set the button right below it as the Enter key, so it feels mostly hands-off the keyboard.
Right option. Push to talk
I also use the right option key on Mac, never miss it.
Great question. I'd love to know if anyone has had any success with handheld buttons/bluetooth remotes or similar, too.
Can you please teach me how to use the CAPS LOCK key as a push-to-talk?
For macOS I found https://github.com/rselbach/jabber and have lately been using that, but on iOS I still need a replacement.
Not free, but Whisper Memos (https://whispermemos.com/) is about half the price
I don't understand who this is for, honestly. Unless you don't have hands, why would you want to talk to your computer? Maybe I'm just autistic, but I would always prefer text over speaking out loud and having that translated to text.
You shouldn't use autism as a generic insult, as you have here.
macOS only. May this help you skip a click.
If you're looking for free STT, you can use Whistle across Windows/Mac/Linux and Android (iOS release coming soon).
Whispering [0] is Windows compatible and has gotten a lot better on Windows despite being extremely rough around the edges at first.
Not sure why you got downvoted. I wish this was a tag or something.
https://github.com/rabfulton/Auriscribe
My take, for X11 Linux systems. Small and low-dependency, except for the model download.
Vowen
Quick question: what's the state of vibe coding with Xcode? I remember there were some issues months ago trying to get a seamless integration working. Has it improved?
I created Voibe, which takes a slightly different direction and uses gpt-4o-transcribe with a configurable custom prompt to achieve maximum accuracy (much better than Whisper). Requires your own OpenAI API key.
https://github.com/corlinp/voibe
I do see the name has since been taken by a paid service... a shame.
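The core of that approach looks roughly like this (a hedged sketch with the official openai SDK; the vocabulary prompt is an illustrative assumption, not Voibe's actual one):

```python
# Sketch: gpt-4o-transcribe with a custom prompt to bias toward your jargon.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("dictation.wav", "rb") as f:
    result = client.audio.transcriptions.create(
        model="gpt-4o-transcribe",
        file=f,
        # the prompt nudges the model toward names and terms you actually use
        prompt="Vocabulary: Kubernetes, Voibe, Groq, Parakeet, whisper.cpp",
    )
print(result.text)
```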
Does anyone know of an effective alternative for Android?
I installed Whisper+ through F-Droid and it works well for my basic needs. Only 30s at a time, but you can append multiple recordings to the same transcript: https://github.com/woheller69/whisperIMEplus
I have been using VoiceFlow. It works incredibly well and uses Groq to transcribe using the Whisper V3 Turbo model. You can also use it in an offline scenario with an on-device model, but I am mostly connected to the internet whenever I am transcribing.
Check out the FUTO Keyboard or FUTO Voice Input apps. They only use the Whisper models so far, though.
Does the Android keyboard transcription not work for your needs?
Could you make it use Parakeet? That's an offline model that runs very quickly even without a GPU, so you could get much lower latency than using an API.
I love this idea, and originally planned to build it using local models, but to have post-processing (that's where you get correctly spelled names when replying to emails / etc), you need to have a local LLM too.
If you do that, the total pipeline takes too long for the UX to be good (5-10 seconds per transcription instead of <1s). I also had concerns around battery life.
Some day!
https://github.com/cjpais/Handy
It’s free and offline
Wow, Handy looks really great and super polished. Demo at https://handy.computer/
Do any of these works as an iOS keyboard to replace the awful voice transcription Apple is currently shipping?
https://x.com/usetalkieapp/status/2022341320090775647/photo/...
The native app uses Parakeet (v2 or v3) on iOS.
utter (utter.to) does.
Murmure is multiplatform, uses Parakeet, and can connect to your local LLM (via Ollama). https://murmure.al1x-ai.com/
Is there a tool that preserves the audio? I want both, the transcript and the audio.
At a quick glance: FreeFlow already saves WAV recordings for every transcript to ~/Lib../App../FreeFlow/audio/, with UUIDs linking them to pipeline history entries in Core Data. Audio files are automatically deleted when their associated history entries are deleted, though. Should be a quick fix. I recently did the same for hyprvoice, for debugging and auditing.
Nice! I vibe-coded the same thing this weekend, but for OpenAI and less polished: https://github.com/sonu27/voicebardictate
Also look into Voxtral; their new model is good and half the price, if you can live without streaming.
Why do people feel the need to market something as a "free alternative to xyz" when it's a basic utility? I take it as an instant signal that the dev is a copycat, mostly interested in getting stars and eyeballs rather than making a genuinely useful, high-quality product.
Just use handy: https://github.com/cjpais/Handy
Really good to know Handy exists; it's the first I'm hearing about it. I use a speech-to-text app that I built for myself, and I know at least one co-worker pays $10 a month for (I think) Wispr. I think it's possible there was no intention to market, and the creator simply didn't know about Handy, just like me.
title lacks: for Mac
Is it possible to customise the key binding? Most of these services let you customise the binding, and also support a toggle mode in addition to push-to-talk.
Seeing this thread, it sounds like a blog post comparing the offerings would be useful.
Good idea at first glance, but it would get outdated in hours.
But SuperWhisper is free with Parakeet as a local model?
That’s how I have it running too
Anything similar for iOS?
Saved you a click: Mac only and actually Grok; local inference too slow.
Won't be free when xAI starts charging.
groq ≠ grok
Spokenly?
Utter uses your OpenAI key (~$1/month). https://utter.to/. Has an iPhone app.