• 1bpp 2 days ago

    How would this prevent someone from just plugging ElevenLabs into it? Or the inevitably more realistic voice models to come? Or just a prerecorded spam message? It's already nearly impossible to tell whether a given piece of speech is human or not. I do like the idea of recovering the emotional information lost in speech -> text, but I don't think it'd help with the LLM issue.

    • SrslyJosh 2 days ago

      Detecting "human speech" means shutting out people who cannot speak and rely on TTS for verbal communication.

      • estimator7292 2 days ago

        Also speech impediments, accents, physical disabilities, etc etc.

        Tech culture just refuses to even be aware of people as physical beings. It's just spherical users in a vacuum and if you don't fit the mold, tough.

      • layman51 2 days ago

        Or a genuine human voice reading a script that's partially or almost entirely LLM-written? I suspect some video content creators already do that.

        • siim a day ago

          True. However, recording a voice message has higher friction than typing "chatgpt, write me a reply".

        • teunlao 2 days ago

          Impressive tech execution, but the format has fundamental scaling issues.

          Clubhouse lost 93% of users from peak. WhatsApp sends 7 billion voice messages daily - but those are DMs, not feeds.

          The math doesn't work: reading is 50-80% faster than listening. You can skim 50 text posts in 100 seconds. 50 voice posts? 15 minutes.

          Voice works async 1-to-1. You built Twitter where every tweet is a 30-second voicemail nobody has time to listen to.

          The transcription feature proves it - users will read, not listen. Which makes this a "text feed with worse UX".

          • siim 7 hours ago

            Speaking > typing for creation.

            Reading > listening for consumption.

            Talk to create, read to consume.

          • zahlman 2 days ago

            > I saw this tweet: "Hear me out: X but it's only voice messages (with AI transcriptions)" - and couldn't stop thinking about it.

            > Why this exists: AI-generated content is drowning social media.

            > Real-time transcription

            ... So you want to filter out AI content by requiring users to produce audio (not really any harder for AI than text), and you add AI content afterward (the transcriptions) anyway?

            I really think you should think this through more.

            The "authenticity" problem is fundamentally about how users discover each other. You get flooded with AI slop because the algorithm is pushing it in front of you. And that algorithm is easily gamed, and all the existing competitors are financially incentivized to implement such an algorithm and not care about the slop.

            Also, I looked at the page source and it gives a strong impression that you are using AI to code the project and also that your client fundamentally works by querying an LLM on the server. It really doesn't convey the attitude supposedly motivating the project.

            Nice tech demo though, I guess.

            • siim 7 hours ago

              Curious what made you think the backend uses LLMs for content generation?

              To clarify:

              1. transcription is local VOSK speech-to-text streamed over a WebSocket (see the sketch after this list)

              2. live transcript post-processing optionally runs Gemini Flash-Lite, which tries to fix obvious transcription mistakes, nothing else. The real fix here is a more accurate transcriber.

              3. backend: TypeGraphQL + MongoDB + Redis
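
              To make point 1 concrete, here's a minimal sketch of the browser side of that pipeline, assuming the stock vosk-server WebSocket protocol (JSON config message, raw 16-bit PCM chunks, JSON partial/final replies). The render* hooks are placeholders and the details are assumptions, not the actual VoxConvo code:

                // Stream microphone audio to a VOSK WebSocket server and show live transcripts.
                // (ScriptProcessorNode is deprecated; an AudioWorklet would be the production
                // route, but it keeps this sketch short.)
                const SAMPLE_RATE = 16000;

                async function streamToVosk(url: string): Promise<() => void> {
                  const ws = new WebSocket(url);
                  ws.onopen = () => ws.send(JSON.stringify({ config: { sample_rate: SAMPLE_RATE } }));
                  ws.onmessage = (e) => {
                    const msg = JSON.parse(e.data);                 // { partial: "..." } or { text: "..." }
                    if (msg.partial) renderLiveTranscript(msg.partial); // interim hypothesis
                    if (msg.text) renderFinalTranscript(msg.text);      // finalized segment
                  };

                  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
                  const ctx = new AudioContext({ sampleRate: SAMPLE_RATE });
                  const source = ctx.createMediaStreamSource(stream);
                  const processor = ctx.createScriptProcessor(4096, 1, 1);

                  processor.onaudioprocess = (event) => {
                    if (ws.readyState !== WebSocket.OPEN) return;
                    // Convert Float32 samples to 16-bit PCM, the format vosk-server expects.
                    const float32 = event.inputBuffer.getChannelData(0);
                    const int16 = new Int16Array(float32.length);
                    for (let i = 0; i < float32.length; i++) {
                      int16[i] = Math.max(-1, Math.min(1, float32[i])) * 0x7fff;
                    }
                    ws.send(int16.buffer);                          // raw PCM chunk
                  };

                  source.connect(processor);
                  processor.connect(ctx.destination);

                  // The returned stop() signals end-of-stream so the server flushes a final result.
                  return () => {
                    ws.send('{"eof" : 1}');
                    processor.disconnect();
                    stream.getTracks().forEach((t) => t.stop());
                  };
                }

                // Hypothetical UI hooks, just for the sketch.
                function renderLiveTranscript(text: string) { console.log("partial:", text); }
                function renderFinalTranscript(text: string) { console.log("final:", text); }

              A Gemini Flash-Lite pass, when enabled, would then only touch the finalized text before posting, per point 2.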

              The anti-AI stance isn't "zero AI anywhere", it's about requiring human input.

              AI-generated audio tends to be either obviously synthetic or unnaturally perfect. Real recorded voice has human imperfections.

            • monadoid a day ago

              Cool idea! You should make it so that only one audio message can play at a time (currently, if I click to start two, they both play simultaneously).
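
              For what it's worth, exclusive playback is a small client-side change. A minimal sketch, assuming the feed renders plain <audio> elements (the selector is a guess, not VoxConvo's actual markup):

                // Pause every other <audio> element when one starts playing.
                // "play" does not bubble, so listen on document during the capture phase.
                document.addEventListener(
                  "play",
                  (event) => {
                    const started = event.target as HTMLAudioElement;
                    document.querySelectorAll<HTMLAudioElement>("audio").forEach((el) => {
                      if (el !== started && !el.paused) el.pause();
                    });
                  },
                  true,
                );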

              • cdrini 2 days ago

                Neat idea! Not sure if I'm willing to register just to try it, though. Having the main feed public would be nice! Or even a sample feed.

                • siim a day ago

                  That's a good call. While there's no general public feed, individual profiles are public. For example, here's mine: https://voxconvo.com/siim

                • esafak 2 days ago

                  So you're going to reject recordings detected as computer-generated, or recordings of a human reading from a computer-generated script?

                  I feel like you are making your users jump through hoops to do bot and slop detection, when you ought to be investing in technology to do the same. Here is a focusing question: would you still demand audio recordings if you had that technology?

                  Maybe you will court an interesting set of users when you do this? I just know I will not be one of them; ain't got time for that. Good luck.

                  • cjflog 2 days ago

                    Did you ever use AirChat?

                    • oulipo2 2 days ago

                      The idea is cool, but the STT is inaccurate (at least with an accent), and having to edit each word is too cumbersome.

                      • jagged-chisel 2 days ago

                        “Sign in with Google”

                        :grimace:

                        Sorry, but I have to pass.