Hacker News | mpaepper's comments

Maybe, but they might not care, as the API is freely available anyway

You're right... well, I hope it doesn't come to that

For example this post here becomes: https://news.gcombinator.com/item?id=46780583

How much memory do you need locally? Is an RTX 3090 with 24 GB enough?

Yes, more than enough. I have an RTX 4080 laptop GPU with 12 GB of VRAM.

You should look into the new Nvidia model: https://research.nvidia.com/labs/adlr/personaplex/

It has dual-channel input/output and a very permissive license


Thanks for sharing this! I'm going to put this on my list to play around with. I'm not really an expert in this tech, I come from an audio background, but recently I was playing around with streaming Speech-to-Text (using Whisper) / Text-to-Speech (using Kokoro at the time) on a local machine.

The most challenging part of my build was tuning the inference batch sizing. I was able to get Speech-to-Text working well down to batch sizes of 200ms. I even implemented a basic local agreement algorithm and it was still very fast (inference time, I think, was around 10-20ms). You're basically limited by the minimum batch size, NOT by inference time. Maybe that's the missing "secret sauce" suggested in the original post?
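
In case it helps anyone, here is a rough sketch of the local agreement idea (a simplified, word-level toy version in Python, not my actual code; real implementations such as whisper_streaming work on timestamped tokens and also trim the audio buffer): you keep re-transcribing the growing buffer and only commit the prefix on which two consecutive hypotheses agree.

    # Toy LocalAgreement-style commit policy for streaming ASR (simplified).
    def longest_common_prefix(a: list[str], b: list[str]) -> list[str]:
        prefix = []
        for wa, wb in zip(a, b):
            if wa != wb:
                break
            prefix.append(wa)
        return prefix

    class LocalAgreement:
        def __init__(self) -> None:
            self.previous: list[str] = []   # hypothesis from the previous chunk
            self.committed: list[str] = []  # words already emitted to the user

        def update(self, hypothesis: list[str]) -> list[str]:
            """Feed the latest full-buffer hypothesis, return newly stable words."""
            stable = longest_common_prefix(self.previous, hypothesis)
            new_words = stable[len(self.committed):]  # emit only what is new
            self.committed = stable
            self.previous = hypothesis
            return new_words

    # Roughly every 200ms, re-transcribe the buffer and feed the hypothesis in:
    agree = LocalAgreement()
    print(agree.update("the quick brown".split()))            # []
    print(agree.update("the quick brown fox".split()))        # ['the', 'quick', 'brown']
    print(agree.update("the quick brown fox jumps".split()))  # ['fox']

The upshot is that a word only reaches the user once it has survived two re-transcriptions, so the commit latency is roughly one extra chunk on top of inference time.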

In the use case listed above, the TTS probably isn't a bottleneck as long as OP can generate tokens quickly.

All this being said, a wrapped model like this that is able to handle hand-offs between these parts of the process sounds really useful, and I'll definitely be interested in seeing how it performs.

Let me know if you guys play with this and find success.


Oh man that space emergency example had me rolling

Ha --

and the "Customer Service - Banking" scenario claims that it demos "accent control", and the prompt gives the agent a definitely non-Indian name, yet the agent sounds 100% Indian. I found that hilarious, but isn't it also a bad example given they are claiming accent control as a feature?


"Sanni Virtanen", I guess it was meant to be Finnish? Maybe the "bank customer support" part threw the AI off, lmao.

Changing my title to "Astronaut" right now... I'll be using that line as well anytime someone asks me to do something.

Oh wow. That's definitely something…

oh - very interesting indeed! thanks


You mentioned needing 40k tiles and renting an H100 for $3/hour at 200 tiles/hour, so am I right to assume the inference ran for 40,000 / 200 = 200 hours, i.e. 200 × $3 = $600? That also means letting it run roughly 25 nights at 8 hours each?
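
Spelling out my back-of-the-envelope math (all figures taken from your post, so treat them as assumptions on my part):

    # Back-of-the-envelope from the figures quoted above.
    tiles = 40_000
    tiles_per_hour = 200
    usd_per_hour = 3

    gpu_hours = tiles / tiles_per_hour   # 200.0 hours
    cost = gpu_hours * usd_per_hour      # 600.0 USD
    nights = gpu_hours / 8               # 25.0 nights at ~8 hours each
    print(gpu_hours, cost, nights)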

Cool project!


Yup, back of the napkin is probably about there - also spent a fair bit on the oxen.ai fine-tuning service (worth every penny)... paint ain't free, so to speak

Hi everyone, inspired by Alex's cool article about ASCII rendering (https://alexharri.com/blog/ascii-rendering), I revamped my hero section. It now consists of a Matrix-style rain of characters which transitions into a pseudo-random ASCII rendering of an image of myself.
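
For anyone who wants to try something similar: the core of the ASCII portrait is just mapping downsampled pixel luminance onto a ramp of characters. The actual hero section runs in the browser, so this small Python sketch is only an illustration of the idea, not the real code ("portrait.jpg" is a placeholder filename):

    # Map each downsampled pixel's brightness to a character on a ramp.
    from PIL import Image

    RAMP = " .:-=+*#%@"  # sparse glyphs for dark pixels, dense for bright ones
                         # (invert the ramp when rendering on a white background)

    def image_to_ascii(path: str, width: int = 80) -> str:
        img = Image.open(path).convert("L")  # grayscale
        # Characters are roughly twice as tall as wide, so halve the height.
        height = int(img.height * width / img.width * 0.5)
        img = img.resize((width, height))
        chars = [RAMP[p * (len(RAMP) - 1) // 255] for p in img.getdata()]
        rows = ("".join(chars[i:i + width]) for i in range(0, len(chars), width))
        return "\n".join(rows)

    print(image_to_ascii("portrait.jpg"))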


Seems like the merger with Replit didn't work out so well :p


He might already have enough money, so he doesn't care whether he gets the absolute best resources.


What about the security aspects, can it run anything?


I assume by "it" you mean Claude Code or codex-cli; that depends on how you launched them or how you modified the permissions within the CLI chat. That's orthogonal to my CLI tools.

