So did you run the model offline on your own computer and get real-time audio?
Can you tell me which GPU, or what specifications, you used?
I inquired with ChatGPT:
https://chatgpt.com/share/68d23c2c-2928-800b-bdde-040d8cb40b...
It seems it needs around a $2,500 GPU, do you have one?
I tried Qwen online via its website interface a few months ago and found it to be very good.
I've run some offline models, including DeepSeek-R1 70B, on CPU (pretty slow; my server has 128 GB of RAM but no GPU), and I'm looking into what kind of setup I would need to run an offline model on a GPU myself.
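For anyone sizing this up: a rough back-of-the-envelope sketch of the VRAM needed just to hold the weights, assuming a dense model and a given quantization level (this ignores KV cache, activations, and runtime overhead, so treat it as a lower bound, not an exact requirement):

```python
def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate GB needed to store model weights alone.

    Ignores KV cache and runtime overhead -- a lower bound, not a sizing guide.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 70B model at 4-bit quantization needs ~35 GB for weights alone,
# which is why it doesn't fit on a single 24 GB consumer card:
print(round(weight_gb(70, 4), 1))   # 35.0
# The same model at FP16 needs ~140 GB:
print(round(weight_gb(70, 16), 1))  # 140.0
```

That's consistent with it running (slowly) in 128 GB of system RAM at 4-bit but needing either a workstation-class GPU or multiple consumer cards to run fully on GPU.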