Hacker News

Any chance of running this nano model on my Mac?


I used Nemotron 3 Nano on LM Studio yesterday on my 32GB M2 Pro Mac mini. It is fast, passed all of my personal tool-use tests, and did a good job analyzing code. Love it.

Today I ran a few simple cases on Ollama, but not much real testing.


There are MLX versions of the model, so yes. LM Studio hasn't updated its mlx-lm runtime yet, though; you'll get an exception.

But if you're OK running it without a UI wrapper, mlx_lm==0.30.0 will serve you fine.
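If you want to go the no-UI route, a minimal sketch of the workflow looks like the commands below. The model repo name is a placeholder, not a real identifier; substitute whichever MLX conversion you actually downloaded.

```shell
# Sketch, assuming an MLX-converted checkpoint on the Hugging Face hub.
# <mlx-nemotron-3-nano-repo> is a placeholder -- use the real repo/quant name.
pip install "mlx_lm==0.30.0"

# One-off generation from the command line:
mlx_lm.generate --model <mlx-nemotron-3-nano-repo> --prompt "Hello"

# Or expose an OpenAI-compatible endpoint on localhost:
mlx_lm.server --model <mlx-nemotron-3-nano-repo> --port 8080
```

Both entry points ship with the `mlx_lm` package itself, so no extra wrapper is needed.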


Looks like LM Studio just updated the MLX runtime, so there's compatibility now.


Yep! 60 t/s on the 8-bit MLX quant on an M4 Pro with 64GB of RAM.


LM Studio and 32+ GB of RAM.

https://lmstudio.ai/models/nemotron-3

Simplest to just install it from the app.


Kind of depends on your Mac, but if it's a relatively recent Apple silicon model… maybe, probably?

> Nemotron 3 Nano is a 3.2B active (3.6B with embeddings) 31.6B total parameter model.

So I don't know the exact math once you have an MoE, but 3.2B active will run on almost anything; at 31.6B total parameters you're looking at needing a pretty large amount of RAM.
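The rough math: with an MoE, all experts have to sit in memory, so RAM scales with the *total* parameter count, while tokens/sec tracks the *active* count. A back-of-the-envelope sketch (the ~10% overhead factor for KV cache and runtime is an assumption):

```python
def model_ram_gb(total_params_b: float, bytes_per_param: float,
                 overhead: float = 1.1) -> float:
    """Weights-only RAM estimate in GB, padded ~10% for KV cache/runtime."""
    return total_params_b * bytes_per_param * overhead

TOTAL = 31.6  # billions of parameters -- every expert must be resident

for name, bpp in [("BF16", 2.0), ("FP8", 1.0), ("4-bit", 0.5)]:
    print(f"{name}: ~{model_ram_gb(TOTAL, bpp):.0f} GB")
# → BF16: ~70 GB
# → FP8: ~35 GB
# → 4-bit: ~17 GB
```

Which is why the 18GB figure reported downthread only makes sense for a ~4-bit quant, and why BF16 won't fit on a 32GB or 64GB Mac.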


Given Mac memory bandwidth, you'll generally want to load the whole thing in RAM. You get a speed benefit from the smaller active expert size, since Mac compute is slow compared to Nvidia hardware. This should be relatively snappy on a Mac, if you can load the entire thing.


Running it on my M4 at 90 tps; it takes 18GB of RAM.


If it uses 18GB of RAM, you're not running the official model (released in BF16 and FP8) but a quantization of unknown quality.

And when you write "M4", do you mean the base M4, not an M4 Pro or M4 Max?


M2 Max @ 17 tps btw



