
LLM inference is mostly memory bound. A 12-channel Epyc Genoa with 4800 MT/s DDR5 RAM clocks in at 460.8 GB/s theoretical. That's more than the M3 Max's 400 GB/s, and only part of the M3 Max's bandwidth is accessible to the CPU anyway.
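For reference, the theoretical peak is just channels x transfer rate x channel width; a quick Python sketch (assuming the standard 64-bit, i.e. 8-byte, DDR channel width):

    # Theoretical peak = channels * MT/s * bytes per transfer.
    # A DDR5 DIMM channel is 64 bits (8 bytes) wide in total.
    def peak_gbps(channels: int, mts: int, bytes_per_xfer: int = 8) -> float:
        return channels * mts * bytes_per_xfer / 1000  # GB/s

    print(peak_gbps(12, 4800))  # Epyc Genoa: 460.8 GB/s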

And in an Epyc system you can plug in much more memory when you need capacity, or add PCIe GPUs when you need a smaller pool of faster memory.

Threadripper PRO is only 8-channel, but with memory overclocking it might reach numbers similar to those too.




I'm curious how the newer consumer Ryzens might fare. With LPDDR5X they have >100 GB/s memory bandwidth and the GPUs have been improved quite a bit (16 TFLOPS FP16 nominal in the 780M). There are likely all kinds of software problems but setting that aside the perf/$ and perf/watt might be decent.


Consumer Ryzens only have two-channel memory controllers, and running two dual-rank (double-sided) DIMMs per channel -- which you'd need to get enough RAM for LLMs -- drops the memory bandwidth dramatically, almost all the way back down to DDR4 speeds.


Yup. Strix Halo will change this, with a 256-bit memory bus (four channels) that both the CPU and GPU can access. However, it's only likely to show up in laptop designs, probably with soldered-down RAM to reduce timing and board-space issues, so it won't be easy to get enough memory for large LLMs either way. But it should be faster than previous models for LLM work.
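The same back-of-envelope math shows the gap. The 3600 MT/s figure for four dual-rank DIMMs and the LPDDR5X-8000 figure for Strix Halo are assumptions; actual speeds depend on the board and the final specs:

    # Peak = channels * MT/s * 8 bytes (64-bit channels).
    def peak_gbps(channels, mts):
        return channels * mts * 8 / 1000

    print(peak_gbps(2, 6000))  # desktop Ryzen, 2 DIMMs:          96.0 GB/s
    print(peak_gbps(2, 3600))  # 4 dual-rank DIMMs, downclocked:  57.6 GB/s
    print(peak_gbps(4, 8000))  # Strix Halo, 256-bit LPDDR5X:    256.0 GB/s

Even the Strix Halo figure would be only a bit over half of a 12-channel Epyc or an M3 Max.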


For consumer Ryzen to pencil out, it would require a cluster of APU-equipped machines with the model striped across them. Given, say, 16 GB of model per machine and 60 GB/s of actual memory bandwidth at $500 per box, it's favorable vs. A100s if the software is workable (my guess is it's not today, due to AMD's spotty support). This is for inference; training would probably be too slow due to interconnect overhead.
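A rough throughput ceiling for such a cluster, as a sketch only: it assumes memory-bound decoding where every weight byte is read once per token, and it ignores interconnect latency and KV-cache traffic. The 4-machine / ~64 GB model split below is a made-up example:

    # Memory-bound decode: each token reads every weight byte once, so
    # per-shard time = shard_bytes / bandwidth. With layers striped
    # across machines, the shards run one after another per token.
    def tokens_per_sec(shard_gb, bw_gbps, machines):
        return 1 / (machines * (shard_gb / bw_gbps))

    # 16 GB shard at 60 GB/s, 4 machines (~64 GB model): ~0.94 tok/s
    print(tokens_per_sec(16, 60, 4))

Batching several streams through the pipeline can raise aggregate throughput, but single-stream latency is bounded by the sum of the shard read times.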


If Epycs are too pricey, there's Threadripper PRO with 8 channels, the AMD Siena/8000 series with 6 channels, and plain Threadripper with 4 channels.


That's interesting. It's about the same speed as the M3 Max then.

Have you tested it yourself?


Nope, but this guy has a similar build: https://www.reddit.com/r/LocalLLaMA/comments/1bt8kc9/compari...

It seems to reach only a little above half the theoretical speed, and scale only up to 32 threads for some reason. Might be a temporary software limitation or something more fundamental.
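Falling well short of theoretical is normal; a crude way to see what a box actually sustains is a STREAM-style copy test. A minimal numpy sketch (single-threaded, so it will understate a many-channel part; the real STREAM benchmark with one thread per memory channel is the proper tool):

    import time
    import numpy as np

    n = 100_000_000                # 800 MB per array, far past any cache
    a, b = np.empty(n), np.ones(n)

    iters = 10
    start = time.perf_counter()
    for _ in range(iters):
        np.copyto(a, b)            # STREAM "copy": read b, write a
    elapsed = time.perf_counter() - start

    # Counts 16 bytes/element of intended traffic; write-allocate can
    # add extra hidden reads, so treat the number as approximate.
    print(f"{iters * 2 * 8 * n / elapsed / 1e9:.1f} GB/s")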


Should be at least twice the speed of the M3 Max, as the M3 CPU and GPU each only get about half of the memory bandwidth available to the package. The M3 Max can't take full advantage of its memory bandwidth unless the CPU, GPU, and NPU are all working at the same time.


I tried looking for some info on this, but could only find the M1 Max review over at AnandTech, which managed to push 200 GB/s when using multiple CPU cores; I couldn't really get any realistic numbers for just the GPU.

Do you have a source for the GPU only having access to half the bandwidth of the memory?



