Hacker News

Exploit a memory safety issue in the tokenizer or other parts of your LLM infra written in a native language.



??? With weights?


There was a buffer overflow (or some similar exploit) in llama.cpp and the GGUF format. It has been fixed now, but it's definitely possible. Also, weights distributed as Python pickles can run arbitrary code.
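To see why pickles are dangerous as a distribution format: pickle lets any object define `__reduce__`, which names a callable that `pickle.loads` will invoke during deserialization. A minimal sketch (using a benign `eval` call where a real attacker would use `os.system` or similar):

```python
import pickle

class Payload:
    def __reduce__(self):
        # pickle records (callable, args) and calls it at load time;
        # eval("2 + 2") is a benign stand-in for attacker code
        return (eval, ("2 + 2",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # runs eval during deserialization
print(result)  # → 4
```

Note that loading the blob never constructs a `Payload` at all; the deserializer just runs whatever callable the file names, which is the whole problem.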


Distributing anything as python pickles seems utterly batshit to me.


Completely agree.


There are plenty of exploits where the payload is just "data" read by some vulnerable program (PDF readers, image viewers, browsers, compression tools, messaging apps, etc.).


Yes, there's a reason weights are now distributed as "safetensors" files. Malicious weights files in the old formats are possible, and while I haven't seen evidence of the new format being exploitable, I wouldn't be surprised if someone figures out how to do it eventually.
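The reason safetensors is considered safer is that the format is pure data: an 8-byte little-endian header length, a JSON header describing tensor names, dtypes, shapes, and byte offsets, then raw tensor bytes. There is no code path to execute. A hand-rolled sketch of reading the header with only the stdlib (ignoring details like the optional metadata section and size limits):

```python
import json, struct

def read_safetensors_header(blob: bytes) -> dict:
    # safetensors layout: u64 LE header length, JSON header, raw bytes
    (header_len,) = struct.unpack("<Q", blob[:8])
    return json.loads(blob[8 : 8 + header_len])

# Build a minimal file by hand: one hypothetical fp32 tensor "w" of 2 values
header = json.dumps(
    {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
).encode()
blob = struct.pack("<Q", len(header)) + header + struct.pack("<ff", 1.0, 2.0)

meta = read_safetensors_header(blob)
print(meta["w"]["shape"])  # → [2]
```

The attack surface reduces to a JSON parser and offset arithmetic, which is exactly why remaining risk (if any) would be in native readers mishandling hostile offsets or lengths, not in arbitrary code execution by design.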



