Josh Chessman’s Post

There are a lot of LLMs out there, and seemingly more every day. I've been playing with Ollama, which lets you run LLMs locally. I started with the Mistral model and played around a bit. I wouldn't say it was amazing, but it wasn't significantly better or worse than anything else I've tried. It's definitely not as robust as the bigger models, but it's perfectly functional. Questions about things like the differences between CNAPP and EDR got decently thought-out answers. Questions of a less LLM-friendly nature (such as the statistical odds of various rolls in the game of Yahtzee) got less accurate answers, and sometimes flat-out wrong ones.

There are a whole bunch of models available (llama2, codellama, sqlcoder, wizard-math, and many more), and I've only played with a couple of them so far, but based on how well they currently work, we are safe from LLMs totally replacing people. These are all openly available models, so their training data and capabilities may not be at the level of more complicated and advanced (not to mention expensive) models, and their ability to answer questions, especially as things get more complicated, is very limited. SQLCoder sometimes gave great answers and sometimes seemed to just wander around stringing random SQL statements together that had nothing to do with the question.

While I'm looking forward to playing with these and other models, I'm also managing my expectations: it's pretty clear we are not on the cusp of a revolution that will render humanity redundant. #ai #llm
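The nice thing about the dice-odds questions is that the right answers are easy to check by hand. As one illustrative example (the post doesn't say which specific question the model got wrong, so this is just a representative case): the odds of rolling a Yahtzee, five of a kind, in a single roll of five dice is 6/6^5 = 1/1296. A short script can compute that directly and confirm it by brute-force enumeration:

```python
from fractions import Fraction
from itertools import product

def p_yahtzee_single_roll(sides: int = 6, dice: int = 5) -> Fraction:
    """Probability that all dice show the same face in one roll.

    There are `sides` favorable outcomes (all ones, all twos, ...)
    out of sides**dice equally likely rolls.
    """
    return Fraction(sides, sides ** dice)

def p_yahtzee_brute_force(sides: int = 6, dice: int = 5) -> Fraction:
    """Sanity check: enumerate every possible roll and count five-of-a-kinds."""
    hits = sum(
        1
        for roll in product(range(1, sides + 1), repeat=dice)
        if len(set(roll)) == 1
    )
    return Fraction(hits, sides ** dice)

print(p_yahtzee_single_roll())   # 1/1296
print(p_yahtzee_brute_force())   # 1/1296
```

This is exactly the kind of closed-form question where a wrong LLM answer is trivially detectable, which makes it a handy smoke test for a new model.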

Ollama

ollama.com
