The choice of LLM really matters. In my earlier sample I used the LLAMA3 7B model, and it didn't go as planned: there were minor errors in the returned output, such as broken JSON schema, incorrect code, and hallucinations. When I switched to LLAMA3 30B, those errors just disappeared and the responses were correct, at the cost of a little extra time (< 4 sec per response). Let's see how it performs when we use LLAMA3 300+B or GPT-4o. Link to previous post: https://lnkd.in/gM9RQbzU
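As a side note, those JSON-schema errors from the smaller model can at least be caught before they break downstream code. A minimal sketch of such a guard, with a hypothetical `validate_reply` helper I'm inventing here for illustration (not from the original pipeline):

```python
import json

def validate_reply(raw: str, required_keys: set[str]) -> dict:
    """Parse the model's raw output and check that the expected keys exist.

    Raises ValueError if the output is not the JSON shape we asked for,
    which is the kind of minor schema error a smaller model tends to make.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"model did not return valid JSON: {e}")
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"model reply is missing keys: {missing}")
    return data

# A well-formed reply passes through unchanged:
good = '{"answer": "42", "confidence": 0.9}'
print(validate_reply(good, {"answer", "confidence"}))

# A truncated reply (a typical small-model failure) gets rejected:
bad = '{"answer": "42", "confidence":'
try:
    validate_reply(bad, {"answer", "confidence"})
except ValueError as e:
    print("rejected:", e)
```

With a check like this in place, a schema failure can trigger a retry instead of silently corrupting the result.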