Malted AI’s Post


This weekend our Chief of Engineering Federico Rossetto gave a talk at the 2024 International Workshop on Efficient Gen AI, sponsored by the Edinburgh Generative AI Lab (GAIL). It was a great space to discuss insights on improving the training and deployment efficiency of Large Language Models and Large Multimodal Models.

The development of optimised Small Language Models presents several opportunities for businesses to implement fit-for-purpose solutions at a fraction of the cost. As demand for AI adoption grows across businesses, energy consumption, cost, systems integration and AI effectiveness are some of the challenges companies need to navigate to unlock the value of the technology.

Generative AI is one of the most significant technological advancements of recent decades. However, Large Language Models are mostly general-purpose: off-the-shelf Gen AI models might not suit specific business needs, and adapting them requires fine-tuning by experts and human-generated labels. This presents further challenges to adopting and scaling the technology. By using techniques such as knowledge distillation, we are working towards implementing AI at a scale that hasn't been possible before.

Thank you to Antreas Antoniou and Edoardo Ponti for inviting us and creating opportunities to share knowledge and discuss challenges around such an exciting technology.
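For readers curious about the technique mentioned above, below is a minimal, illustrative sketch of response-based knowledge distillation: a small "student" model is trained to match a large "teacher" model's softened output distribution alongside the usual hard labels. The model shapes, hyperparameters and loss weighting are assumptions for illustration only, not a description of Malted AI's actual pipeline.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soften both distributions with the same temperature; the KL term
    # pulls the student towards the teacher's full output distribution.
    soft_teacher = F.log_softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kd_term = F.kl_div(soft_student, soft_teacher, reduction="batchmean",
                       log_target=True) * (temperature ** 2)
    # Standard cross-entropy against the hard labels keeps the student grounded.
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1 - alpha) * ce_term

# Toy usage: logits from a frozen large teacher guide a small student.
batch_size, vocab_size = 4, 32000                  # illustrative sizes
teacher_logits = torch.randn(batch_size, vocab_size)
student_logits = torch.randn(batch_size, vocab_size, requires_grad=True)
labels = torch.randint(0, vocab_size, (batch_size,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()

The blended loss lets the compact student learn from the teacher's richer, softened predictions rather than from hard labels alone, which is one reason distilled small models can approach a larger model's quality at a fraction of the inference cost.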
