Andrinandrasana David Rasamoelina’s Post

Lead Machine Learning Scientist

DoRA (Weight-Decomposed Low-Rank Adaptation) emerges as a significant step beyond LoRA (Low-Rank Adaptation) for finetuning pretrained models like #LLMs and vision transformers. By decomposing each pretrained weight matrix into a magnitude component and a directional component, and applying the low-rank update only to the direction, DoRA offers a more nuanced approach, enabling more precise adjustments with even fewer trainable parameters. Even at lower ranks than LoRA, this method matches or outperforms LoRA while maintaining efficiency, making it a promising tool for researchers and practitioners. By pushing the boundaries of what's possible with fewer parameters, DoRA paves the way for more accessible and sustainable AI development. Learn more about this technique at https://buff.ly/3PbWVMX #AI #MachineLearning #DoRA #Innovation #LLM
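To make the decomposition concrete, here is a minimal PyTorch sketch of the idea: the frozen pretrained weight W0 supplies the initial direction, a LoRA-style low-rank product B @ A perturbs that direction, and a learnable per-column magnitude vector m rescales the normalized result. The class name DoRALinear, the rank, and the initialization details are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class DoRALinear(nn.Module):
    """Sketch of a DoRA-adapted linear layer:
    W' = m * (W0 + B @ A) / ||W0 + B @ A||_col
    Only A, B, and m are trained; W0 stays frozen.
    """
    def __init__(self, w0: torch.Tensor, rank: int = 4):
        super().__init__()
        out_dim, in_dim = w0.shape
        # Frozen pretrained weight (the fixed base direction)
        self.w0 = nn.Parameter(w0, requires_grad=False)
        # LoRA-style low-rank directional update; B starts at zero
        # so the layer initially reproduces the pretrained weight.
        self.A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_dim, rank))
        # Learnable magnitude, initialized to the column norms of W0
        self.m = nn.Parameter(w0.norm(dim=0, keepdim=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.w0 + self.B @ self.A          # updated direction
        v = v / v.norm(dim=0, keepdim=True)    # normalize each column
        return x @ (self.m * v).T              # rescale by learned magnitude
```

Training then proceeds as with LoRA: gradients flow only through A, B, and m, and after finetuning the adapted weight m * (W0 + BA) / ||W0 + BA|| can be folded back into an ordinary linear layer, so no extra cost is paid at inference.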

Godwin Josh

Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer

The evolution from LoRA to DoRA is intriguing, a refined approach to fine-tuning pretrained models that recalls how transfer learning once reshaped model training. Given the efficiency gains, I'm curious about applications beyond traditional domains: how do you foresee DoRA's impact on emerging fields or unconventional use cases, and could it open doors for AI in areas previously considered too resource-intensive? Your perspective on the broader implications would be valuable.
