🤗 ❤️ (Grounding) 🦕 Yesterday the famous Grounding DINO model landed on the Hugging Face transformers main branch! You're now just one pip install away. Grounding DINO is a zero-shot object detection model, so it isn't constrained to a closed set of labels and can detect a wide range of objects. The model was released last year and got quite a lot of attention for its capabilities and for possible combos with other models like Segment Anything (SAM). I've created a simple demo so you can try the model; check out the links below. Checkpoints: https://lnkd.in/dyFGBGk3 GroundingDINO Demo: https://lnkd.in/dfT9MQrt
Are the features extracted from an image using DINO or Grounding DINO the same, or has a modification been made to the architecture?
I prefer “hotdog” and “not hotdog”
That’s great news! I was waiting for Grounding DINO in transformers!
Great job Eduardo Pacheco. Will it be possible to fine-tune Grounding DINO using the Hugging Face interface?
Eduardo Pacheco Interesting! Could this automatically annotate hundreds or thousands of similar images? And could the annotations be saved and formatted for a model like YOLOv9?
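On the YOLO-format question: YOLO label files store one `class cx cy w h` line per box, with center coordinates and sizes normalized to the image dimensions. A minimal conversion sketch from the absolute `(xmin, ymin, xmax, ymax)` boxes a detector typically outputs (the function name is my own, for illustration):

```python
def to_yolo(box, img_w, img_h):
    """Convert an absolute (xmin, ymin, xmax, ymax) box into YOLO's
    normalized (cx, cy, w, h) format."""
    xmin, ymin, xmax, ymax = box
    cx = (xmin + xmax) / 2 / img_w   # normalized box center x
    cy = (ymin + ymax) / 2 / img_h   # normalized box center y
    w = (xmax - xmin) / img_w        # normalized box width
    h = (ymax - ymin) / img_h        # normalized box height
    return cx, cy, w, h

# Example: a 200x100 box at (100, 50) in a 640x480 image
print(to_yolo((100, 50, 300, 150), 640, 480))
# → (0.3125, 0.2083..., 0.3125, 0.2083...)
```

Each detection would then be written as a line like `0 0.3125 0.2083 0.3125 0.2083` into a per-image `.txt` file, with `0` being the class index you assign to that text prompt.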
Thanks for the huge work! Can't wait to try it.
We can fine-tune the model more easily now, I guess.
What's the difference between SAM and DINO?