We put Aida, our LLM-based content moderation system, to the test.
Compared to OpenAI's Moderation LLM, Aida outperformed it on accuracy, a remarkable achievement.
OpenWeb serves more than 170M monthly active users across thousands of publishers. With innovations like Aida, we're building a healthier web. 🌐
Learn more about Aida at https://lnkd.in/dYzvYWqY
Read more about Aida's head-to-head with OpenAI's Moderation LLM here: https://lnkd.in/es4TfY2M
We’re raising the bar for safety on the web 🚀
Aida is OpenWeb’s new LLM-based moderation ecosystem. With this extra layer, we’re making communities healthier across the web at massive scale.
On research benchmarks, Aida is more accurate than OpenAI's content moderation solution as well as other LLMs such as Gemini.
Link in comments below
OpenAI is enhancing trust in digital content with the introduction of watermarks to DALL-E 3's image metadata. 🎨✨
While it's an excellent step towards establishing trust in digital content, remember that watermarks may not be foolproof.
What are your thoughts on using watermarks to verify the authenticity of AI-generated content? Share your insights below! 👇 #OpenAI #DigitalTrust
Read the full article here: https://lnkd.in/dN3sac2B
I am dubbing today #metadatamonday to share all things related to metadata. I spent the weekend writing my first Substack article about image metadata.
The article is really a plea for more social media platforms to give users at least a little more control over how images are used on those platforms, by importing and allowing some metadata (in particular, alt text and captions) to travel with images. It also discusses the controversial topic of why metadata is routinely stripped to begin with, plus a few other related topics.
There are also links to lots of resources, including useful #IPTC tools.
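If you're curious what a platform is actually throwing away, here's a minimal sketch (using Pillow; the filename is a placeholder and the field numbers follow the IPTC IIM spec) that reads the legacy caption and keyword fields from a JPEG:

from PIL import Image, IptcImagePlugin

# Minimal sketch: inspect the IPTC (IIM) fields an image carries before a
# platform strips them. "photo.jpg" is a placeholder filename.
im = Image.open("photo.jpg")
iptc = IptcImagePlugin.getiptcinfo(im) or {}

caption = iptc.get((2, 120))            # Caption/Abstract
keywords = iptc.get((2, 25)) or []      # Keywords: bytes or a list of bytes
if isinstance(keywords, bytes):
    keywords = [keywords]

print("Caption:", caption.decode("utf-8", "replace") if caption else None)
print("Keywords:", [k.decode("utf-8", "replace") for k in keywords])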
#metadatamanagement #metadatamonday #metadatastripping #metadataworkflows
"OpenAI has unveiled Sora, an innovative text-to-video and video-to-video model with the ability to produce high-fidelity video content.
The core idea is an efficient representation: Sora compresses videos into a lower-dimensional latent space and breaks that representation down into small spacetime patches, which act as the model's tokens.
Conditioned on the text prompt, the model generates new latent patches, and the final outputs are produced by decoding those latents back into pixel space.
For those keen on delving into the research, OpenAI's technical report documentation serves as an excellent starting point. Certainly, something worth keeping an eye on! 🙃"
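To make the "visual patches" idea concrete, here's a toy illustration in plain NumPy (nothing to do with OpenAI's actual code, and the patch sizes are made up) of chopping a small video tensor into non-overlapping spacetime patches, much like a vision transformer tokenizes images:

import numpy as np

# Toy video: 16 frames of 64x64 RGB, shape (T, H, W, C)
video = np.random.rand(16, 64, 64, 3)

# Illustrative patch size: 4 frames x 16 x 16 pixels
pt, ph, pw = 4, 16, 16
T, H, W, C = video.shape

# Carve the video into non-overlapping spacetime patches, then flatten each
patches = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
patches = patches.transpose(0, 2, 4, 1, 3, 5, 6)   # (nT, nH, nW, pt, ph, pw, C)
tokens = patches.reshape(-1, pt * ph * pw * C)     # one row per patch "token"

print(tokens.shape)  # (64, 3072): 64 patches, each a 3072-dim vector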
Introducing Magic Blocks 🎉
Ever wished you could speed up your content creation process? That's why I built Magic Blocks. This app generates custom prompts to transform your content—a blog post into a tweet or a scientific paper into an explainer video script—in your style.
Here’s a quick rundown:
1. State the content format you want
2. Get an AI-crafted prompt
3. Add input-output examples for style
4. Input your content and generate
5. Save your favorite blocks for reuse
Try it out for free with your OpenAI or Gemini API key!
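If you're wondering what that looks like under the hood, here's a rough sketch of the pattern (not Magic Blocks' actual code; the model name, prompt, and examples are placeholders): a reusable block is essentially a system prompt plus a few input-output style examples, sent to the API along with your new content.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "block": a transformation prompt plus few-shot style examples (placeholders)
block_prompt = "Rewrite the given blog post as a single punchy tweet, keeping the author's voice."
style_examples = [
    {"role": "user", "content": "<a past blog post>"},
    {"role": "assistant", "content": "<the tweet you actually wrote for it>"},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "system", "content": block_prompt},
        *style_examples,
        {"role": "user", "content": "<the new blog post to transform>"},
    ],
)
print(response.choices[0].message.content)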
P.S. I shared my journey of building Magic Blocks, the tech stack I used, and valuable lessons on prompting LLMs on my personal blog. Link in comments.
Tagging a few people whom I think might like this: Ash Read, Audrey Chia 🚀, Amanda Cua, Si Quan Ong (SQ Ong), Michael Eckstein, 📚 Cedric Chin
~~~
And, yes, the above is generated by Magic Blocks, with some edits from me.
Made some progress on Minute Nav! Just added the YouTube summary feature. Here's how I achieved it: 👀
🆔 First, when the user inputs a YouTube link, we parse it to get the video ID, which is the value of the "v" query parameter (the part after "v=").
🗒️Then, we use the YouTube Transcript API to retrieve transcripts for a given video ID.
🤖Lastly, we use Google's Gemini AI to generate a summary for a given transcript. Initially, I attempted to use Hugging Face models, but they struggled with large token sequences. For now, I'll stick with this approach, but I'm open to suggestions.
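Roughly, the whole pipeline fits in a few lines (a simplified sketch, not the actual Minute Nav code; the Gemini model name is an assumption):

from urllib.parse import urlparse, parse_qs

import google.generativeai as genai
from youtube_transcript_api import YouTubeTranscriptApi

def summarize(youtube_url: str, api_key: str) -> str:
    # 1. Pull the video ID out of the "v" query parameter
    video_id = parse_qs(urlparse(youtube_url).query)["v"][0]

    # 2. Fetch the transcript and join it into one string
    transcript = YouTubeTranscriptApi.get_transcript(video_id)
    text = " ".join(chunk["text"] for chunk in transcript)

    # 3. Ask Gemini for a summary (model name is an assumption)
    genai.configure(api_key=api_key)
    model = genai.GenerativeModel("gemini-1.5-flash")
    return model.generate_content("Summarize this video transcript:\n\n" + text).text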
Next, I'm planning to implement:
📽️Feature that allows you to ask questions based on the video.
📍Navigate you to the exact point in the video that covers the topic.
You can take a look at the project here:
https://lnkd.in/dhmNY8xj
🤖Creating an AI Agent with LangGraph, Llama 3, & Groq
Great 30-minute YouTube walkthrough by Sam Witteveen on converting a LangChain AgentExecutor to a LangGraph agent - and making it a bit more advanced in the process
Video: https://lnkd.in/gWiQpAT5
Code: https://lnkd.in/gCrG9kYY
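For anyone who wants the gist before watching: the core move is swapping AgentExecutor for LangGraph's prebuilt ReAct graph. A minimal sketch of that pattern (not Sam's exact code; the model name and toy tool are assumptions):

from langchain_core.tools import tool
from langchain_groq import ChatGroq
from langgraph.prebuilt import create_react_agent

@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

llm = ChatGroq(model="llama3-70b-8192")         # requires GROQ_API_KEY in the environment
agent = create_react_agent(llm, [word_count])   # prebuilt ReAct graph replaces AgentExecutor

result = agent.invoke(
    {"messages": [("user", "How many words are in 'LangGraph makes agents composable'?")]}
)
print(result["messages"][-1].content)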
My LinkedIn feed is awash with AI-related posts. I can't lie, I'm not as up to speed with this as I feel I should be. As an educator, I critically need to be up to speed and I need to get my thinking straight about how this impacts the learning opportunities I provide and the shape of the curriculum in general.
My initial reaction is to be concerned by rapid change but I think I need to get past that quickly and start to consider positive, AI-integrated ways forward.
I'm wondering... the year-on-year dialogue is of an over-crowded curriculum. It's often the case that learning is more rushed than it should be and that depth of learning comes second to coverage of a weighty curriculum.
My nascent understanding of AI is that it can create content, at speed, given key prompts.
In learning, could this therefore beneficially mean:
* less time writing from scratch and more time analysing, editing and uplevelling language
* more time reading content (both newly generated and previously created) and actively discussing its language, meaning and impact
* more time for oracy and communicative exploration of language, its history, etymology and how best to use it to share a message
* less of a deficit for those who struggle with the mechanics of writing, helping to level the 'playing field' of writing for communication
* readily accessible ways for learners to create pictorial, including moving image, content to bring more abstract concepts to visual life
* rather than the perceived loss of time for human creativity (with AI doing that for us), actually more room in the curriculum for the arts - time to sing, dance, act and create works of art
I want to take a positive stance on what is new, unknown, uncertain (in terms of the ongoing trajectory), potentially devastating but also potentially rejuvenating. It seems to be necessary to work reflectively with these changes rather than fighting against them.
I welcome feedback on my, honestly, very emergent thinking. I'd love to learn more. I need to learn more.
#ai #teaching #learning #change #development #education #curriculumdevelopment
OpenAI’s Sora is going to save brands thousands of pounds 😱
It’s mind-blowing to think creatives now have a platform to bring to life any concept they can dream up (as long as they have a solid prompt).
Here’s 🚀 Isaac Martin's initial reaction to this incredible tool.
Do you agree?
Do you think Sora will live up to the hype?
Let us know below 👇
Ever struggled with multiple results landing in one row from a Claygent or GPT column inside your Clay table?
In this video, I will show you a quick workaround to get rid of this problem.
In the Clay Slack support channel and on calls with our clients, I often see people struggling with this scenario.
If the enrichment you use is a native integration, the results will automatically appear as a list, so you can use the "Write to other table" enrichment to split the entries into individual rows.
But how do you solve it if a Claygent returns multiple names, for example?
To learn how, check out the video 😎
Leave a like & comment if you found this useful and want to see more Clay tips and tricks!