Lenovo lambasts video codec pools for incentivizing revenues
A live webinar entitled ‘Who leads the VVC patent race?’ provided no objective answers to this clearly rhetorical question, but would we expect anything less from a topic as IP-charged as video codecs? Hosted by New York-based data analytics firm LexisNexis, the session presented a graph breaking down VVC patent families regionally—data we had not previously laid eyes on—courtesy of IPlytics. It shows an overwhelming (yet expected) dominance of VVC patents in North America, across all four categories. #video #codec #VVC
Faultline’s Post
-
"Subtle and classy", a great way to describe many of the features of the #ThinkPad X1 Carbon Gen 12. I could go with "the promise of powerful, portable productivity", but then we might have to rename it to honor the alliteration 😄 From this review in CGMagazine: https://lnkd.in/eixpQkmC
-
🚀 Day 2 of 25 Days of RTL Challenge 🚀 Hey LinkedIn fam! 👋 Today's RTL journey included a deep dive into D flip-flops, exploring their role in sequential logic at the Register-Transfer Level (RTL). 💻🔍 Explored both asynchronous and synchronous reset variants, understanding their impact on digital designs. Real-world applications in synchronous systems are becoming clearer, and I'm excited about the progress! 🌐💡 Can't wait to share more insights! #25DaysOfRTL #DigitalDesign #DFlipFlop #LearningJourney #VLSI🚀
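The difference between the two reset styles can be sketched in plain Python (this is a behavioral model, not RTL, and the function names are mine): a synchronous reset is just another data input sampled at the clock edge, while an asynchronous reset clears the output immediately, with no edge involved.

```python
def step_sync(q, d, rst):
    """One rising clock edge of a DFF with SYNCHRONOUS reset:
    rst is sampled at the edge like any other input."""
    return 0 if rst else d

def async_clear(q, rst):
    """ASYNCHRONOUS reset path: asserting rst forces q low at once,
    independent of the clock; otherwise q holds its value."""
    return 0 if rst else q

# at an edge, sync reset wins over D
q = step_sync(q=1, d=1, rst=1)   # -> 0
# between edges, async reset still clears the register
q = async_clear(q=1, rst=1)      # -> 0
```

The practical consequence modeled here: an async-reset flop reacts to a reset pulse even if no clock edge arrives while the pulse is high, whereas a sync-reset flop would miss it.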
-
Creative Technologist, CIO, CAIO, CTO, COO, FOOMO, Senior Pre & Post Sales Engineer & Architect in XR, CV, AI, ML, OTT, OVP, SaaS, EdTech, Sports, Video, Games, Entertainment, Immersive & Volumetric.
2.5 hours of speech transcribed in just over 1.5 minutes, local style...
Covering the latest in AI R&D • ML-Engineer • MIT Lecturer • Building AlphaSignal, a newsletter read by 200,000+ AI engineers.
You can now transcribe 2.5 hours of audio in 98 seconds, locally. A new implementation called insanely-fast-whisper is blowing up on GitHub. It works on Mac or Nvidia GPUs and uses the Whisper + Pyannote libraries to speed up transcription and speaker segmentation. Here's how you can use it:
pip install insanely-fast-whisper
insanely-fast-whisper --file-name <FILE NAME or URL> --batch-size 2 --device-id mps --hf_token <HF TOKEN>
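Much of the speedup comes from chunked, batched inference: the audio is split into fixed-length windows that are decoded in parallel batches rather than sequentially. A minimal sketch of that chunking arithmetic (the helper name is mine; `chunk_length_s` and `batch_size` mirror the CLI's batching knobs, with 30 s being a common Whisper window length):

```python
import math

def plan_chunks(duration_s, chunk_length_s=30.0, batch_size=2):
    """Split an audio duration into fixed-length windows and group
    consecutive windows into batches for parallel decoding."""
    n_chunks = math.ceil(duration_s / chunk_length_s)
    chunks = [(i * chunk_length_s, min((i + 1) * chunk_length_s, duration_s))
              for i in range(n_chunks)]
    return [chunks[i:i + batch_size] for i in range(0, len(chunks), batch_size)]

# 2.5 hours -> 9000 s -> 300 windows of 30 s -> 150 batches of 2
batches = plan_chunks(2.5 * 3600)
```

With a larger `--batch-size` (more VRAM permitting), the same audio yields fewer, wider batches and finishes correspondingly faster.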
To view or add a comment, sign in
-
2 x LinkedIn Top Voice | Entrepreneur | Advisory Board @ CCW | Generative AI, CX, EX, UX, MX, ML and AI | MACS CP | Global Experience
It wasn’t that long ago that 2.5 hours of speech transcription required ~1.25 HOURS of CPU. Maybe 4 years ago, maybe 5? Either way, this model earns the name “insanely fast” by getting that down to 1.5 MINUTES! As long as the accuracy is reasonable for business use cases, there will be a significant reduction in transcription costs in 2024, coupled with an explosion in use cases that were previously too slow or expensive to be viable. #anotherweekanothermodel #AI #CX #EX
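Sanity-checking those numbers with the figures quoted above (2.5 hours of audio, ~1.25 hours of CPU then, 98 seconds now):

```python
audio_s = 2.5 * 3600      # 9000 s of speech
old_cpu_s = 1.25 * 3600   # ~4500 s of compute a few years ago
new_s = 98                # reported wall time today

old_rtf = audio_s / old_cpu_s   # ~2x faster than real time back then
new_rtf = audio_s / new_s       # ~92x faster than real time now
speedup = old_cpu_s / new_s     # ~46x end-to-end improvement
```

So "insanely fast" works out to roughly a 46x speedup over the old CPU baseline, and about 92x faster than real time.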
-
Vulkan Video Encode is finally released! The industry has been suffering from fragmented hardware-accelerated video encode/decode APIs. With this new release, we are getting closer to a truly working cross-platform video encode/decode API. Next is AV1! https://lnkd.in/g8v2_CrZ
NVIDIA First To Offer Driver Support For New Vulkan H.265 & H.264 Video Encode Extensions
wccftech.com
-
Hey folks! Ever heard of the "Divide and Conquer" trick for fixing code? It's like when you're playing hide and seek and instead of searching the whole house at once, you split it up room by room until you find what you're looking for. Pretty neat, right? Well, let me tell you about a recent adventure I had using this method to squash some bugs in my code.

So, picture this: I'm knee-deep in a project when the client hits me with a bug report. Turns out, the stepper motors are acting funky when the number of spins goes up. Cue the detective music because it's time to figure out what's going on. Just to provide context, I was generating the PWM using the timer handler because the GPIO wasn't capable of PWM. I mean, it made sense, right?

So, I whipped out the oscilloscope to take a peek at the PWM signals. And wouldn't you know it, they were all over the place, like a disco party gone wrong. After some digging, I realized it wasn't just a simple case of conflicting priorities with other modules. I mean, I even gave the PWM interrupt top priority, but no dice. Back to square one.

So, I decided to strip everything down and start from scratch. Slowly but surely, I began re-enabling modules one by one. And wouldn't you know it, when I switched on the SoftSPI module, things went haywire again. Bingo! Turns out, there was a sneaky race condition between the SoftSPI GPIOs and PWM GPIO. Who would've thought a little GPIO write could cause such chaos, huh?

Anyway, the moral of the story? Sometimes you gotta break things down to build them back up. The "Divide and Conquer" method might sound simple, but trust me, it's a lifesaver when you're knee-deep in code chaos. #ash #divideandconquer #debugging #embeddedsystems
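The module-by-module hunt above can even be turned into a binary search: instead of re-enabling one module at a time, re-enable half the suspects, re-test, and keep halving. A minimal sketch (function and module names are mine; it assumes the feature under test stays enabled throughout, a single culprit module, and that the bug reproduces whenever the culprit is enabled):

```python
def find_culprit(modules, bug_appears):
    """Binary-search which module triggers the bug: enable half of the
    remaining suspects, re-run the test, and narrow to the failing half."""
    suspects = list(modules)
    while len(suspects) > 1:
        half = suspects[: len(suspects) // 2]
        if bug_appears(half):   # bug reproduces with only this half enabled
            suspects = half
        else:                   # culprit must be among the others
            suspects = suspects[len(suspects) // 2:]
    return suspects[0]

# toy reproduction of the story: the bug shows up whenever SoftSPI is on
modules = ["uart", "adc", "softspi", "watchdog"]
culprit = find_culprit(modules, lambda enabled: "softspi" in enabled)
```

One-at-a-time re-enabling takes up to N test runs; this takes about log2(N), which matters when each "test run" means reflashing the board and watching the scope.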
-
Excellent article by Adam Taylor covering the Tria ZUBoard development board. If you've been following my journey to deploy the MediaPipe models to embedded platforms, this is the embedded board I will be targeting.
Avnet ZUBoard 1CG: The Swiss Army Knife of Development Boards
hackster.io
-
Tech Explorer - Generative AI, LLMs, AI Design and workflows, 2D/3D AR and VR experience, app design
Stable Cascade has been supported in ComfyUI for a few hours now, and with the FP16 models it's possible to generate UHD images directly even on 8GB VRAM GPUs. I find it weaker at prompt following and text rendering, which makes it not that different from SDXL, but it's still fun to play with 😊 Let's see what the wizards of optimization and fine-tuning will bring in the coming days and weeks! (This is a completely "stock" Comfy workflow, and some people are already doing some hires fix and SDXL post-processing).