DEEPSEEK generates high-quality insights in four key stages:
-
Great EOY edition of Ali Afridi's Sandhill.io newsletter (I highly encourage anyone to subscribe), with lots of predictions about AI in 2025 and reviews of how the ecosystem and capabilities have changed within just 12 months. (https://lnkd.in/gArik5DR) While my day-to-day at Scout Space doesn't involve any coding, I still try to make time outside of work to tinker on AI-based prototypes and skim research to stay "plugged in" as best as I can. In addition, through co-working with other startups via our Chicago-based initiative (https://density.ventures/), new tools and capabilities cross-pollinate fairly quickly among teams, which is really helpful. Here's a short list of the AI tools/trends that had the most profound impact on how I build things in 2024:

#1 - Switching to Cursor.com from VS Code. While it was very hacky early on (as any startup product should be), today their native AI capability integration (keyboard shortcuts, code change highlights, agent mode, etc.) has changed software development completely for me. I often find myself "directing" my code rather than writing lines directly. And because it's a VS Code fork, the transition is seamless. The Cursor team has done a phenomenal job.

#2 - Inference cost reduction. Based on estimates I've seen, the cost per token has decreased ~200x in the last two years, and that feels about right since I was building with AI APIs back in 2022 when it was very pricey. This reduction lets me try more ambitious ideas and Chain-of-Thought experiments that would have been fairly expensive even just a year ago (a rough cost sketch follows below).

#3 - Advanced voice modes. I have found myself having conversations with OpenAI's advanced voice mode while walking through the city (asking about architecture, history, etc.) or using it in real time as I'm thinking about a new product or strategic approach. It's also really nice to have it sitting next to you while you're reading a book, to quickly ask questions about topics.
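To make the ~200x figure concrete, here is a minimal back-of-the-envelope sketch. All prices and token counts below are hypothetical placeholders for illustration, not quotes from any provider.

```python
# Hypothetical arithmetic: what a ~200x cost-per-token drop means for a
# batch of Chain-of-Thought experiments. All numbers are illustrative
# placeholders, not real provider prices.

PRICE_PER_1K_TOKENS_2022 = 0.06                              # assumed 2022-era price, USD
PRICE_PER_1K_TOKENS_2024 = PRICE_PER_1K_TOKENS_2022 / 200    # ~200x cheaper

TOKENS_PER_COT_RUN = 4_000    # prompt + multi-step reasoning + answer
NUM_RUNS = 10_000             # a reasonably ambitious experiment

def experiment_cost(price_per_1k: float) -> float:
    """Total cost in USD for NUM_RUNS completions at a given per-1K-token price."""
    return NUM_RUNS * TOKENS_PER_COT_RUN / 1_000 * price_per_1k

print(f"2022-era cost: ${experiment_cost(PRICE_PER_1K_TOKENS_2022):,.2f}")  # $2,400.00
print(f"2024-era cost: ${experiment_cost(PRICE_PER_1K_TOKENS_2024):,.2f}")  # $12.00
```

Under these assumed numbers, an experiment that once cost thousands of dollars drops to pocket change, which is what makes the more ambitious ideas feasible.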
-
If DeepSeek and R1 teach us anything, it is that being model-agnostic has never been more important. Big breakthroughs will continue to emerge. This week it was DeepSeek; next week it could be any of the major American players. The landscape is evolving faster than any single entity can monopolize it. You should benefit as these foundation models become smarter, faster, and more commoditized.
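In practice, "model-agnostic" usually means putting a thin interface between your application and whichever provider is best this week. A minimal sketch of that idea, with class and method names that are purely illustrative (not any particular SDK's API):

```python
# Minimal sketch of a model-agnostic layer: application code depends on a
# small interface, and concrete providers (DeepSeek, OpenAI, a local model,
# ...) are swappable behind it. Names here are illustrative only.
from abc import ABC, abstractmethod

class ChatModel(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class DeepSeekModel(ChatModel):
    def complete(self, prompt: str) -> str:
        # call DeepSeek's API here (details omitted in this sketch)
        raise NotImplementedError

class OpenAIModel(ChatModel):
    def complete(self, prompt: str) -> str:
        # call OpenAI's API here (details omitted in this sketch)
        raise NotImplementedError

def summarize(model: ChatModel, text: str) -> str:
    # Application logic only knows about ChatModel, so next week's
    # breakthrough model is a one-line swap at the call site.
    return model.complete(f"Summarize in one sentence:\n{text}")
```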
-
CIP is living up to its name: we're building tools for collective intelligence. Here's a bit more on what we're working on: https://lnkd.in/eG4BgaRk

1. Scenario Construction. This tool will help policymakers generate scenarios, stress-test policies, and gather public input in high-uncertainty settings. Useful for collective deliberation on complex, uncertain topics like AI governance.

2. Open-Source Collective Constitutional AI (OSCCAI). Expanding on our CCAI project with @AnthropicAI, we're building an open-source platform that allows communities around the world to create their own constitutions and fine-tune models.

3. Collective Intelligence with Agents (CIwA). Imagine AI agents representing different areas of expertise, deliberating on complex topics on your behalf. Anticipating a more agentic world, we are setting up a framework for the collective governance of deliberative agents.

4. Voice of Nature. AI agents advocating for natural entities (e.g., rainforests, rivers) in human discussions. By combining environmental data with LLMs, these agents give voice to non-human stakeholders in critical conversations.

These tools are early experiments in our mission to enable better collective input into important decisions. Interested in contributing? In addition to hiring for a founding engineer, we're also looking for project partners and funders to help us build these tools for collective intelligence. Reach out!
-
This paper from DeepMind is very interesting in that it clearly distinguishes between the LLM not knowing the ground truth and variability in the ground truth itself (caused by, e.g., an ambiguous prompt like "who's the president?"). One immediate use case is when we use LLMs for data labeling. If we generate 10 responses with 10 different retrievals and one of them happens to be very similar to the ground truth, it's not clear that all the other ones were wrong, simply because we don't have all the possible ground truths. https://lnkd.in/efa9bcQj
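An illustrative sketch of the labeling scenario (my own toy framing, not the paper's method): sample several responses with different retrievals and compare how much the model disagrees with itself versus how much acceptable references disagree with each other. The labels and diversity measure below are hypothetical.

```python
# Toy illustration: separating "the model is uncertain" from "the ground truth
# itself varies" when using an LLM as a labeler. Not the paper's method.
import math
from collections import Counter

def normalized_entropy(labels: list[str]) -> float:
    """Rough diversity score in [0, 1]: 0 = full agreement, 1 = maximal spread."""
    counts = Counter(labels)
    n = len(labels)
    if len(counts) <= 1:
        return 0.0
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(len(counts))

# 10 responses sampled with 10 different retrievals (hypothetical labels)
model_labels = ["A", "A", "B", "A", "B", "A", "A", "B", "A", "A"]
# Multiple acceptable references for the same ambiguous question
reference_labels = ["A", "B", "A", "B", "A", "B"]

print("model spread:    ", normalized_entropy(model_labels))
print("reference spread:", normalized_entropy(reference_labels))
# If the reference spread is already high, marking the "other nine" responses
# as wrong may be unfair: several could match a valid alternative ground truth.
```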
-
Exciting development. AI just became more efficient and accessible. "If you want to host DeepSeek V3 in the EU cost-effectively, you will soon be able to try it in Nebius AI Studio and decide for yourself how it compares to closed alternatives using the Studio’s Playground. We’ll add V3 this week, with R1 coming soon after. [...] It’s safe to say the “Chinese AI New Year” began with a bang a full month early this year. After all, the Year of the Wood Snake signifies “a time of transformation, growth, and introspection.” This applies to both up-and-coming startups and established incumbents in the genAI space. The question is no longer whether open-source AI will catch up, but rather how quickly it will lap the field — and who will harness it for the greatest impact."
DeepSeek R1 is sending waves through the community, isn't it? We outlined why even before this turmoil began: https://lnkd.in/eSe7pciV Introducing our guest author, Prof. Dr. Ivan Yamshchikov, though some of you are already familiar with him. He gets to the very roots of where DeepSeek's success grows from. This time, the Chinese New Year indeed kicked off earlier than usual. #DeepSeek #opensource #LLMs #largemodels
-
You should not add 1 before log-transforming zeros. If you don't believe me, listen to these two experts on how to make better decisions using log-transformed data. This conversation was produced by NotebookLM based on our discussion about the Log of Zero problem at Data Duets (dataduets.com). Duygu Dagli and I have now added a podcast-style conversation to each of our articles. All audio is raw/unedited. The conversations are usually fun (sometimes for odd reasons). The model adds (1) examples we don't have in the original content and (2) light banter and some jokes. The examples are hit or miss.

So, besides the usual deep and reinforcement learning backend, what does NotebookLM do? (based on Steven Johnson's description on the Vergecast)

1. Start with a draft and revise it
2. Generate a detailed script of the podcast
3. Critique the script and create a revised version
4. Add disfluencies (um, uh, like, you know, c-c-can, sssssee...) to sound convincingly human
5. Apply Google's latest text-to-speech Gemini model to add intonation, emphasis, and pacing
6. Have fun, and don't add 1 to your variables before applying the log transformation.

#notebooklm #datascience #logofzero #dataduets
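On that last point, a tiny synthetic illustration of the Log of Zero problem (my own toy example, assuming made-up data; see dataduets.com for the full argument): regressing on log(y + c) makes the estimated slope depend on the arbitrary constant c you picked.

```python
# Synthetic illustration: with zeros in the outcome, the slope from a
# regression on log(y + c) changes with the arbitrary constant c, even
# though the data stay the same. Data below are made up.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 3, size=2_000)
y = rng.poisson(np.exp(0.5 * x))   # outcome with genuine zeros (purchases, clicks, ...)

for c in (0.001, 1.0, 10.0):
    slope = np.polyfit(x, np.log(y + c), 1)[0]
    print(f"log(y + {c:>6}): estimated slope = {slope:.3f}")
# The "elasticity" you report shifts with c, which is the core of the problem.
```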
-
Here’s everything you need to know about DeepSeek:
-
Another insightful edition of One Useful Thing by Ethan Mollick on "the weirdness of prompting AIs". Although not called out explicitly in this piece, the RACE framework fits his comment on Context. RACE stands for Role, Action, Context, and Execute. Our team at Copado was trained to use it by Ajay Gandhi of Insight Ventures. It makes a lot of sense to me and seems to produce good results. Both Mollick's post and the RACE framework are well worth the time to research. https://lnkd.in/es9QXf7c
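For readers unfamiliar with the pattern, here is a minimal sketch of what a RACE-structured (Role, Action, Context, Execute) prompt might look like. The wording is my own illustration, not official training material from Copado or Insight Ventures.

```python
# Minimal sketch of a RACE-structured prompt (Role, Action, Context, Execute).
# The example wording below is illustrative only.
def race_prompt(role: str, action: str, context: str, execute: str) -> str:
    return "\n\n".join([
        f"Role: {role}",
        f"Action: {action}",
        f"Context: {context}",
        f"Execute: {execute}",
    ])

prompt = race_prompt(
    role="You are a release manager for a Salesforce DevOps team.",
    action="Draft release notes for the changes listed below.",
    context="Audience: non-technical stakeholders. Changes: <paste diff summary>.",
    execute="Return 5 bullet points, plain language, under 80 words total.",
)
print(prompt)
```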
-
Skills-based approaches have the potential to transcend the operational silos that have plagued Adam Smith-style organizational structures. When augmented by the deep data sets leveraged by GenAI, things get really interesting! Thank you Amelia Dunlop, Kristin Starodub Roni Grant Gottesdiener Andrea Wilp for the conversations this research will spark!