Hop Labs

Software Development

Atlanta, GA 710 followers

Production-Grade Machine Learning Strategy and Solutions

About us

Hop Labs is a research and development lab focused on building state-of-the-art, scalable machine-learning solutions. We work with a diverse array of organizations, from early-stage startups to well-established brands. Some clients want to turn a vision into a real product or scalable business, while others are looking for fresh ideas and strategies to leverage cutting-edge technology within their company.

Though we can't always speak publicly about the work we've done, you'll find a number of case studies on our website. A few past projects have included fighting cancer through deep learning, accelerating the drug discovery process, and even making computers smart enough to help you find the best-fitting pair of pants.

We spend a lot of our time helping teams identify the risks in their strategies, figure out what concrete steps they can take to drive down those risks, and find the most efficient path toward their next meaningful milestone, whether that's closing some pre-launch sales, scaling up their business, or bringing an innovative product to market.

Our team is well versed in the world of applied ML and can help with engineering, research, operations, and strategy. We've learned that there's a fine balance between over-building/over-thinking and shipping something without enough critical thinking. On either side of that balance, you might paint yourself into a corner with unnecessarily poor tech decisions, and we're here to help navigate those waters.

Industry
Software Development
Company size
11-50 employees
Headquarters
Atlanta, GA
Type
Privately Held
Founded
2012
Specialties
machine learning, product strategy, MVP development, deep learning, computer vision, ML engineering, ML strategy, ML operations, ML research, AI strategy, and LLM

Updates

  • Hop Labs

    CRAFTING A WINNING AI STRATEGY: Critical Questions for Executives

    In today's rapidly evolving business landscape, AI isn't just a buzzword—it's a game-changer. As an executive, you're likely facing the challenge of creating an AI strategy that drives real value for your organization. Maybe you're at the stage where many different departments are piloting AI projects, and you're wondering how they all add up. Or maybe some pilots have yielded results, and you're wondering if there's any infrastructure you could put in place to accelerate AI's impact. At Hop, we've guided numerous clients through this process, and we've identified two levels of questions that can serve as the pillars of a robust AI strategy. We walk through them in our latest blog post. https://lnkd.in/eaHyYNRz

    Subscribe to our list for occasional AI insights like these to your inbox: https://lnkd.in/g6JDWQXa #aistrategy

    Crafting a Winning AI Strategy: Critical Questions for Executives — Hop Labs

    hoplabs.com

  • Hop Labs

    FOUR TRUTHS ABOUT AI STRATEGIES

    AI strategy is a hot topic these days, and everyone's scrambling to come up with one. But where do you start? At Hop, we have a comprehensive process for developing an AI strategy that we work through with our clients, but before we even get started, it's helpful to consider some foundational truths underlying our approach. Our latest blog post covers some things we believe about AI strategies that are not necessarily widely understood. https://lnkd.in/ej6wyd6a

    Subscribe to our list for biweekly AI insights like these to your inbox: https://lnkd.in/g6JDWQXa #AIstrategy #LLM #generativeAI

    Four Truths About AI Strategies — Hop Labs

    hoplabs.com

  • Hop Labs

    Our latest blog post -- "Unproductive Claims about Generative AI in 2024." Subscribe to our list to receive AI insights like these in your inbox: https://lnkd.in/g6JDWQXa #generativeAI #LLM #hallucinations

    Ankur Kalra

    Bringing research-grade AI into the real world reliably and at scale.

    This is a common refrain in the popular discourse -- LLMs are cool, but not reliable enough for stuff that matters. I find this frustrating, because we've been building reliable systems out of relatively unreliable components for decades. That was the key insight in cloud computing -- you could get mainframe-level reliability if you arranged commodity parts in the right way (see also: checks and balances in most organizations). I think this is because non-tech folks interact directly with LLMs and think of them as a product instead of as a component in a larger system. People who work deeply with LLMs just consider them another architectural component in their toolkit. I touch on it a bit in our most recent article, Unproductive Claims about AI in 2024: https://lnkd.in/ekEebseQ (screengrab is from a Forbes article on LLM risks: https://lnkd.in/eYsffQs4)
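    To make "reliability from unreliable parts" concrete, here is a minimal Python sketch. All names are hypothetical (call_llm stands in for whatever provider SDK you actually use); the point is that the reliability comes from validation plus majority voting around the component, not from the component itself.

        import json
        from collections import Counter

        def call_llm(prompt: str) -> str:
            # Hypothetical single, unreliable LLM call -- a stand-in
            # for whatever provider SDK you actually use.
            raise NotImplementedError

        def validated(prompt: str, required_keys: set) -> dict | None:
            # One attempt: parse the output and reject anything malformed.
            try:
                data = json.loads(call_llm(prompt))
            except Exception:
                return None
            if not isinstance(data, dict):
                return None
            return data if required_keys <= data.keys() else None

        def reliable_extract(prompt: str, required_keys: set, votes: int = 5) -> dict:
            # Treat the LLM like a commodity part: sample several times,
            # validate each output, and keep the majority answer.
            candidates = [c for _ in range(votes) if (c := validated(prompt, required_keys))]
            if not candidates:
                raise RuntimeError("no valid outputs after voting")
            winner, _ = Counter(json.dumps(c, sort_keys=True) for c in candidates).most_common(1)[0]
            return json.loads(winner)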

  • Hop Labs reposted this

    Jim Fan

    NVIDIA Senior Research Manager & Lead of Embodied AI (GEAR Group). Stanford Ph.D. Building Humanoid robot and gaming foundation models. OpenAI's first intern. Sharing insights on the bleeding edge of AI.

    The power of GPT-4o in the palm of our hands. It's a monumental day - for the first time in history, open weights catch up with the latest frontier models. This figure charts the historic run of open models against the closed ones. The former starts humble but rises with a much higher gradient and a much more diverse ecosystem. Research like multimodal LMs and robot foundation models wouldn't have been possible without white-box access to a strong base LM. The Llama-3.1 release includes more than just weights:
    - "Open Source AI is the Path Forward" - a manifesto that clearly lays out Zuckerberg's vision. It explains the commercial strategies, ecosystem positioning, and even geopolitical concerns, basically answering FAQs for OSS LLMs in general: https://lnkd.in/gJt_nYZp
    - A 71-page paper that treats all LLM researchers to a buffet of training details and analysis. They even discuss large-scale cluster failures and remedies - issues that only emerge with 16K H100s! It's literally early Christmas for my team ;)
    GPT-4o and Claude-3.5 are great, but I would easily rank Llama-3.1 the No. 1 highlight of the 2024 LLM landscape.

  • Hop Labs

    HOW DOES THE AGILE MANIFESTO APPLY TO RESEARCH ENGINEERING?

    Applying novel research methods to production systems can be messy, resulting in tools that don't interoperate, duplicated infrastructure, a confusing backlog of tasks, and more. Anybody who's been in software development for a while is familiar with the standard approach for not getting buried by these challenges: Agile methodology. But how does Agile apply to engineering in a research context? https://lnkd.in/giqSPS-Y

    Subscribe to our list for biweekly AI insights to your inbox: https://lnkd.in/g6JDWQXa #agile #agilemanifesto #researchengineering #mlresearch #mlengineering

    How Does the Agile Manifesto Apply to Research Engineering? — Hop Labs

    hoplabs.com

  • Hop Labs

    RLAIF (Reinforcement Learning from AI Feedback) & RLHF (Reinforcement Learning from Human Feedback)

    These are methods used to refine AI behavior. Both involve rewarding desired outputs to shape the AI's responses. RLHF uses human evaluators to rate AI outputs, directly incorporating human judgment. This can capture nuanced preferences but is time-consuming. RLAIF uses another AI system to provide feedback. This allows for faster, more consistent evaluation at scale but can perpetuate existing AI limitations or biases.
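    To make the contrast concrete, here is a minimal Python sketch (all names hypothetical). Both approaches produce the same preference records that a reward model is later trained on; only the judge differs.

        from dataclasses import dataclass

        @dataclass
        class PreferencePair:
            prompt: str
            chosen: str
            rejected: str

        def human_judge(prompt: str, a: str, b: str) -> str:
            # RLHF: a human rater picks the better completion.
            # Stand-in for a real annotation interface.
            return a

        def ai_judge(prompt: str, a: str, b: str) -> str:
            # RLAIF: another model picks -- faster and cheaper at scale,
            # but it can inherit the judge model's own biases.
            return a

        def collect_preferences(samples, judge) -> list[PreferencePair]:
            # Either judge yields (prompt, chosen, rejected) records;
            # a reward model is then trained on these pairs.
            pairs = []
            for prompt, a, b in samples:
                winner = judge(prompt, a, b)
                pairs.append(PreferencePair(prompt, winner, b if winner == a else a))
            return pairs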

  • Hop Labs

    Fine-tuning is great for getting LLMs to generate prose in a desired style and format. With a dataset of hundreds of query-completion pairs, the LLM can be trained to carry that style and essence into future completions. However, fine-tuning is not ideal for changing the factual information an LLM has access to, for two key reasons:
    - LLMs can hallucinate and make factually incorrect statements, even about information from fine-tuning. The added information is not guaranteed to be reproduced accurately.
    - The fine-tuned information gets baked into the model and cannot be easily removed or updated without re-tuning on a new dataset. It lacks flexibility if the world changes.
    Lore suggests that fine-tuning and in-context learning via prompts can achieve similar behavior changes with the same number of examples, but fine-tuned models are cheaper at inference since they don't need extra input tokens on each query.
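    The trade-off is easiest to see side by side. A minimal Python sketch, using an illustrative JSONL layout (real formats vary by provider):

        import json

        examples = [
            {"query": "Summarize: quarterly revenue rose 12%.",
             "completion": "Revenue grew 12% this quarter."},
            # ...hundreds more query-completion pairs
        ]

        # Fine-tuning: bake the style into the weights once; cheap per
        # query afterward, but updating facts means re-tuning.
        with open("train.jsonl", "w") as f:
            for ex in examples:
                f.write(json.dumps({"prompt": ex["query"],
                                    "completion": ex["completion"]}) + "\n")

        # In-context learning: resend examples with every query; easy to
        # update when the world changes, but costs input tokens each time.
        def few_shot_prompt(query: str, shots=examples[:3]) -> str:
            demos = "\n\n".join(f"Q: {s['query']}\nA: {s['completion']}" for s in shots)
            return f"{demos}\n\nQ: {query}\nA:"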

  • Hop Labs

    Control vectors are a tool that has come out of Anthropic's LLM interpretability research. They give practitioners precise information about the concepts an LLM is invoking as it reasons, and allow control over the extent to which those concepts are expressed. These concepts can be abstract notions like truthfulness or secrecy – or they can be as concrete as the Golden Gate Bridge. https://lnkd.in/ehaU7EbF These vectors are weighted collections of neurons inside an LLM, which are activated to varying degrees as the model produces text. By observing the activation values of these collections of neurons, it's possible to read the extent to which different concepts are invoked in any particular passage, and to induce the LLM to invoke particular concepts in its text generation.
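    Here is a generic activation-steering sketch of the read/steer mechanic in PyTorch, not Anthropic's exact method. It uses a toy layer instead of a real model, and the concept direction is random; in practice the direction would be derived from the model's activations on concept-related data.

        import torch
        import torch.nn as nn

        hidden_dim = 16
        layer = nn.Linear(hidden_dim, hidden_dim)  # toy stand-in for one transformer block

        # A "control vector": a fixed direction in activation space tied to a concept.
        direction = torch.randn(hidden_dim)
        direction /= direction.norm()
        strength = 4.0  # how strongly to express the concept

        def steer(module, inputs, output):
            # Reading: project activations onto the direction to measure
            # how strongly the concept is invoked.
            score = (output @ direction).mean().item()
            print(f"concept activation: {score:.3f}")
            # Steering: push activations along the direction so downstream
            # computation sees the concept amplified.
            return output + strength * direction

        handle = layer.register_forward_hook(steer)
        _ = layer(torch.randn(2, hidden_dim))  # hook reads and steers the activations
        handle.remove()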

    Golden Gate Claude

    anthropic.com
