Guardrails AI

Software Development · Menlo Park, California · 3,627 followers

About us

Our mission is to empower humanity to harness the unprecedented capabilities of foundation AI models. We are committed to eliminating the uncertainties inherent in AI interactions, providing goal-oriented, contractually bound solutions. We aim to unlock an unparalleled scale of potential, ensuring the reliable, safe, and beneficial application of AI technology to improve human life.

Industry: Software Development
Company size: 2-10 employees
Headquarters: Menlo Park, California
Type: Privately Held
Founded: 2023

Updates


  • Get ahead of 0.6.0, releasing Thursday next week! Short checklist:
    1. Make sure you’re auth’ed into the hub by running “guardrails configure”.
    2. Replace “prompts” and “instructions” with “messages”.
    3. Update to 0.5.15 if you don’t plan on going all the way to 0.6.0 right away!
    Find our migration guide here - https://lnkd.in/gP3EbmCG (a short code sketch of the “messages” change follows below the link).

    Migrating to 0.6.0-alpha | Your Enterprise AI needs Guardrails
    go.guardrailsai.com
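
    As a rough sketch of step 2 above: this is what the switch from prompt/instructions keyword arguments to a messages list can look like in calling code. The model name is a placeholder, the guard has no validators attached, and an LLM API key is assumed at runtime; treat it as illustrative rather than the canonical migration - the guide above has the full details.

        from guardrails import Guard

        guard = Guard()  # placeholder guard; real guards usually attach validators via .use(...)

        # Before (0.5.x and earlier): separate prompt/instructions arguments
        # result = guard(
        #     model="gpt-4o-mini",
        #     instructions="You are a helpful assistant.",
        #     prompt="Summarize the following text: ...",
        # )

        # After (0.6.0): a single chat-style messages list
        result = guard(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Summarize the following text: ..."},
            ],
        )
        print(result.validated_output)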


  • v0.5.15 is out, and it’s HUGE. The team had a great onsite last week, so we released two weeks’ worth of changes at once. Here’s what you can find in the latest updates.
    Features:
    - Async streaming of strings to buffer and merge.
    - Batch exporter implementation by default (no more blocking telemetry!!)
    - Updated history handling and streaming functionality to work on single-node guardrails server deployments.
    Bug fixes:
    - Fixed metric enablement
    - Fixed stream generation and async history error
    - Enabled messages tag in RAIL
    - Fixed server configuration and first-time login issues
    Improvements:
    - Corrected model name in example for Cohere model
    - Performance updates and cleanup
    - Pinned dependency versions on guardrails-api and guardrails-api-client
    - Updated “guardrails create” to create async guards by default
    Find the full changelog here! https://lnkd.in/gEnds7FM (a minimal async guard sketch follows below the link).

    GitHub - guardrails-ai/guardrails: Adding guardrails to large language models.
    github.com
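
    Since “guardrails create” now templates async guards by default, here is a minimal sketch of calling one. It assumes AsyncGuard mirrors Guard’s calling convention (model plus messages), attaches no validators, and uses a placeholder model name.

        import asyncio

        from guardrails import AsyncGuard

        async def main():
            guard = AsyncGuard()  # validators would normally be attached with .use(...)
            outcome = await guard(
                model="gpt-4o-mini",  # placeholder; requires an LLM API key at runtime
                messages=[{"role": "user", "content": "Name three planets."}],
            )
            print(outcome.validated_output)

        asyncio.run(main())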


  • gRPC and/or WebSocket implementations will become really important to agentic workflows once LLMs get quick enough. Bidirectional communication between agents will become a core assumption for agent-based workflows.
    We’ve started working on a protobuf endpoint for streamed validation. Looking around, we’ve found that most of the main model providers don’t offer gRPC endpoints, so our first cut will work as a parallel to the HTTP endpoints. It’ll be interesting to see how OpenAI and the other providers eventually handle gRPC and inform a broadly adopted protobuf design.
    Example use case: imagine a multi-agent system where a research agent and a writing agent collaborate in real time. gRPC would allow for efficient, typed data exchange, enabling the research agent to stream relevant info to the writer as it’s discovered. Another example: in a customer service scenario, WebSockets could enable an AI agent to provide real-time updates to the user while simultaneously querying multiple knowledge bases and APIs, creating a more responsive experience.
    Bidirectional comms have their challenges too: increased complexity in system design, potential compatibility issues with existing infrastructure, and the need for specialized knowledge among developers. The last is likely why the major providers haven’t invested heavily in this yet. There’s also the question of standardization: as more providers adopt gRPC, ensuring interoperability between different systems could become a significant challenge for the industry.
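
    A toy, in-process illustration of that bidirectional pattern: asyncio queues stand in for the gRPC/WebSocket channel, and two coroutines play the research and writing agents. The agent names and the feedback loop are made up for illustration; none of this is Guardrails API.

        import asyncio

        async def research_agent(outbox: asyncio.Queue, inbox: asyncio.Queue):
            # Streams findings as they are "discovered" and reacts to feedback from the writer.
            for finding in ["fact A", "fact B", "fact C"]:
                await outbox.put(finding)
                feedback = await inbox.get()
                print(f"research_agent got feedback: {feedback}")
            await outbox.put(None)  # signal end of stream

        async def writing_agent(inbox: asyncio.Queue, outbox: asyncio.Queue):
            # Consumes findings as they arrive and pushes feedback upstream at the same time.
            while (finding := await inbox.get()) is not None:
                print(f"writing_agent drafting with: {finding}")
                await outbox.put(f"need more detail on {finding}")

        async def main():
            research_to_writer: asyncio.Queue = asyncio.Queue()
            writer_to_research: asyncio.Queue = asyncio.Queue()
            await asyncio.gather(
                research_agent(research_to_writer, writer_to_research),
                writing_agent(research_to_writer, writer_to_research),
            )

        asyncio.run(main())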


  • Validating LLM responses may seem simple on the surface, but there's a world of complexity lurking beneath. While a basic implementation might seem straightforward, each aspect - from reusability to performance, from streaming to structured data - adds layers of consideration. We've tackled these challenges head-on. Our open-source framework offers a standard for validation, supports all major LLMs, enables real-time fixes, and even provides monitoring. It's not just about checking responses; it's about elevating your entire AI app. https://lnkd.in/d2m4DR8Q
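
    For a concrete sense of what that validation layer looks like, here is a small sketch using the open-source package: a custom validator plus an automatic fix. The StartsWithGreeting validator is hypothetical, and the import paths follow the 0.5-era validator API, so check the docs for your version.

        from guardrails import Guard
        from guardrails.validator_base import (
            FailResult,
            PassResult,
            ValidationResult,
            Validator,
            register_validator,
        )

        @register_validator(name="starts-with-greeting", data_type="string")
        class StartsWithGreeting(Validator):
            """Hypothetical validator: the response must open with a greeting."""

            def validate(self, value, metadata) -> ValidationResult:
                if str(value).lower().startswith(("hi", "hello")):
                    return PassResult()
                return FailResult(
                    error_message="Response does not start with a greeting.",
                    fix_value="Hello! " + str(value),
                )

        guard = Guard().use(StartsWithGreeting, on_fail="fix")

        # Validate text directly; no LLM call is needed for this check.
        outcome = guard.validate("this reply skips the greeting")
        print(outcome.validation_passed, outcome.validated_output)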


  • Release day! Guardrails v0.5.12 is out! This one’s mainly bug fixes and usability improvements. If all goes well, this will be the last 0.5.x update, and we’ll be moving over to 0.6.0 next week. The main usability updates center on presenting helpful warnings more often and on guard initialization patterns. As far as bug fixes go, we’ve taken a deeper look at async and solved a high-priority bug with the server. Our wonderful Discord community was quick to find and point out these bugs. See release notes here - https://lnkd.in/g59xu253

    Release v0.5.12 · guardrails-ai/guardrails
    github.com


  • The vast majority of Guards either log, reask, or implement automatic fixes when validators fail, but a growing number of Guards use custom on_fail actions. Custom on_fail actions are useful when you want to modify some of the output text yourself. We wrote a guide for using on_fail actions, and custom on_fail actions specifically, here 👇🏽 https://lnkd.in/gmr6Qe3N (a small sketch follows below the link).

    Use on_fail Actions | Your Enterprise AI needs Guardrails
    go.guardrailsai.com
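
    Sketch of the custom-on_fail idea from the post above: instead of the built-in log/reask/fix actions, a handler rewrites the offending text itself. The NoExclamations validator is hypothetical, and the handler signature shown (the failing value plus the list of FailResults) is our reading of recent versions - the linked guide is the authoritative reference.

        from guardrails import Guard
        from guardrails.validator_base import FailResult, PassResult, Validator, register_validator

        @register_validator(name="no-exclamations", data_type="string")
        class NoExclamations(Validator):
            """Hypothetical validator: fails when the text contains '!'."""

            def validate(self, value, metadata):
                if "!" in str(value):
                    return FailResult(error_message="Text contains exclamation marks.")
                return PassResult()

        # Custom on_fail action: modify the output text instead of logging or reasking.
        # Signature assumed as (value, fail_results); confirm against the linked guide.
        def calm_down(value, fail_results):
            return str(value).replace("!", ".")

        guard = Guard().use(NoExclamations, on_fail=calm_down)

        outcome = guard.validate("Deploy now!!")
        print(outcome.validated_output)  # the handler's return value should become the validated output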


  • We’re renaming and deprecating a few fields:
    from_pydantic → for_pydantic
    from_string → for_string
    from_rail → for_rail
    from_rail_string → for_rail_string
    Why? “from_” doesn’t describe what these functions do very well; really, the guard is created FOR validating those different structures.
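
    In code the rename looks like this; Person is a made-up model, and we’re assuming for_pydantic keeps from_pydantic’s output_class parameter.

        from pydantic import BaseModel

        from guardrails import Guard

        class Person(BaseModel):
            name: str
            age: int

        # Deprecated spelling:
        # guard = Guard.from_pydantic(output_class=Person)

        # New spelling - the guard is created FOR validating this structure:
        guard = Guard.for_pydantic(output_class=Person)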
