AI's Trust Architects: The Patronus Approach to LLM-Based Applications and Enterprise Adoption with Anand Kannappan & Rebecca Qian

In this insightful episode of Founder Real Talk, my colleague Dan Cahana and I sit down with Anand Kannappan and Rebecca Qian, co-founders of Patronus AI. This dynamic duo is revolutionizing the world of AI evaluation and security, helping enterprises navigate the complex landscape of LLMs confidently. From their days as undergraduates at the University of Chicago to their groundbreaking work at Meta and now Patronus, Anand and Rebecca share their journey and vision for the future of AI adoption. Dive into this conversation to understand why they believe we're still at "Day -1" of the AI revolution and how Patronus is addressing the critical need for trust and reliability in AI systems.

Listen to our Founder Real Talk podcast episode or read the interview below.

This transcript captures the conversation as it happened. We hope you enjoy the authentic voices of our guests.

Glenn:

Dan and I are ecstatic to have the two founders of Patronus AI, Anand Kannappan and Rebecca Qian, join us on Founder Real Talk today. Notable Capital recently led Patronus's $17 million Series A round, and the company has been on an absolute tear. Anand and Rebecca have been up to some really interesting and exciting things at the company, and we're going to dig into this, plus lots of other topics today.

Anand:

Thanks so much, Glenn and Dan for having us. We're super excited to be on this podcast and to really dive into Patronus and everything that we're extremely excited about.

Glenn:

We thought we'd start by talking a little bit about you and your backgrounds. Anand, I'll start with you. Patronus isn't your first foray into the world of AI. Tell us a little bit about how you got into this field, how you met Rebecca, and what led you guys to found Patronus.

Anand:

In general, I've been really excited my entire life about all things related to machine learning, but especially for the last several years, a lot of what I've spent time thinking about is ML interpretability and explainability. In the past, I worked as an early data scientist on the Oculus team, which today is the Meta Reality Labs org, and I developed a lot of the early ML foundations around things like causal inference and advanced experimentation. A lot of what I thought about is how to make models more usable in a large organizational setting. Meta Reality Labs grew from 2,000 to 20,000 people over a few years, and seeing that growth, and seeing how lots of different kinds of partner teams were able to scale what they were doing with machine learning, was an incredible period to be involved with. Rebecca and I overlapped at Meta, but we knew each other from even before that, back in undergrad; I remember we were in machine learning classes together. We spent a lot of time thinking about not only where AI is headed, but also how we believe people around the world should use AI responsibly, and that's a really big part of our mission today. We're super excited to be able to bring that vision forward through Patronus.

Glenn:

Great. How about you, Rebecca? Was there a moment when it made sense to you to start a company with Anand? Tell us a little bit about that.

Rebecca:

Absolutely. So there are two separate questions there. One is the moment we realized we needed to start Patronus, and the other is when I realized I needed to start a company with Anand. The second came way earlier, like Anand mentioned, when we overlapped studying CS at the University of Chicago and were both working on startups. Even back then, my first impression of Anand was that he was running a Mark Cuban-backed quant hedge fund in between machine learning courses, and we were already working on machine learning problems together. So I knew I wanted to work with Anand. At this point we've known each other through the University of Chicago and Meta, and have worked together on Patronus for the past year and a half. As for when we decided to start Patronus, it was when ChatGPT was released.

So Glenn, you may remember back in late 2022, when ChatGPT was released, there was a lot of enterprise interest, but at the same time, a lot of companies were banning it. At that time, we were both at Meta. Anand was at Meta Reality Labs and I was at Meta AI Research, also known as FAIR, where I drove responsible AI and developed this new pillar. Back then, problems like hallucinations were known mostly in a research setting, but of course, now they've become really commonplace terms. We just knew it was going to be a problem, and that's been validated by the discourse we see: the headlines around the Air Canada and Chevy bots, and all the reputational risks that can be incurred from a lack of AI evaluation. So it's very clear that this is now the number one blocker to enterprise AI adoption.

Glenn:

Very cool.

Dan:

Maybe give us kind of a quick background on what Patronus does? What's the short elevator pitch? 

Anand:

At Patronus, we help companies scalably and reliably catch mistakes with LLMs and LLM systems, including things like hallucinations, unexpected behavior, and lots of different kinds of unsafe outputs. As Rebecca said, enterprises have been extremely excited about the prospect of generative AI, especially over the last couple of years, but they're also equally concerned about all kinds of potential failures. At Patronus, we have a unique AI research-first approach to solving that problem: we train evaluation models, develop alignment techniques and new standardized benchmarks, and then apply all that research in our product to ultimately drive real customer value. What enterprises and all of our customers have been excited about is the way we approach the problem space, and the fact that the product itself has yielded very impressive results, catching thousands of hallucinations and other kinds of failures for Fortune 500 companies as well as leading AI companies around the world.

Glenn:

So Rebecca, maybe I'll turn it to you. As you alluded to earlier, AI is not new to you two. In fact, it sounds like the two of you have been focused, even since your university days, on machine learning and all the promise that this field can bring. When you saw ChatGPT come out and Gen AI start to take hold, what were some of the surprises for you, and how has your perspective changed on where this whole area of technology and market can go?

Rebecca:

So I think I could spend the rest of the hour just talking about this alone. The biggest surprise was going from being a researcher working on AI safety and security issues to a practitioner helping some of the world's largest enterprises with these same problems. Some of the work I did at FAIR was training the first large language model with a fairness objective. We were very focused on demographic fairness, making sure AI doesn't discriminate against people based on gender, protected categories, etc., and making sure models aren't producing toxic outputs. Those are very important problems, and that was primarily what the focus of evals was before. But since working with enterprises, I've realized it's a much bigger problem, even bigger than Anand and I initially anticipated. Some examples are brand alignment, like making sure the output doesn't reference competitors or violate certain company policies, which differ from company to company. Another is the focus on capabilities like hallucinations. We put out FinanceBench, the first large-scale benchmark for financial analyst-type queries. It was very surprising to us that a dataset like that had not existed before. We realized models were being assessed on SAT benchmarks and middle school math questions, and then being deployed in the real world on very complex financial analyst questions.

Glenn:

People see the promise and want to put these models to work in applications. And then realize, you know, oh, there are problems, there are potential risks. It's truly remarkable how quickly this market has moved. I'm sure it must be exhilarating for you guys because there's such a big need for what Patronus is doing.

Rebecca:

Yeah, absolutely. We also saw that the discourse around evaluation and the tools that we have access to was lacking, and so we want to provide more of that to companies and prevent these kinds of mistakes.

Dan:

Rebecca, I think folks have seen the impact of a lack of evals in high-profile stories like the Chevy chatbot and others. But I'm not sure many folks in our audience understand how evals work. So maybe walk us through the Patronus product. How are folks using it, and what sort of errors and mistakes can you help reduce?

Rebecca:

Broadly speaking, the Patronus platform focuses on two stages of evaluation: pre-deployment and post-deployment. Really, you should start thinking about evals, and working with us, the moment you want to develop an AI application. It could be an assistant, it could be a chatbot, it could be internal. When you're doing pre-deployment testing, we have what we call evaluation runs. You can run a batch of evaluations, which basically tests your AI application against a range of evaluation criteria. It could be hallucinations, like we mentioned. It could be toxicity-based. It could be, for example, testing different aspects of a RAG system, like context relevance, or making sure you're retrieving the right information. And RAG testing is really, really complex. Our RAG evaluators perform 20% better than industry and academic alternatives, so it's really difficult to get right. We're also seeing that AI systems are becoming more complex, with many different touch points.

The second part is: okay, you've deployed this thing into production. Now how do I make sure it's not giving embarrassing responses to users? And I'm saying embarrassing because it's embarrassing when your Chevy bot says it will sell a car for $1 and that Ford is better, right? So we catch those kinds of mistakes in real time, and that's why we sometimes tell people our goal is to catch failures and mistakes at scale. We have a real-time monitoring API, and we are unique in being API-first. It's important to have a commercially available evaluation API, because there's no other way to catch these kinds of mistakes at scale.
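To make the real-time monitoring idea concrete, here is a minimal sketch of gating a chatbot response on an evaluation call before it reaches the user. The endpoint, request fields, criterion names, and response schema below are illustrative placeholders for a generic evaluation API, not Patronus's actual interface.

```python
import requests

EVAL_URL = "https://api.example-evals.com/v1/evaluate"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}      # placeholder credentials

def response_passes_evals(user_query: str, candidate_output: str) -> bool:
    """Score a candidate LLM response against a set of criteria before release."""
    resp = requests.post(
        EVAL_URL,
        headers=HEADERS,
        json={
            # Criterion names are illustrative; real services define their own.
            "criteria": ["hallucination", "toxicity", "brand-alignment"],
            "input": user_query,
            "output": candidate_output,
        },
        timeout=5,  # keep the gate fast enough for real-time use
    )
    resp.raise_for_status()
    # Assumed response schema: one pass/fail verdict per criterion.
    return all(result["pass"] for result in resp.json()["results"])

candidate = "Sure, I'll sell you this car for $1. Honestly, Ford is better anyway."
if not response_passes_evals("Can I buy a car for $1?", candidate):
    candidate = "Sorry, I can't help with that request."  # safe fallback
print(candidate)
```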

Glenn:

It sounds like there are different ways to deploy Patronus. Anand, maybe you could talk a little bit about some of the customers, how they are using the product, and what are some of the use cases that you see people wanting to deploy you into?

Anand:

Yeah, I'd say that as we've talked to a lot of developers over the last several months, one of the biggest questions we get is: okay, we understand that this is a really big problem, we're experiencing it, and it's really painful, but why exactly is measuring LLM performance so difficult, and what makes it so different from traditional, predictive ML? Typically what we say is that with predictive ML, you tend to get discrete results, whereas with LLMs, because these models are generative by nature, there is such a wide space of behavior that it's difficult not just to achieve testing coverage, but even to define what testing coverage means. What we've seen happen is that a lot of companies have tried to solve the problem by spending a lot of time and money on manual evaluation methods: internal QA teams, external consultants, and even expensive engineering time allocated to manually creating test cases and manually grading outputs in spreadsheets. Companies have continued to do that because they lack confidence in automated approaches that just don't work well and are incredibly inconsistent and unreliable.

The biggest thing that we do at Patronus is bring an extremely high-quality, extremely reliable solution that is able to catch these mistakes in a very scalable way, in both an offline setting and an online setting. Today our customers span various verticals, including automotive, education, healthcare, and financial services, across various kinds of use cases. Our focus has always been to make sure that customers feel a lot more confident, and even compliant, with the things they ultimately care about, both before and after they roll out a product.

In terms of our market focus, we specifically focus on companies in slightly more regulated industries, because these are the kinds of companies that are trying to use AI in mission-critical scenarios, where the margin of error has to be extremely low. Those are the kinds of things enterprises care about when they get started with us. But of course, from a product perspective, they care about a lot of different things, not just quality, like I mentioned earlier, but also flexibility, the robustness of the solution itself, and its ability to integrate pretty much anywhere across the stack. And as Rebecca mentioned, the fact that we have an API-first solution is really worth noting, because when you look up "evaluation API" on Google, you won't find much. You might find some open source packages, but you won't find a company that offers something that is incredibly accessible in the form of an API.

There are three big reasons why an API-first approach is really important here. One is that you can plug it into pretty much any part of your code base. It could be in CI/CD systems, or it could be in your product stack because you want to do things like real-time and live use cases, and that's easy to do because you're not tied to a particular framework. The second reason is that you essentially have a unified interface, or unified schema, which means it's really easy to implement and also really easy to pick up and learn across your team. The third big reason is a powerful feature we have called active learning, which allows you to improve the quality of the evaluators over time: the more feedback you give it, the better it becomes for you and the more aligned it becomes with your very specific use cases. So those are some of the ways enterprises have been using us, and what they've been really excited about in our product.
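As a rough illustration of the active learning loop Anand describes, the sketch below sends a human reviewer's verdict back to a generic evaluation service so the evaluator can align to a team's specific use case over time. The /feedback endpoint, field names, and IDs are hypothetical, not Patronus's documented API.

```python
import requests

FEEDBACK_URL = "https://api.example-evals.com/v1/feedback"  # hypothetical endpoint
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}          # placeholder credentials

def send_feedback(evaluation_id: str, reviewer_agrees: bool, note: str) -> None:
    """Report whether a human reviewer agreed with an evaluator's verdict."""
    resp = requests.post(
        FEEDBACK_URL,
        headers=HEADERS,
        json={
            "evaluation_id": evaluation_id,  # ID assumed to come from an earlier eval call
            "correct": reviewer_agrees,      # was the evaluator's verdict right?
            "note": note,                    # free-text rationale used for alignment
        },
        timeout=10,
    )
    resp.raise_for_status()

# Example: a reviewer overturns a false positive from a hallucination check.
send_feedback(
    "eval_abc123",
    reviewer_agrees=False,
    note="The figure matches the cited 10-K filing; this was not a hallucination.",
)
```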

Dan:

That's awesome. Hearing you talk through that just reflects how meaty this problem is. When you guys came together to first found the company, enterprise adoption of LLMs was still in its infancy. I'm curious, Rebecca, especially given your research background, what gave you the conviction that this was going to be a lasting problem and the right place to focus the next 10 or so years of your career?

Rebecca:

Like you said, it was in its infancy, and I would say that the Gen AI field as a whole, and the range of applications AI could be deployed into that have not been unlocked yet, is still very much in the early stages. Like Anand and I always say: it's Day -1 out here. I'd say that's very much still the case today in 2024. As for wanting to focus the next decade of our careers on this problem, it really comes from the beginning, back in Chicago: we knew that AI was going to be transformative, and we wanted to dedicate our careers to it. From there, we asked what problems were blocking enterprise AI adoption, and this was clearly the number one blocker, because trust is really at the center of everything that we build.

Glenn:

Negative Day One is kind of an interesting way to think about this market, and if you're right, if it's really Day -1, then it's just incredible, because there's already so much activity going on. Anand, we were curious to get your take. You're talking to tons of Fortune 500 and Global 2000 companies about Patronus. We know because we've introduced you to a lot of them, and it seems like every single company we've introduced you to is excited to talk to you, which tells us that they're somewhere between experimenting with LLM-based applications and really thinking about putting them into production. You have that catbird seat, so we'd be curious to get your perspective. Do you think that's really where we are? How far away are we from seeing lots and lots of meaningful applications in production from large companies? What do you think the blockers are, and what does the future look like when those blockers are alleviated?

Anand:

What's incredibly surprising and unique about this market and this point in time is that the largest companies in the world are moving faster than ever. I remember around when we were getting started a year ago, I spoke to the CIO of a large financial services company, and he told me that they hadn't moved faster on anything else in their entire history, since the 1800s. Seeing that kind of momentum was incredibly exciting. A question that we sometimes get is why we're currently in New York. The biggest reason is that we noticed this really amazing market opportunity, and we wanted to be as close to the market as possible. Given that we're focused on companies in slightly more regulated industries, especially traditional buyers, we wanted to be as close to them geographically as possible. That's one of the ways we've been able to continue to develop our product over time. There's a lot we've learned about how enterprises have tried to approach the problem space and what some of the blockers around AI adoption are. A few things come up really often. One, of course, is the hallucination problem. What's interesting is that a lot of people might agree that the definition of AI security has been changing quite a bit: it's no longer just about third-party actors and adversarial threats, but also about accuracy and reliability, and that's something that has come up time and time again. The hallucination detection evaluators we've developed significantly outperform all alternatives, even using GPT-4 as a judge, by over 20%, and we're really excited to continue to innovate on those kinds of capabilities and make them more accessible over time.

In addition to that, we're seeing large enterprises care about very enterprise-specific capabilities when it comes to evaluation. That broadly includes things related to brand alignment: the tone of voice of your chatbot, style, conciseness, bias, company policies, regulatory policies. Those are the kinds of things larger companies tend to care about a lot more, in addition to, of course, the reputational risk they take on if the AI products they roll out produce unsafe outputs. Another thing they're finding challenging is that in a market like this, especially in a new category, there is a lot of noise, and it's really confusing to figure out what to do, and that goes across the stack. If we start with the LLM side of things, a lot of enterprises are confused about how to even make decisions around their LLM architecture stack, especially in a world where there are over a million models on Hugging Face and all of the commercial companies like OpenAI and Anthropic are updating models every two weeks. It's just become an increasingly complex environment to navigate. So what companies have been asking for is an unbiased, independent company, almost a trusted expert third party, that can help them navigate these kinds of challenges in an extremely fast market. There's an article about Patronus that came out around the end of last year calling us the "Moody's of AI," that unbiased and independent company. So not only from a product perspective but even from a company perspective, those are the kinds of ways we've been able to partner with large companies to help them solve some of the most challenging problems with LLMs in the most powerful ways.

Dan:

On the flip side, we're seeing LLMs both power some of the largest enterprises in the world and also enable this massive wave of indie development. I'm curious, as you think about the evolution of Patronus, how do you think about supporting some of these smaller customers or individual developers who are just getting started with LLMs but looking to achieve the same kind of reliability that enterprises are concerned about?

Anand:

I'd say Patronus is extremely excited to continue to make our product a lot more accessible over time. What that means is we want to make the time-to-value of our product as low as possible. Within that, we've developed a lot of really exciting AI-driven features that ultimately make LLM evaluation and security as easy as possible. We talked a little earlier about why an API-first approach is important, but in addition to that, we want to continue to make the offerings we bring to customers as diverse as possible. One thing we're planning to launch in the coming weeks and months is what we call evaluator families, which essentially refers to different tiers of the evaluation models and techniques offered by Patronus. In the same way that you have LLMs of different sizes, like Mistral Small and Mistral Large, or Cohere Small and Cohere Large, we'll have those kinds of tiers through Patronus. If you're using a small evaluation model, that approach might work really well for real-time use cases, because it tends to be a lot faster and a lot cheaper, with some trade-off in quality. On the flip side, if you use a large evaluation model from Patronus, it might be a little more expensive and a little slower, but it will certainly be better at catching mistakes. Those are the scenarios where we expect people to use large evaluation models, especially in relatively more offline use cases. Having different kinds of offerings is one way we want to continue to support indie developers and make sure folks get value as fast as possible through Patronus.
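One way to picture the evaluator-family trade-off is as a simple routing decision between tiers. The tier names, latency, and cost figures below are made-up illustrations of the small-versus-large trade-off, not actual Patronus offerings or prices.

```python
# Illustrative tiers: a fast/cheap evaluator for real-time gating, and a
# slower, higher-quality one for offline regression runs. Numbers invented.
EVALUATOR_TIERS = {
    "eval-small": {"relative_cost": 1.0, "typical_latency_ms": 100, "quality": "good"},
    "eval-large": {"relative_cost": 8.0, "typical_latency_ms": 1000, "quality": "best"},
}

def pick_evaluator(real_time: bool) -> str:
    """Route real-time traffic to the small tier; batch jobs to the large tier."""
    return "eval-small" if real_time else "eval-large"

print(pick_evaluator(real_time=True))   # production monitoring -> "eval-small"
print(pick_evaluator(real_time=False))  # nightly offline evals  -> "eval-large"
```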

Glenn:

Like you said, the fact that you offer the service via API makes it quite accessible even for the indie developer, which is exciting. I'm sure you guys feel like the whole market is your oyster right now. Let me ask each of you; Rebecca, I'll start with you. Things are moving really fast, and Anand just talked about how models are changing almost by the week, which is very confusing and challenging for customers and gives you guys a role to play. But there's still skepticism out there about the role that Gen AI can play and how useful it'll really be as a technology. So if we look forward a year or two, what are some of the signs that you'll be looking for to say, yeah, this is really working, enterprises are achieving the kind of results that they want to achieve with all this investment?

Rebecca:

I think right now, like you said, Glenn, everyone sees a lot of potential, but we're still in the early stages of large-scale deployment. And when we think about large-scale societal deployment, we're talking about every single vertical, some high-risk, some low-risk: healthcare, medical, legal, finance, insurance, etc. The possibilities are really endless, both consumer and enterprise. I definitely think we're not there yet. I mentioned pre-deployment and post-deployment evaluation earlier. What we would expect in the coming years is to see more and more companies go from pre-deployment to post-deployment, and that's where real-time monitoring and having access to an evaluation API is really critical for testing and preventing these kinds of failures at scale. In terms of signs that we're reaching that stage, we would expect to see companies facing a different set of issues. The discourse might shift from "how do I run these evaluations, build confidence, and show internally that we can mitigate these risks?" to "how can I do this at scale? How can I prevent these things from going out to users? How can I make this fast, low-latency, and cost-effective?" A lot of the focus has been on accuracy and preventing failures, but we're going to see more people ask about cost, and in some cases they might even be willing to trade off cost and latency against certain levels of performance. So we're really excited, and of course very prepared, for that shift to happen.

Glenn:

Anand, how about you? Are there any clues that you derive from your customers on how they plan to deploy this type of technology? I'm curious whether the champion you're working with is typically a technical person or more of a business person, and what the interplay is between technical and business-oriented stakeholders at your customers.

Anand:

Typically, the decision-maker is a technical leader of some kind. It could be a CIO, CTO, or some kind of engineering leader at the company. But what's unique about the current market is that, across enterprises, the teams and organizations working on generative AI products today are sort of the shiny objects, so everyone wants a piece of that. Everyone wants to be involved in shaping those new experiences. So we also have customers who are product managers, designers, BD, compliance, and marketing, especially those using our web platform in particular, because you can run a lot of the same workflows without writing any code. We're seeing customers set up custom evaluators on Patronus just by writing a sentence or two in English, or any language for that matter. They define those policies exactly the way they want, then pass them off to developers to implement through the API. So we're seeing those new kinds of workflows really begin. To your earlier question about the signs we'll continue to see: what we tell enterprise leaders every day is that you have to be as metrics-driven as possible from the moment you start, from the moment you think about what a product experience could even look like in the long term. The reason is that when you're dealing with models that are generative by nature, and given that it's a more difficult problem, it's easy to forget how important that is. There are ways to measure this over time and understand whether you're really moving the needle for your product and your team.
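The hand-off Anand describes, a non-engineer writing a policy in plain English and a developer wiring it into the API, could look something like the sketch below. The "custom" criterion format and the endpoint are hypothetical, not Patronus's documented syntax.

```python
import requests

# A compliance or brand manager writes the policy in plain English...
POLICY = "The assistant must never recommend competitor products or quote competitor prices."

# ...and a developer passes it to a (hypothetical) evaluation endpoint as a custom criterion.
resp = requests.post(
    "https://api.example-evals.com/v1/evaluate",       # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder credentials
    json={
        "criteria": [{"type": "custom", "definition": POLICY}],
        "input": "Is Ford cheaper than you?",
        "output": "Honestly, a Ford F-150 is a better deal than anything we sell.",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["results"])  # assumed schema: a pass/fail verdict per criterion
```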

Dan:

Rebecca, this wouldn’t be an AI-focused podcast in 2024 if we didn't ask you for some predictions. So we'd love to hear: What do you think the next big, mind-blowing breakthrough will be from an AI Lab?

Rebecca:

Yeah, absolutely. I think there are near-term and longer-term predictions. The next big breakthrough would probably be a non-transformer-based architecture that is deployed at scale. We've seen some evidence of that with state space models, and there have been talks of the next dominant architecture for some time. We've done so many optimizations on transformers, and that's pretty much what all the big AI labs and LLM providers are pouring money into right now, so it would be exciting to see an alternative architecture compete. There's definitely promise in some of the SSMs and other architectures that people at Stanford and other labs are exploring. Also near term: the successful AI applications we've seen so far have been primarily text-based, but multimodal (image, video, speech, etc.) has been getting more attention, and will get even more in the future.

I was a robotics researcher in the past and worked on embodied agents and developing NLU modules for robot assistants. So looking down the 10-20 year horizon, I think robotics would be the next big frontier for AI. And, of course, you need to have proper datasets; you need to test these things. What really motivates Anand and me, day in and day out, is this concept of scalable oversight. We're going to enter a world where very soon AI is going to, well, it's already outperforming humans in many day-to-day tasks. These are superintelligent entities. How do you continue to supervise entities that are more intelligent than you? That's the concept of scalable oversight: humans acting as overseers in a world where AI evaluates AI.

Glenn:

Okay, Dan, the future has been depicted for us. We're going to have AI robots, and we're going to need scalable oversight. So thank you, Anand and Rebecca, for thinking about this problem before it's too late. We're going to put you guys on the hot seat with our speed round. Anand, I'll start with you. Just say the first thing that comes to mind: what's the biggest misconception that you had when you started Patronus that you no longer have? Something you've learned about, maybe, running a company or starting a company?

Anand:

I'd say that what's unique about the market that we're in, and the timing, is that enterprises don't typically work with startups. They worry about a startup's economic stability and all kinds of things that might happen to the company. But given what's happening today, they have to work with startups, because startups, by default, will just move faster. Along with that, there just isn't enough time, or maybe even the focus, for an enterprise to grow AI expertise internally, so they want to make sure they pick the right long-term trusted partner to work with. That's exactly where we come in. I'd say that was probably the biggest thing that I deeply underestimated.

Glenn:

Great. Well, it certainly seems like you guys have hit the nail on the head with respect to timing. So Rebecca, how about you? What advice would you give to founders who are thinking about starting something in the AI field right now?

Rebecca:

I would tell founders to keep an open mind, to stay flexible, and basically not to have any priors, because what we're seeing, especially now, is that the market is shifting and moving so quickly. I've never seen so many foundation models released as in the past six months. There was a period where it felt like new models were being released week to week, continuing to push the frontier. In research, we always talk about the state of the art, and what is state of the art today is not what it was a year ago. You constantly have to be innovating. Anand and I and our whole team love that, because we love being in an environment where the standards are constantly being raised, and we're always raising the standard for ourselves. It really keeps everyone on their toes. So I would tell founders to be open-minded, flexible, and to keep up.

Glenn:

Yeah, be ready to stay on your toes. Okay, last one, and I'll ask it to both of you. Rebecca, I'll start with you. Do you have a frequent LLM use case, in your personal life and/or business, that you enjoy?

Rebecca:

Oh, well, I can't share the LLM use cases in our business, but we use it a lot in aspects of developing our platform. In my personal life, a fun LLM use case for me is creative applications. This started back when I was a researcher at the lab, because I think language models can really help humans generate ideas. So I'll say: give me a name for something, or give me some ideas for a birthday gift or something to cook, or just give me lists of different variations, names, recipes, and things like that.

Glenn:

Great. How about you, Anand? Anything special you've used LLMs for?

Anand:

Yeah, I promise you, we did not coordinate this, but I was about to say the exact same thing about idea generation. One thing that has been especially helpful for me is generating ideas around code, which has been incredible since the beginning. I was one of the early beta users of GitHub Copilot in 2021, back when it was offered for free. And not just inside the IDE; I was even using ChatGPT to generate code recommendations. That was a really big accelerant for everything I was building. A second one that has been more helpful for me recently is summaries of books and podcasts, maybe even like this one. That has been incredible, because sometimes I just want key takeaways really quickly, and I want the most interesting or most surprising insights as fast as possible. It's been a great way for me to stay on my toes in terms of everything I should be thinking about, and the ways I'm learning and growing as well.

Glenn:

That's fantastic. Well, Anand and Rebecca, speaking on behalf of Dan, everybody at Notable Capital, and myself, we are so, so excited to be working with you and the entire Patronus team. We really appreciate you guys coming on today and sharing your thoughts and perspectives, and it gets us even more excited about the future. We look forward to big things from Patronus, as we know you do, and we're really excited to see where this goes. So thanks so much. 

Anand:

Thanks so much for having us. This is incredible and it's been amazing partnering with you all, and we're super, super excited for 2024 and beyond.
