TruthSuite (YC W23)

Software Development

San Francisco, CA · 23,526 followers

AI Fact Checking for Lawyers

About us

TruthSuite provides a comprehensive platform to supercharge due diligence and research for lawyers and other professionals for whom the truth matters. Lawyers use TruthSuite to establish consistent versions of events and to identify misinformation in testimony.

Industry
Software Development
Company size
2-10 employees
Headquarters
San Francisco, CA
Type
Privately Held
Founded
2022
Specialties
AI, LLMs, Code Synthesis, machine learning, artificial intelligence, natural language processing, SaaS, No Code, Devtools, Modernization, lawyers, and legaltech



Updates

  • TruthSuite (YC W23) reposted this

    🫵 Matthew Mirman

    CEO TruthSuite (YC) | PhD AI ETH

    I'm fighting AI spam. 😎 🤡 🤠 You're not fooling anybody with that AI post that has:
    - Excessive use of adjectives
    - Redundant phrases like "Y isn't just for X."
    - "This was a bold/ambitious attempt to X"
    We can absolutely tell. I'm working on a browser extension to help you report and remove AI spam from the internet. At the moment it's literally just a stress-ball, but more to come!
    ↘ Forever Open Source: https://lnkd.in/dQxqCJUz
    #antiai #machinelearning #llm #chatgpt #opensource #ai #freetools #anthropic #linguistics #nonprofit #writing

  • TruthSuite (YC W23) reposted this

    🫵 Matthew Mirman

    CEO TruthSuite (YC) | PhD AI ETH

    AI systems often need to make decisions with incomplete information, learning as they go. This is called the 𝗰𝗼𝗻𝘁𝗲𝘅𝘁𝘂𝗮𝗹 𝗯𝗮𝗻𝗱𝗶𝘁 𝗽𝗿𝗼𝗯𝗹𝗲𝗺. For example, an algorithm choosing ads or treatments must improve its decisions based on past results while still experimenting with new options.

    The goal in such systems is to minimize 𝗿𝗲𝗴𝗿𝗲𝘁: the cumulative difference between the results of the algorithm’s choices and the best possible choices it could have made if it knew everything from the start.

    In a new NeurIPS spotlight paper, 𝘏𝘰𝘸 𝘋𝘰𝘦𝘴 𝘝𝘢𝘳𝘪𝘢𝘯𝘤𝘦 𝘚𝘩𝘢𝘱𝘦 𝘵𝘩𝘦 𝘙𝘦𝘨𝘳𝘦𝘵 𝘪𝘯 𝘊𝘰𝘯𝘵𝘦𝘹𝘵𝘶𝘢𝘭 𝘉𝘢𝘯𝘥𝘪𝘵𝘴?, researchers from MIT and UVA show that 𝗿𝗲𝘄𝗮𝗿𝗱 𝘃𝗮𝗿𝗶𝗮𝗻𝗰𝗲 (how unpredictable the outcomes of choices are) dramatically changes how effectively these systems learn. The authors introduce a concept called the 𝗲𝗹𝘂𝗱𝗲𝗿 𝗱𝗶𝗺𝗲𝗻𝘀𝗶𝗼𝗻 (not a typo), which measures how hard it is for the algorithm to resolve uncertainty about its options.

    The results are clear: systems that adapt to variance and complexity learn faster and make better decisions. These insights apply directly to improving personalized recommendations, healthcare algorithms, and autonomous systems in uncertain environments.

    Paper: https://lnkd.in/eWsmYJFK
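    To make "regret" concrete, here is a small illustrative sketch of my own (not the paper's algorithm): an epsilon-greedy linear contextual bandit, tracking the cumulative gap between the arm it picked and the best arm it could have picked. The reward weights, noise level, and epsilon are made-up values for the demo; the noise_std knob is where the paper's reward variance enters.

    ```python
    # Illustrative epsilon-greedy contextual bandit with cumulative regret.
    import numpy as np

    rng = np.random.default_rng(0)
    n_rounds, n_arms, dim = 2000, 5, 8
    theta = rng.normal(size=(n_arms, dim))   # hidden per-arm reward weights (unknown to the learner)
    noise_std = 0.5                          # reward variance: the quantity the paper studies

    # Running least-squares estimate of each arm's weights
    A = [np.eye(dim) for _ in range(n_arms)]
    b = [np.zeros(dim) for _ in range(n_arms)]
    regret, epsilon = 0.0, 0.1

    for t in range(n_rounds):
        x = rng.normal(size=dim)                          # observed context
        means = theta @ x                                 # true expected reward of each arm
        estimates = np.array([np.linalg.solve(A[a], b[a]) @ x for a in range(n_arms)])
        arm = rng.integers(n_arms) if rng.random() < epsilon else int(np.argmax(estimates))
        reward = means[arm] + noise_std * rng.normal()
        # Regret: gap between the best possible choice and the choice actually made
        regret += means.max() - means[arm]
        A[arm] += np.outer(x, x)
        b[arm] += reward * x

    print(f"cumulative regret after {n_rounds} rounds: {regret:.1f}")
    ```

    Raising noise_std makes the estimates noisier and the regret grow faster, which is the variance effect the paper quantifies.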

  • TruthSuite (YC W23) reposted this

    🫵 Matthew Mirman

    CEO TruthSuite (YC) | PhD AI ETH

    I am so grateful for PyTorch. Last year I sat down with Daniel Lenton who's something of an expert on ML frameworks. Here's a brief history:

    In the mid-2010s, the machine learning community was fragmented across various frameworks:
    * 2015: Caffe, a C++ library, was widely used for deep learning tasks. [1]
    * 2016: TensorFlow gained popularity, offering a Python API but with a static computation graph, which posed certain limitations. [2]
    * 2017: PyTorch emerged, introducing dynamic computation graphs that provided greater flexibility and ease of use, quickly becoming a favorite among researchers. [3]

    During this period, Daniel interned at Amazon, working on their drone program, where MXNet was the framework of choice. [4] Concurrently, colleagues returning from internships at organizations like DeepMind and OpenAI brought back experience with frameworks such as Chainer and JAX. [5] This diversity in tools led to significant collaboration challenges within research labs, as sharing code across different frameworks was cumbersome and inefficient.

    The advent of PyTorch addressed these issues by providing a standardized, researcher-friendly platform that bridged the gap between experimentation and production. Its intuitive design and dynamic nature facilitated seamless collaboration, enabling researchers to share and build upon each other's work more effectively.

    Here's why PyTorch has been transformative (a tiny illustration of the dynamic-graph point follows below):
    * Standardization enables collaboration: By unifying the community around a common framework, PyTorch has streamlined workflows and enhanced cooperative efforts.
    * Research meets deployment: Its flexibility makes PyTorch suitable for both cutting-edge research and scalable production systems, easing the transition from prototype to product.
    * Accessible innovation: Lowering the barrier to entry for new researchers, PyTorch has fostered a more inclusive and dynamic AI research environment.

    As Daniel puts it, technical tools don't just shape code; they shape culture. PyTorch has not only simplified AI research but also made it more collaborative and cohesive.

    Watch the full interview here: https://lnkd.in/eNYrxnnC
    [1] https://lnkd.in/e42SW_4a
    [2] https://lnkd.in/e3CDFAUW
    [3] https://lnkd.in/eyRJZi8Y
    [4] https://lnkd.in/eV5GUTEq
    [5] https://lnkd.in/e7AHWkQi
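    As promised above, a tiny sketch of the "dynamic computation graph" point (my illustration, not code from the interview): in PyTorch the autograd graph is recorded as ordinary Python executes, so data-dependent control flow inside a forward pass just works.

    ```python
    # Minimal sketch of PyTorch's dynamic (eager) graphs: the graph is built as
    # this code runs, so the data-dependent loop needs no special graph API.
    import torch

    def forward(x, w):
        h = x @ w
        for _ in range(5):
            if h.norm() > 10:      # branch depends on the data itself
                break
            h = torch.tanh(h @ w)
        return h.sum()

    x = torch.randn(4, 4)
    w = torch.randn(4, 4, requires_grad=True)
    loss = forward(x, w)
    loss.backward()                # gradients flow through however many steps actually ran
    print(w.grad.shape)            # torch.Size([4, 4])
    ```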

  • TruthSuite (YC W23) reposted this

    🫵 Matthew Mirman

    CEO TruthSuite (YC) | PhD AI ETH

    Impressed by fellow ETH Zürich researchers for their NeurIPS spotlight, Learning diffusion at lightspeed, which introduces JKOnet*, a very neat tool for understanding systems governed by diffusion: phenomena ranging from heat flow to cell behavior.

    Diffusion processes appear everywhere, from climate models to disease spread. Their method makes analyzing them faster, cheaper, and more accurate. For example, JKOnet* was used to predict how stem cells evolve with unmatched precision, something critical for medicine.

    Traditional methods relied on complex "bilevel optimization," which requires solving multiple hard problems at once. Instead, JKOnet* simplifies this by focusing on the key ingredients: the energy driving the system (potential), how parts interact (interaction), and randomness (internal energy). By doing so, it sidesteps computational bottlenecks, offering both speed and clarity.

    Paper: https://lnkd.in/egb7PWvW
    Code: https://lnkd.in/eiTps2TF
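    For readers who want the underlying math, here is the standard textbook form of the objects the post alludes to (my addition, not notation taken from the paper): the energy driving the diffusion splits into exactly the three ingredients named above, and each discrete step of the flow solves a JKO minimization.

    ```latex
    % Standard Wasserstein gradient-flow / JKO formulation (textbook form).
    % The energy J splits into potential, interaction, and internal-energy terms,
    % and each discrete step minimizes J plus a Wasserstein-distance penalty.
    \[
    J(\rho) = \underbrace{\int V(x)\,\mathrm{d}\rho(x)}_{\text{potential}}
            + \underbrace{\tfrac{1}{2}\iint U(x-y)\,\mathrm{d}\rho(x)\,\mathrm{d}\rho(y)}_{\text{interaction}}
            + \underbrace{\int \rho(x)\log\rho(x)\,\mathrm{d}x}_{\text{internal energy}}
    \]
    \[
    \rho_{t+1} = \operatorname*{arg\,min}_{\rho}\; J(\rho) + \frac{1}{2\tau}\, W_2^2(\rho, \rho_t)
    \]
    ```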

  • TruthSuite (YC W23) reposted this

    🫵 Matthew Mirman

    CEO TruthSuite (YC) | PhD AI ETH

    What's a database, really? A few months ago, I had a chat with Bob van Luijt about his vector database company Weaviate.

    When most people hear "database," they think of one of two things:
    1. Programmers: SQL/MongoDB. Great for storing and finding data you already understand.
    2. Everyone else: all their data, dumped somewhere.

    But what if your data isn't simple? Then you can use vector databases (VDBs): tools that organize information based on meaning. Instead of looking up "that file from Tuesday," you can say, "find something like this" and it works. (A toy sketch of the mechanism follows below.)

    *Why care about VDBs?*
    Databases as you know them (SQL, MongoDB, etc.) are built on a 50+ year-old assumption: you already know what you're looking for. They're great for structured queries like "show me all users in New York."

    *But VDBs kill this paradigm.* *Why?* Because they organize data by meaning, not structure. Instead of forcing you to know what you're looking for, VDBs help you explore concepts and similarities. For example: "find documents about renewable energy" rather than "find doc_1234.pdf."

    VDBs thrive on unstructured data: images, text, videos. Things traditional databases choke on. For AI, search engines, or recommendations, this is essential.

    Watch the full episode here: https://lnkd.in/eC6CnVfb
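    Here is the promised toy sketch of the mechanism (my illustration, not Weaviate's API): store one embedding vector per document, then answer "find something like this" by nearest-neighbour search over those vectors. The embed function below is a random stand-in just so the snippet runs; a real system would plug in a trained text-embedding model.

    ```python
    # Toy vector search: embeddings in, nearest neighbours by cosine similarity out.
    import zlib
    import numpy as np

    docs = [
        "quarterly report on solar panel efficiency",
        "user table schema migration notes",
        "wind farm output projections for 2025",
    ]

    def embed(text: str) -> np.ndarray:
        # Stand-in embedder (random projection keyed on the text). It does NOT
        # capture meaning; swap in a real embedding model for semantic search.
        rng = np.random.default_rng(zlib.crc32(text.encode()))
        v = rng.normal(size=64)
        return v / np.linalg.norm(v)

    index = np.stack([embed(d) for d in docs])        # the "vector database"

    def search(query: str, k: int = 2):
        q = embed(query)
        scores = index @ q                            # cosine similarity (unit vectors)
        return [(docs[i], float(scores[i])) for i in np.argsort(-scores)[:k]]

    print(search("documents about renewable energy"))
    ```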

  • TruthSuite (YC W23) reposted this

    🫵 Matthew Mirman

    CEO TruthSuite (YC) | PhD AI ETH

    The Linux kernel is 28 million lines of code. That's 28 million opportunities for attack, in possibly one of the most widely depended-on projects in the world.

    Humans alone can't possibly defend this. Defenders need to find every vulnerability; an attacker only needs to find one. The future must automate defending it.

    A few months ago I spoke with Prof. Justin Cappos at NYU Tandon School of Engineering about automating code quality. His answers surprised me.

    I've personally always hated linters. I'm very scientific. Linters seemed arbitrary: rules made up by pedants. Justin's lab had an idea: maybe we could measure the importance of these rules scientifically. We can actually figure out if a line of code is statistically confusing.

    Why is confusing code a bigger problem than not enough whitespace? Simple: it's harder to debug what you can't understand. Using these techniques they found 3.6 million confusing lines of code in open repositories. Hopefully these techniques lead to a safer kernel.

    Try them for yourself and check out our latest accelerometer podcast! https://lnkd.in/exnQuN7Q
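    To give a feel for what a "statistically confusing line" means in practice, here is a made-up example of my own (not one from the lab's dataset): the same logic written as a nested conditional expression, then spelled out plainly.

    ```python
    # Illustrative only: the kind of line a confusion metric might flag,
    # next to an equivalent that reads plainly.

    # Confusing: nested conditional expressions rely on implicit right-to-left grouping.
    def classify(n):
        return "big" if n > 100 else "negative" if n < 0 else "small"

    # Clearer: the same behaviour, one decision per line.
    def classify_clear(n):
        if n > 100:
            return "big"
        if n < 0:
            return "negative"
        return "small"
    ```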
