SolasAI

Software Development

Philadelphia, Pennsylvania 1,230 followers

We wrap ethical compliance in cool data science!

About us

We build trust in AI! SolasAI is software that detects and removes bias and discrimination from a customer's decisioning models. It works in credit and insurance underwriting, predictive marketing, healthcare, and employment, to name a few use cases. We use AI to fix AI, providing trust and transparency for artificial intelligence, machine learning, and standard statistical models.

We have over 45 years' experience doing this, so our software is designed by the people that large banks, insurers, and healthcare providers trust to help them identify and lower their risk. If you are tired of paying expensive experts who can't seem to agree, and who then leave the hard part of fixing the problems to your expensive and overworked data scientists, then SolasAI is for you.

We follow the latest decisions and signals from courts, regulators, and lawmakers, as well as the latest technology trends in AI and fairness, and we build them into SolasAI so you don't have to figure it out yourself. SolasAI is also flexible enough to adjust to your business without exposing you to unnecessary risks. We help you identify the fairest and best-performing alternatives, and then create just the right amount of data and justification for you to communicate with auditors, regulators, or other stakeholders.

We believe in fairness, but we are practical problem solvers. Bad decisions are bad decisions. We help you make the best decisions while expanding your business and uncovering new possibilities through fairness.

Modelers, are you tired of tools trying to replace you or call your 'babies' ugly? SolasAI is your co-pilot, helping you take your great data science and insights and achieve amazing fairness as well. We solve this together using cool data science. We are your partner, not your competition!

Website
http://www.solas.ai
Industry
Software Development
Company size
2-10 employees
Headquarters
Philadelphia, Pennsylvania
Type
Privately Held
Founded
2021
Specialties
Machine Learning, Artificial Intelligence, fair lending, compliance, risk management, model governance, analytics governance, disparate impact, disparate treatment, AI Bias, algorithmic bias, machine learning bias, proxy detection, automl, software, Ethical AI, AI transparency, AI Trust, AI Trust Risk and Security Management, and AI TRiSM


Locations

  • Primary

    1608 Walnut St.

    Suite 1108

    Philadelphia, Pennsylvania 19103, US



Updates

  • SolasAI

Congratulations to Tori Shinohara on her well-deserved recognition by Chambers and Partners for her work in Financial Services Regulation: Consumer Finance (Enforcement & Regulation)! Great work, Tori!

  • SolasAI

    Check it out!

    Nick Schmidt, Chief Technology and Innovation Officer at SolasAI:

I'm looking forward to speaking tomorrow morning at the 89th Annual National Association of Consumer Credit Administrators' training symposium! I'll be talking to state and federal regulators about fairness in machine learning and AI, focusing on credit underwriting. I think it will be a fun talk, and I'm happy to report that I'm going to roll out some new material (I've gotten very tired of hearing my old stuff). It looks like it's going to draw an engaging group of state and federal regulators, and I'm excited to learn from them and hear about their work. #fairai #consumercredit #modelgovernance https://lnkd.in/ecwsB_7k

    89th Annual Meeting and Regulators' Training Symposium (2024)

    naccaonline.org

  • SolasAI

Looking forward to Nick Schmidt, Yolanda D. McGill, and Carol Evans discussing the responsible use of #ai and #ml in Consumer Financial Services this Thursday at the Conference on Consumer Finance Law. Catch them at 10 AM CT. #responsibleai #machinelearning

  • SolasAI reposted this

    Nick Schmidt, Chief Technology and Innovation Officer at SolasAI:

I'm looking forward to speaking tomorrow at the ASSOCIATION OF LIFE INSURANCE COUNSEL's annual conference! I'll be joining an esteemed panel to discuss "What Can We Learn About AI Governance, Strategy and Testing from Other Regulated Industries?" I'll be sharing the stage with industry leaders Mary Jane Wilson-Bilik of Eversheds Sutherland, Michael Brick of Microsoft, and Laura Jehl of Willkie Farr & Gallagher LLP, with Matthew J. Gaul moderating. The panel promises to be an insightful exploration of how different regulated industries approach AI governance, strategy, and testing, and what the life insurance sector can learn from them. I'm looking forward to a dynamic discussion and the opportunity to share my insights on responsible AI usage, mitigating algorithmic discrimination, and establishing effective model governance processes. #ALIC2024 #AIGovernance #LifeInsurance #ResponsibleAI #MachineLearning #AIethics

  • SolasAI reposted this

    MoreThanFair:

    Today, the MoreThanFair community came together once again in Washington, D.C., to discuss President Biden’s AI Executive Order and its untapped potential. #AI can play a vital role in reducing bias and advancing fairness. MTF members are continuing to work toward a future where #credit is fair and accessible to all. More to come! American Fintech Council, Credit Builders Alliance, National Bankers Association, National Community Reinvestment Coalition - NCRC, National Consumer Law Center, Prosperity Now, UnidosUS (@WeAreUnidosUS), Camino Financial, Cross River, Esusu, LendingClub, Oportun, SolasAI, Stratyfy, Upstart, Zest AI. ➡️ morethanfair.com

  • SolasAI reposted this

    Nick Schmidt, Chief Technology and Innovation Officer at SolasAI:

We're thrilled to share insights on the emerging NIST Generative AI Risk Profile, brought to you by our colleague Patrick Hall, an esteemed AI authority. Collaborating with Patrick and other leading experts, we provide unparalleled support to diverse industries in mastering model risk management for generative AI technologies.

    A common challenge our clients articulate is the novelty and magnitude of the risks associated with generative AI. They're eager to leverage these advancements but often grapple with understanding and mitigating the inherent risks; ensuring customer safety and regulatory compliance remains a paramount concern. The evolving NIST guidance under the AI RMF framework is set to offer crucial benchmarks for safely capitalizing on generative AI's potential, and our role is to guide you through these regulatory challenges while maximizing business outcomes.

    Our team at SolasAI, BLDS LLC, and Global Economics Group possesses over 45 years of expertise applying statistics (now known as "data science") to legal and regulatory compliance issues, as well as significant experience in business optimization (i.e., making algorithms work), with a specialized focus on the fair and equitable use of algorithms. In our work, we have developed MRM programs for major players in financial services and health insurance, conducted LLM audits, testified before the Senate on fairness standards, and are at the forefront of setting benchmarks that regulators will rely on as technology progresses.

    Interested in how we can help your organization navigate these challenges, seize opportunities, and develop a robust risk management program? Please message me or one of my colleagues, Susan Bridge, Steven D., or Darrin Williams. We look forward to working with you! #genAI #mrm #fairai

    Patrick Hall, Machine Learning & AI Risk Management:

I've been getting a lot of questions about the new draft of NIST AI 600-1 Generative AI Risk Profile that is now available for public comment: https://lnkd.in/gTvBVu68. If helpful for my ERM/MRM/AI Governance colleagues, here's some basic information about the document and my thoughts on how it might fit into model or AI governance programs.

    Basic info:

    * Information for submitting much-needed public feedback is available on pg. iii.
    * For this document to make sense, you'll need to read the broader AI Risk Management Framework (AI RMF, NIST AI 100-1), available here: https://lnkd.in/gs7g6iJD.
    * This draft document is a cross-sectoral profile of generative AI (GAI) for the AI RMF. It provides general information and guidance across sectors that fit into the AI RMF's MAP, MEASURE, MANAGE, and GOVERN functions.
    * The AI RMF is a risk-based framework: riskier products, models, or use cases receive more oversight. It is also a socio-technical framework: it seeks to improve the technical and social aspects of AI jointly.
    * Like the AI RMF, no part of the document is a checklist, and adherence is voluntary.

    Some personal thoughts on incorporation into governance programs:

    * The first sections of the document propose 12 GAI risks that can be used to assess the risks of products, models, or use cases and guide risk-tiering. E.g., does a third-party LLM or image generator present intellectual property risks? How severe? What about other risks from the list?
    * The next parts of the document present GAI actions taken from public comment over Fall 2023. Actions are similar to risk controls but address a wider set of tasks than risk acceptance, mitigation, transfer, etc. The AI RMF Playbook (https://lnkd.in/gmb4e6sS) also contains actions. Draft AI 600-1 actions can be matched to AI RMF functions, categories, and subcategories and augment Playbook actions. E.g., an AI RMF Playbook action for GOVERN 1.1 is: "Align risk management efforts with applicable legal standards." Using this profile, we can augment that with additional actions for GAI: "Establish policies restricting the use of GAI to create child sexual abuse materials (CSAM) or other nonconsensual intimate imagery." Each set of actions also suggests which roles might be best suited to apply the actions.
    * The last major part of the document focuses on primary GAI considerations: governance, pre-deployment testing, content provenance (e.g., watermarking), and incident disclosure. These are some of the main activities that should be top-of-mind for effective GAI risk management. Many specific actions relate to these more general topics.

    Finally, it was an honor to contribute to this document with many others at NIST and the GAI Public Working Group. Please read the doc and let NIST know what you think. The more uncorrelated input signals, the better the outcome!

  • SolasAI

Congratulations to the team on our successful SOC 2 Type 2 audit! This will help us help our customers minimize the amplification of #bias and #unfairness in #algorithms, gain deep transparency into and improve the outcomes of their models, and reduce the risk of regulatory actions or litigation. Continue the great work, team!

    Larry Bradley, Chief Executive Officer at SolasAI:

At SolasAI, we believe in designing security and privacy into our software from the ground up to help our customers "Build Trust in AI." We understand that our #AI and #ML #bias and #disparity testing and mitigation software must access and analyze our customers' most sensitive data and business logic to optimize their business and #fairness goals. To that end, I am happy to announce that SolasAI has completed our SOC 2 Type II audit. Our valued customers may rest assured that we will continue to protect them and their algorithms when it comes to fairness, privacy, and security. Congratulations to Mike Wilkerson for leading this effort, and to our partners at KansoCloud, Drata, and AssuranceLab for helping us achieve this goal. Christian Bonilla Dimitris Geragas Nick Schmidt Anita Pandey Brian Hughes Patrick Hall Susan Bridge Steven D. Govind Bangarbale

  • SolasAI

If you missed Nick Schmidt's remarks (and jokes!) at the United States Senate Committee on Banking, Housing, and Urban Affairs hearing on #ai, here are his 4 Key Principles to Consider for AI Regulation:

    Materiality: This principle advocates a risk-based approach to governing AI systems. By focusing more stringent regulation on higher-risk AI applications, resources are allocated more effectively. For example, a company should not spend as much time reviewing a marketing model as it would an underwriting model that enormously impacts both consumers and the business. Such a risk-based approach ensures that the systems with the most significant potential impact are carefully monitored, and it promotes innovation by not overburdening lower-risk initiatives with unnecessary regulatory constraints. SR 11-7 provides a solid foundation for guiding how materiality is assessed in AI regulation.

    Fairness: The principle of fairness is central to the responsible deployment of AI. Establishing a clear understanding and expectation of fair AI practices is crucial, particularly in applications that significantly impact individuals, such as housing. Regulators should set the expectation that bias and discrimination be identified and mitigated in AI systems. Existing frameworks for measuring and mitigating disparate impact, disparate treatment, and proxy discrimination should guide further regulation of AI fairness.

    Accountability: AI systems must have accountability mechanisms, especially those with high impact. This involves giving individuals affected by AI decisions a right to appeal, ensuring that there is recourse for those who may be adversely impacted. Additionally, entities that deploy AI systems irresponsibly should face appropriate consequences.

    Transparency: The principle of transparency mandates clear explanations for decisions made by AI systems. This is fundamental to building trust: understanding the 'why' and 'how' behind AI-driven decisions is crucial for public acceptance and confidence in these technologies, and it is further crucial to ensuring that systems are fair. #solasai SolasAI
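    As a rough illustration of the kind of disparate impact measurement the fairness principle refers to, here is a minimal sketch of one widely used disparity metric, the adverse impact ratio. This is a generic textbook example only, not SolasAI's product code or methodology; the function name and numbers are hypothetical.

    ```python
    # A minimal sketch of one common disparity metric: the adverse impact
    # ratio (AIR). Illustrative only -- not SolasAI's actual methodology.

    def adverse_impact_ratio(protected_approved: int, protected_total: int,
                             reference_approved: int, reference_total: int) -> float:
        """AIR = protected-group approval rate / reference-group approval rate."""
        protected_rate = protected_approved / protected_total
        reference_rate = reference_approved / reference_total
        return protected_rate / reference_rate

    # Hypothetical example: 60 of 100 protected-group applicants approved
    # vs. 80 of 100 in the reference group.
    air = adverse_impact_ratio(60, 100, 80, 100)
    print(f"AIR = {air:.2f}")  # AIR = 0.75

    # Under the common "four-fifths rule" of thumb, an AIR below 0.8 is
    # often treated as a signal of potential disparate impact worth review.
    if air < 0.8:
        print("Potential disparate impact: flag model for review")
    ```

    A metric like this is only a screening signal; deciding whether a disparity is justified, and finding a fairer alternative model, is where the harder analysis begins.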

    Artificial Intelligence and Housing: Exploring Promise and Peril | United States Committee on Banking, Housing, and Urban Affairs

    banking.senate.gov

  • SolasAI reposted this

    Nick Schmidt, Chief Technology and Innovation Officer at SolasAI:

    I'm honored that tomorrow I will provide testimony about AI in housing before the U.S. Senate Subcommittee on Housing, Transportation, and Community Development. I'm looking forward to explaining how AI is a tool that has the potential to provide great benefits to consumers and businesses, but that it also comes with an equally significant risk due to issues such as algorithmic discrimination. I will talk about how a measured but firm regulatory approach that builds on existing model governance and anti-discrimination rules is necessary. I'm also eager to hear the testimony of Lisa Rice of the National Fair Housing Alliance and Vanessa Perry of The George Washington University. Thank you very much to Senator Tina Smith for inviting me to testify. I'm attaching a link to the stream. I hope you get a chance to watch it. https://lnkd.in/eAQ4BUUz #FairHousing #AI #ResponsibleAI

    Artificial Intelligence and Housing: Exploring Promise and Peril | United States Committee on Banking, Housing, and Urban Affairs

    banking.senate.gov

  • SolasAI

We are very honored that the United States Senate Committee on Banking, Housing, and Urban Affairs asked our CTO, Nick Schmidt, to speak on the "Promise and Perils" of #ai in #housing. This important hearing will take place next Wednesday, January 31st. We look forward to hearing him and his fellow speakers discuss this important topic. THE COMMITTEE ON BANKING, HOUSING, AND URBAN AFFAIRS, SUBCOMMITTEE ON HOUSING, TRANSPORTATION, AND COMMUNITY DEVELOPMENT will meet in OPEN SESSION, HYBRID FORMAT to conduct a hearing entitled, "Artificial Intelligence and Housing: Exploring Promise and Peril." The witnesses will be: Ms. Lisa Rice, President and Chief Executive Officer, National Fair Housing Alliance; Dr. Vanessa Perry, Interim Dean and Professor, George Washington University School of Business, and non-resident Fellow, Housing Finance Policy Center, Urban Institute; and Mr. Nicholas Schmidt, Partner and Artificial Intelligence Practice Leader, BLDS LLC, and Founder and CTO, SolasAI.

    Artificial Intelligence and Housing: Exploring Promise and Peril | United States Committee on Banking, Housing, and Urban Affairs

    banking.senate.gov
