In November 2022, OpenAI fired a shot heard ‘round the world: LLMs overtook the legal zeitgeist. Accompanying that arrival, millions of lawyers cried out in unison: “What does this mean for us?” The good news: this means good things for our industry. If we do it right... We are rolling out the red carpet for Damien Riehl (VP at vLex) and his strong piece covering the golden potential of Large Language Models in the legal world. Read the article to find out how understanding the pitfalls, oilfields, and best practices of LLMs can shape your legal AI strategy. 📝 Link to the article here: https://lnkd.in/e8RsnXWT 📘 Link to our Legal AI Strategy report here: https://lnkd.in/eq9t5BkD Enjoy!
Henchman’s Post
Damien Riehl, this is a perfect explanation of the "trust but verify" mantra: "To realize LLMs’ potential to transform the legal industry, we must harness the models’ superhuman speed and abilities, providing what we humans do best: judgment." I liken it to hiring a new employee. We trust them with something small to start. As they succeed or fail, we adjust our approach, coaching them and giving them more responsibility as trust is built. The judgment they must exercise is proportionate to their position. And we hold ourselves accountable as their leader, exercising our own judgment. #legalai #llms #trustbutverify #legaltech #hallucinations #legalresearch
I enjoyed writing this #article — "#LLMs in #legal: Pitfalls, oilfields, and best practices" — link below!
AI for Social Good, Climate, Food Security, Recommerce | Product Leader | GTM Expert | Startup Founder | Board Member | Investor | Advisor | Philanthropist
Fascinating 8 minutes. What are the boundaries around what a human, an individual, owns as their style or creative process, and what are the copyright laws around that in this new era of AI? #aiethics #AIintellectualProperty #openai
Really enjoyed my live 8-minute interview on yesterday's Bloomberg Wall Street Week Daily with David Westin and Romaine Bostick, where we took a deep dive into the law and economics of the recent New York Times - #OpenAI lawsuit. We unpacked the nature of the clear and growing regulatory risk for generative AI platforms, contrasting today's situation with the emergence of Google Books, while also debating the competitive risk that generative AI poses to the New York Times' business model. There's little doubt that a considerable fraction of the world's information will be AI-generated in the future, and we concluded with a conversation about the limits of training AI on machine-generated content and the challenge of maintaining the right mix of human- and AI-generated training data.
Legal Debate Over Content In Training AI
bloomberg.com
“When your technology aims to rewrite the rules of society, it stands that society’s current rules need not apply.” Instead of letting AI do my work for me, I’ve been reading a lot about it. This recent article in The Atlantic on the ScarJo case is the latest example of a clear-eyed look at its implications https://lnkd.in/eW7v8ewW
OpenAI Just Gave Away the Entire Game
theatlantic.com
I'm very pleased to release an embedding model for use in retrieval-augmented generation (RAG), fine-tuned on the decisions of the High Court of Australia. It outperforms OpenAI's embedding model by a significant margin on legal documents! At the heart of the 'retrieval' component of RAG sit the embedding models, which convert text to vector data, allowing documents relevant to a query to be rapidly located and incorporated into LLM responses. Not only can a fine-tuned embedding model increase the performance and accuracy of your RAG application, but it is also significantly cheaper and faster to run than API-based solutions like OpenAI's, especially for large document stores. The model, trained on over 129,000 legal context-question pairs, is released on Hugging Face: https://lnkd.in/gbtXsbZu #llm #largelanguagemodels #rag #retrievalaugmentedgeneration #embedding #huggingface #sentencetransformers #auslaw #ai #ailaw
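The retrieval step the post describes — embed the query, then rank stored documents by vector similarity — can be sketched in plain NumPy. The toy 4-dimensional vectors and document snippets below are illustrative stand-ins; in practice, the embeddings would come from an embedding model such as the fine-tuned sentence-transformers model linked above:

```python
import numpy as np

# Toy "document store": each row stands in for a document embedding.
# Real embeddings would be produced by an embedding model
# (e.g. a fine-tuned sentence-transformers model).
doc_embeddings = np.array([
    [0.90, 0.10, 0.00, 0.10],  # doc 0: contract-related
    [0.10, 0.80, 0.20, 0.00],  # doc 1: tort-related
    [0.85, 0.20, 0.10, 0.00],  # doc 2: also contract-related
])
docs = ["contract formation", "negligence elements", "contract breach remedies"]

def retrieve(query_embedding, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    d = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    top = np.argsort(scores)[::-1][:k]  # indices of the k best matches
    return [(docs[i], float(scores[i])) for i in top]

# A query embedding pointing in the "contract" direction
# surfaces the two contract documents:
results = retrieve(np.array([0.90, 0.15, 0.05, 0.05]))
```

The retrieved snippets would then be prepended to the LLM prompt — the "augmented generation" half of RAG.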
OpenAI Employees Advocate for Protections to Address ‘Serious Risks’ of AI A coalition of current and former employees from leading AI companies, including OpenAI, Google DeepMind, and Anthropic, has united to advocate for greater transparency and protection from retaliation for those who voice concerns about the significant risks associated with artificial intelligence (AI). The group of 13 signatories highlights the critical need for oversight and the establishment of robust whistleblower protections in the industry. Read the full article here: https://lnkd.in/eDGqdv4d
OpenAI Employees Advocate for Protections to Address ‘Serious Risks’ of AI
genigears.com
A recent survey found that 81% of lawyers believe generative AI can be readily applied to legal work. Boon Edam reduced contract review time by a staggering 43% using OpenAI within Summize. See it for yourself: https://lnkd.in/dDv8XTik #CLM #legaltech #openai #contractreview
Building Niche Job Boards (Web3 & AI) | HR Tech Consultant | Expertise in Job Sites SEO, Google Jobs, NLP & AI Solutions, Job Scraping | HR Tech Blogger
The NY Times' request for the destruction of language models trained on its articles was expected. It is a crucial risk for OpenAI and might mark the start of an AI winter. Even if the NYT is looking for creative ways to increase its settlement, the outcome of this claim will be crucial for two reasons: 1) a chain of claims from other content providers could force OpenAI and other LLM startups to stop operating and render their business model useless; 2) legal risk could halt Gen AI adoption in large enterprises, which would cut off funding and pop the current AI bubble. What remains unclear is how open-source models will exist post-NYT claim. Good luck ordering destruction there.
OpenAI insiders’ open letter warns of ‘serious risks’ and calls for whistleblower protections (CNN) http://dlvr.it/T7rgJ9 #ai #artificialintelligence