Government communication can be complex, and accessibility is key. In this blog series, Florian Kunneman and Joleen van der Zwan explain how AI and language technology may help simplify communication for Dutch citizens. Learn about the research collaboration between RVO and Utrecht University to make government letters more accessible using Large Language Models (LLMs). #AI #GovernmentCommunication #LLM
-
Writing good prompts for large language models is a skill. We have become accustomed to using keywords to search the internet, but generative AI needs much more guidance. Frameworks like COSTAR have been suggested as one approach, but even that can feel like a lot of work.

Sometimes, I will just ask the model to create a prompt, edit it to match what I am looking for, and then run it. You can also run a prompt and then ask the model to help optimize it. Specifically, you can say something like: "Please help optimize the prompt. First, ask me three questions. Then wait for my answers. Finally, use the answers to optimize the prompt."

Finally, it pays to be nice. Being polite to a large language model often produces better results; there's even research showing this - https://lnkd.in/ekGpYpFu.

Hope this is useful. What are your tips / tricks for getting better output from LLMs?

PS: This is the COSTAR framework:
Context (C): Providing background information helps the LLM understand the specific scenario.
Objective (O): Clearly defining the task directs the LLM’s focus.
Style (S): Specifying the desired writing style aligns the LLM’s response.
Tone (T): Setting the tone ensures the response resonates with the required sentiment.
Audience (A): Identifying the intended audience tailors the LLM’s response to that audience.
Response (R): Specifying the output format, like text or JSON, makes the LLM’s output usable and helps build pipelines.
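As a quick illustration, here is a minimal sketch of a COSTAR-structured prompt assembled in Python. The field values are my own example, not part of the framework itself; any LLM client can then be given the resulting string.

```python
# Minimal COSTAR prompt builder. The six fields mirror the framework above;
# the example values are illustrative and should be replaced with your own.

def costar_prompt(context: str, objective: str, style: str,
                  tone: str, audience: str, response: str) -> str:
    return "\n".join([
        f"# CONTEXT\n{context}",
        f"# OBJECTIVE\n{objective}",
        f"# STYLE\n{style}",
        f"# TONE\n{tone}",
        f"# AUDIENCE\n{audience}",
        f"# RESPONSE\n{response}",
    ])

prompt = costar_prompt(
    context="Our clinic is launching a new patient portal next month.",
    objective="Draft an announcement email explaining how to sign up.",
    style="Clear and instructional, like a well-written help article.",
    tone="Warm and reassuring.",
    audience="Patients aged 60+, not especially tech-savvy.",
    response="Plain-text email, under 200 words.",
)
print(prompt)  # paste into (or send to) your LLM of choice
```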
-
Want to become better at leveraging LLMs (ChatGPT, Gemini, Copilot, etc.) in your domain? It starts with writing a good prompt. Read the insightful post above by Kapil Parakh, MD, MPH, PhD, to learn more about the COSTAR framework and accelerate your learning. Doc, you are awesome for sharing these research-backed gems!
-
Can AI save languages? No. Why? Because of this ... "LLMs may get better and better at sourcing certain kinds of information or completing certain kinds of tasks, but they are finders, not creators; they are mimics, not conversation partners; they are machines, not people." https://lnkd.in/gB8w5frs
-
Interesting article here on AI translation, the web and a potential existential crisis for low-resource languages. Can be summed up as "when culture is unseen by technology" https://lnkd.in/e6xny3Em
-
Rules-based translation was the original machine translation, and most consider it outdated. So why is it now being combined with cutting-edge AI? Data scientists and linguists have found that combining LLM machine translation with rules-based machine translation yields better results than LLMs alone for low- and no-resource languages (languages without large monolingual datasets to train LLMs on). https://lnkd.in/gDS_2HWc
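For illustration only, here is a minimal sketch of what such a hybrid might look like. The functions and prompt are my own assumptions, not the approach described in the linked article: a toy rules-based step guarantees lexical coverage, and `llm_complete` stands in for whatever LLM client you use.

```python
# Hypothetical hybrid pipeline: a rules-based draft, then LLM post-editing.

def rules_based_translate(text: str, lexicon: dict[str, str]) -> str:
    """Toy stand-in for a rules-based MT engine: word-by-word lexicon lookup.
    A real engine would also apply morphological and reordering rules."""
    return " ".join(lexicon.get(word.lower(), word) for word in text.split())

def llm_post_edit(source: str, draft: str, llm_complete) -> str:
    """Ask an LLM to smooth the draft without changing its meaning.
    `llm_complete` is any callable mapping a prompt string to a completion."""
    prompt = (
        "You are a translation post-editor.\n"
        f"Source text: {source}\n"
        f"Draft translation from a rules-based system: {draft}\n"
        "Fix grammar and fluency, but keep every content word and do not "
        "add or drop information. Return only the edited translation."
    )
    return llm_complete(prompt)

def hybrid_translate(text: str, lexicon: dict[str, str], llm_complete) -> str:
    draft = rules_based_translate(text, lexicon)
    return llm_post_edit(text, draft, llm_complete)
```

The design intuition: the rules-based pass supplies vocabulary the LLM never saw in training, while the LLM supplies the fluency the rules can't.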
-
Right on point … IMHO. “In this paper we argue that key, often sensational and misleading, claims regarding linguistic capabilities of Large Language Models (#LLMs) are based on at least two unfounded assumptions: the assumption of language completeness and the assumption of data completeness. Language completeness assumes that a distinct and complete thing such as ‘a natural language’ exists, the essential characteristics of which can be effectively and comprehensively modelled by an #LLM. The assumption of data completeness relies on the belief that a language can be quantified and wholly captured by data.” #AI https://lnkd.in/dBDMajcq
-
Did you know that, at government level, specialised translators have the job of turning complicated official documents into easy-to-read text? It’s an expensive and time-consuming task, and #SITAlumni SUMM AI tries to solve that problem by using large language models to simplify any kind of text quickly and inexpensively. SUMM AI’s software can be compared to Google Translate: you enter any complicated text, push a button, and it automatically generates a plain-language translation. But rather than translating text word for word, the AI distils the meaning of the original, rewriting it with shorter sentences, simpler syntax, and explanations of difficult terms. Watch the video below to find out more and read the full article here: https://lnkd.in/eJc8K36w #ChangeTomorrowToday #SocialInnovation #SocialEntrepreneurship #SocEnt #Impact #SIT23Stockholm #EIBInstitute #EIB #SDGs #GlobalGoals #Stockholm #Sweden #German #PlainText #AI #ArtificialIntelligence #Language #LanguageModels
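A minimal sketch of the general approach (not SUMM AI’s actual implementation), assuming any chat-style LLM; `llm_complete` is a placeholder for whatever client you use. The prompt encodes the idea described above: distil meaning rather than translate word for word.

```python
# Illustrative text-simplification call; the prompt wording is my own example.

SIMPLIFY_PROMPT = """Rewrite the following text in plain language.
Rules:
- Use short sentences and simple syntax.
- Explain difficult terms in parentheses the first time they appear.
- Keep all facts; do not add new information.

Text:
{text}

Plain-language version:"""

def simplify(text: str, llm_complete) -> str:
    """`llm_complete` is any callable mapping a prompt string to a completion,
    e.g. a thin wrapper around your LLM provider's chat endpoint."""
    return llm_complete(SIMPLIFY_PROMPT.format(text=text))
```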
-
📝 Introducing our latest report – “AI speaks Polish. The ecosystem of open language models in Poland”! It presents the research and conclusions of Alek Tarkowski (Open Future Foundation), Kuba Piwowar (Fundacja Centrum Cyfrowe), and Michał Owczarek (Uniwersytet SWPS). The goal of the report is to provide a case study of Poland’s ecosystem for creating open AI models for the Polish language 🇵🇱

Small language models are filling the gap left by large commercial models, which are not adapted to the Polish language or its cultural nuances. The work on these models serves as an example of how to build effective alternatives to dominant players.

The report focuses on two key projects: building the SpeakLeash | Spichlerz language corpus and using it to create the Bielik model, as well as the activities of the #PLLuM consortium (Polish Large Language Model). Based on interviews with the creators of Polish models, the authors outline the development processes and their challenges, and summarise the lessons learned from the achievements so far.

👉 See the full report on our website – in Polish now, and in English next week! 🔗 https://lnkd.in/d7JTr4G8

Image: Portrait of Adam Mickiewicz, Austrian National Library [Public Domain, via Europeana.eu]
-
Interesting article from Slator reporting that 1 in 3 LSPs have implemented LLMs in their workflows. It contrasts with an article earlier today from the BBC, which concludes that AI products like ChatGPT are "much hyped but not much used". So I guess the localization industry IS ahead of most when it comes to AI! https://lnkd.in/ecKS3Xps #localization #AI
A new Slator survey shines a 🔦 spotlight on AI adoption, finding that one in three language service providers have implemented LLMs into their workflows.
-
I recommend reading this report on the development of genAI in Poland. I was involved in writing it with Kuba Piwowar and Alek Tarkowski; we talked to leaders and experts in the field. My favorite conclusion is why it is worthwhile to build Polish LLMs: the process creates know-how in the industry and allows institutions to store and process their data locally.
-
Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer
It's fascinating how Florian Kunneman and Joleen van der Zwan are exploring the potential of AI to bridge communication gaps in government. The use of Large Language Models to simplify official correspondence resonates with historical efforts to make legal documents more accessible, like the simplification of contracts during the Industrial Revolution. This shift towards user-friendly language aligns with the growing demand for transparency and inclusivity in public services. Given the potential impact on citizen engagement, it's intriguing to consider: how might these LLMs be trained to not only simplify language but also personalize communication based on individual needs and comprehension levels?