With great power comes great instability? 🔥 Channeling the enthusiasm of large language models 🤖 into a useful tool for drug discovery 💊 requires more than just prompting. It takes a village of tools, data, and ... real people.

Best practices:
💊 Ground the model to avoid hallucination
💊 In doing so, always tailor to the specific biological context
💊 Enforce a structured output format to enable structured post-processing
💊 Set up a rigorous and efficient workflow to keep the precious human in the loop
💊 Take care of data privacy

#drugdiscovery #llm #ai #machinelearning #biotech #genai #noblesseoblige
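The structured-output practice above can be sketched in a few lines: prompt the model for JSON only, then validate the reply before any post-processing. This is a minimal illustration, not the authors' pipeline; the field names, prompt wording, and the `mock_llm` stand-in (replacing a real GPT-4 call) are all assumptions for the example.

```python
import json

# Hypothetical schema for a gene-disease annotation task; field names are illustrative.
REQUIRED_FIELDS = {"gene": str, "association": str, "confidence": float}

PROMPT_TEMPLATE = (
    "For the gene {gene}, summarize its association with {disease}. "
    "Respond ONLY with a JSON object with keys: gene, association, confidence."
)

def mock_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned JSON reply for the demo."""
    return '{"gene": "EGFR", "association": "oncogenic driver in NSCLC", "confidence": 0.9}'

def parse_structured(raw: str) -> dict:
    """Parse and validate the model reply so downstream steps receive clean data."""
    data = json.loads(raw)  # raises an exception on malformed JSON
    for field, ftype in REQUIRED_FIELDS.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], ftype):
            raise ValueError(f"wrong type for field: {field}")
    return data

prompt = PROMPT_TEMPLATE.format(gene="EGFR", disease="lung cancer")
result = parse_structured(mock_llm(prompt))
```

Rejecting malformed replies at this boundary is also where the human-in-the-loop step fits naturally: failed parses can be routed to a reviewer instead of silently entering the pipeline.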
Information Security Officer | IT-Security Engineer
1y

"After testing numerous models across different biological contexts, we found that GPT-4 seems on par with other state-of-the-art models, such as PaLM 2 or Llama 2. Indeed, the LLM technology appears to matter less than the prompt design and output format."

That is an interesting finding. Did you use specific techniques like Tree of Thought or similar methodologies in your testing, or pure prompts, and possibly even instruction compression to keep the ideas without spending an excess of tokens on the role and prompt?