CDD Vault Update (January 2025): New Curve Fits, Register an AI-Generated Molecule, and more

What’s New?
➡️ New Curve Fits: Exponential Decay & Scatter Plot curve fits.
➡️ Simplified Registration: Instantly register AI-generated molecules.
➡️ Custom Optimization Scores: Define and calculate molecule scores in real time.

Read the full update here: https://lnkd.in/gguxZXXy
More Relevant Posts
-
A Human's Guide to Machine Learning - a very good read on #MachineLearning and how it aligns with business needs. Thanks to Altair. https://lnkd.in/deJfF-tD Altair RapidMiner
-
How to select an #LLM for your next use case? Let's see 👇

When selecting an LLM, several critical factors must be evaluated across three main dimensions.

From a technical perspective, the parameter count indicates the model's complexity and potential capabilities, while the context window determines how much information it can process at once. The model's architecture and training data quality directly influence its understanding and generation abilities.

Performance-wise, inference speed is crucial for real-time applications and user experience, while accuracy ensures reliable outputs. The model's reliability and consistency across different tasks and inputs are essential for production deployments.

Operational considerations include cost, which covers both training and inference expenses, and scalability, which determines how well the model can handle increasing workloads and user demands.

These factors are interconnected: larger models may offer better accuracy but come with higher computational costs and slower inference, and a wider context window improves understanding of longer texts but requires more resources to process. The ideal LLM therefore depends heavily on the specific use case, available resources, and performance requirements of the application.

Here is my complete article on different strategies to enhance your LLM's performance: https://lnkd.in/g6tw5M8R

Whatever LLM you choose, a robust data platform for your AI applications is highly recommended. SingleStore, a versatile data platform, supports all data types and handles vector data efficiently. Try SingleStore for FREE: https://lnkd.in/gQ6zGCXi
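As a rough illustration of how these trade-offs can be weighed side by side, here is a minimal sketch (my own addition, not tied to any vendor; the model names, weights, and numbers are made up) that scores candidate models across the technical, performance, and operational dimensions discussed above.

```python
# Hypothetical example: weighted scoring of candidate LLMs across the
# dimensions discussed above. All model names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    accuracy: float            # 0-1, measured on your own eval set
    context_window: int        # tokens
    latency_ms: float          # median inference latency
    cost_per_1k_tokens: float  # USD, inference

def score(c: Candidate, w: dict) -> float:
    """Combine normalized criteria into a single comparable score."""
    return (
        w["accuracy"] * c.accuracy
        + w["context"] * min(c.context_window / 128_000, 1.0)
        + w["speed"] * (1.0 / (1.0 + c.latency_ms / 1000))
        + w["cost"] * (1.0 / (1.0 + c.cost_per_1k_tokens))
    )

weights = {"accuracy": 0.4, "context": 0.2, "speed": 0.2, "cost": 0.2}
candidates = [
    Candidate("model-a", accuracy=0.86, context_window=128_000,
              latency_ms=900, cost_per_1k_tokens=0.010),
    Candidate("model-b", accuracy=0.78, context_window=32_000,
              latency_ms=250, cost_per_1k_tokens=0.002),
]
for c in sorted(candidates, key=lambda c: score(c, weights), reverse=True):
    print(f"{c.name}: {score(c, weights):.3f}")
```

The weights are where "it depends on your use case" lives: a latency-critical chatbot and an offline document pipeline would rank the same candidates very differently.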
-
Hear the inside story on how to unlock your data for AI effortlessly! Discover Featrix, the tool that assesses your data's predictive power without the need for AI expertise or lengthy data science projects. Learn more from my interview with co-founder Pawel Zimoch: https://featrix.ai
-
DADA X is a Different Kind of Agent

We're seeing "agentic workflows" that are essentially choreographed sequences of LLM calls - whether static chains or routed workflows. When we talk about DADA X as an agent, we mean something fundamentally different.

DADA X is an event-driven agent that uses causal activity models rather than data models. Instead of AI models routing between tasks, our Agent uses native causal operators to maintain coherence across event streams and event clouds. At Decision-Zone, we’ve operationalized causality as a first-class construct, embedding the understanding and management of cause-and-effect relationships directly into the design, architecture, and operation of a system.

When causality is a first-class construct:
- Systems operate proactively, preventing issues before they escalate.
- Interventions are precise and deterministic, driven by causal understanding.
- The system handles complexity and emergent behaviors effectively, as it can trace how complex events interact and intervene at the root cause.

Our Agent actively shapes and controls event flows in real time. Instead of executing predefined tasks, it captures human intent and operates at a higher level of abstraction - a layer beyond the traditional network layers and the application layer where AI agents typically operate.

With DADA X, events are not attributes of nodes but occurrences on edges. An event represents a change in the state of a relationship (e.g., a message sent, a signal received, a decision made). By focusing on events, DADA X operates at the most granular and actionable layer of system behavior. Nodes are inherently local - they represent isolated perspectives within a system. Edges, however, span the entire network, providing the connective framework through which global coordination and causality emerge. By controlling the edges, ours is a truly Global Agent that can enforce causal coherence across complex distributed systems.

DADA X operates autonomously at the event level, maintaining system-wide causal coherence in real time and preventing impossible states and anomalies. It can autonomously detect, prevent, and adapt to new patterns while ensuring system-wide correctness. The Agent can halt events that might lead to an undesired outcome and can enforce constraints by understanding not just what is happening, but why it is happening.

We're not about chaining AI calls... instead we are "edging out the nodes." It’s an inversion of the hierarchy: the edges, or relationships, become the primary subject of computation, analysis, and control.
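To make the "events live on edges" idea concrete for readers, here is a tiny, purely illustrative sketch (this is not DADA X's engine or API; every name in it is hypothetical) of an agent that models events as state changes on relationships and halts any event whose causal prerequisites have not yet been observed.

```python
# Illustrative sketch only: events modeled as occurrences on edges
# (relationships), with a guard that blocks causally invalid sequences.
# This is NOT DADA X's implementation; all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EdgeEvent:
    source: str   # node on one end of the relationship
    target: str   # node on the other end
    kind: str     # e.g. "message_sent", "decision_made"

@dataclass
class CausalGuard:
    # kind -> kinds that must already have occurred on the same edge
    prerequisites: dict
    history: list = field(default_factory=list)

    def admit(self, event: EdgeEvent) -> bool:
        """Allow the event only if its causal prerequisites were observed."""
        needed = self.prerequisites.get(event.kind, set())
        seen = {e.kind for e in self.history
                if (e.source, e.target) == (event.source, event.target)}
        if needed - seen:
            return False          # halt: would create an impossible state
        self.history.append(event)
        return True

guard = CausalGuard(prerequisites={"payment_settled": {"order_placed"}})
print(guard.admit(EdgeEvent("customer", "shop", "payment_settled")))  # False
print(guard.admit(EdgeEvent("customer", "shop", "order_placed")))     # True
print(guard.admit(EdgeEvent("customer", "shop", "payment_settled")))  # True
```

The point of the toy is the inversion described in the post: the check is keyed on the relationship (source, target), not on either node in isolation.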
-
Dataiku v13 now supports Multimodal ML for advanced AutoML modeling, meaning you can integrate images, text, and tabular data in a single model - pushing the boundaries of data-driven insights and innovation.

Here's what's under the hood:
* Optimised feature extraction from LLM Mesh connections, selecting the most suitable embedding models per data type. This ensures that each modality is processed with the most advanced, context-specific models available.
* Governed, resource-efficient model development: by using embeddings directly from LLM Mesh, administrators gain enhanced security, cost efficiency, and control over model lifecycles, all while reducing computational overhead and ensuring governed use of embeddings across datasets.
* For data scientists and ML engineers, Multimodal ML opens the door to enriched model context and great flexibility in feature engineering, while meeting enterprise-level requirements for security and scalability.

Where would you deploy Multimodal ML in your workflows? Is anyone leveraging this already for their own use cases?
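For readers wondering what multimodal feature engineering looks like mechanically, here is a generic sketch (not Dataiku's or LLM Mesh's API; the embed_* functions are placeholders for whatever embedding models your platform exposes) of fusing per-modality embeddings with tabular features before fitting an ordinary classifier.

```python
# Generic illustration of multimodal feature fusion; the embed_* functions
# are placeholders for whatever embedding models your platform provides.
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed_text(texts):
    # Placeholder: return fixed-size text embeddings (e.g. from an LLM).
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 8))

def embed_image(paths):
    # Placeholder: return fixed-size image embeddings (e.g. from a vision model).
    rng = np.random.default_rng(1)
    return rng.normal(size=(len(paths), 8))

texts = ["red running shoe", "leather office chair"]
images = ["shoe.jpg", "chair.jpg"]
tabular = np.array([[59.99, 0.4], [129.00, 12.5]])  # price, weight_kg
labels = np.array([0, 1])                           # product subcategory

# Fuse modalities into one feature matrix, then train any tabular model.
X = np.hstack([embed_text(texts), embed_image(images), tabular])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```

In a governed setup, the placeholder embedders would be replaced by centrally managed embedding connections rather than ad hoc model calls.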
-
Several people are recommending the Llama 3.1 paper, titled "𝐓𝐡𝐞 𝐋𝐥𝐚𝐦𝐚 3 𝐇𝐞𝐫𝐝 𝐨𝐟 𝐌𝐨𝐝𝐞𝐥𝐬", and rightly so. It is a pretty lengthy read; however, if you can, I would highly recommend giving it a read.

Why I think it is worth reading:
- Extensive information on data for pre- and post-training
- Information on the speech and vision adapters
- Model architecture details, such as the use of Grouped Query Attention (GQA) and RoPE
- Information on the infrastructure and the complexities/issues they faced - check Table 5 on the root-cause categorisation of unexpected interruptions during pre-training
- Personally, the subsection on Reliability and Operational Challenges on page 13 was pretty interesting for seeing what kinds of issues usually happen on large clusters, in this case 16K GPUs
- For anyone working in the field, sections 3 and 4 are gems! An amazing level of detail about every aspect of model training, data, refining, issues, etc.
- In the post-training section I really liked section 4.3.2 on multilinguality and the one on tool use
- Then pretty comprehensive information on results and safety
- For anyone serving a large customer base like Meta, inference is an extremely important topic; no wonder this paper has a whole section on it
- And of course, given its multimodality, there are also dedicated sections on the vision and speech parts

Of course, I did not read the whole paper - I skimmed it and spent some time on the parts that caught my attention - but I am pretty certain it is a definite read for anyone curious about the topic. Each section is so comprehensive that you can finish one section at a time when possible, instead of going through the whole paper at once.

Paper -> https://lnkd.in/g3mvhuwe
_________________________________
✅ Follow me for regular updates and insights on Generative AI and my journey as a CTO in AI product development
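For the architecture bullet above, here is a toy NumPy sketch of the grouped-query attention idea (fewer key/value heads than query heads, shared within groups). It is my own illustration for intuition, not code from the paper; causal masking and RoPE are omitted for brevity.

```python
# Toy illustration of grouped-query attention (GQA): n_kv_heads < n_q_heads,
# and each group of query heads shares one key/value head. Not Llama 3 code;
# causal masking and rotary embeddings are omitted to keep it short.
import numpy as np

def gqa(x, wq, wk, wv, n_q_heads, n_kv_heads):
    seq, d_model = x.shape
    head_dim = d_model // n_q_heads
    group = n_q_heads // n_kv_heads

    q = (x @ wq).reshape(seq, n_q_heads, head_dim)
    k = (x @ wk).reshape(seq, n_kv_heads, head_dim)
    v = (x @ wv).reshape(seq, n_kv_heads, head_dim)

    outs = []
    for h in range(n_q_heads):
        kv = h // group                      # query head h shares kv head kv
        scores = q[:, h] @ k[:, kv].T / np.sqrt(head_dim)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        outs.append(weights @ v[:, kv])
    return np.concatenate(outs, axis=-1)     # (seq, d_model)

rng = np.random.default_rng(0)
d_model, n_q, n_kv = 16, 4, 2
x = rng.normal(size=(5, d_model))
wq = rng.normal(size=(d_model, d_model))
wk = rng.normal(size=(d_model, n_kv * (d_model // n_q)))
wv = rng.normal(size=(d_model, n_kv * (d_model // n_q)))
print(gqa(x, wq, wk, wv, n_q, n_kv).shape)   # (5, 16)
```

The practical payoff is the smaller K/V projection (and KV cache) relative to standard multi-head attention, which matters for the inference concerns the paper discusses.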
-
This is a fantastic breakdown! 👏 Choosing the right LLM truly requires a balance across technical, performance, and operational factors, each critical for optimizing impact in a specific use case. The points on parameter size and context window are spot-on, as these choices directly influence the model’s complexity, understanding, and speed. Performance considerations like inference speed and accuracy are equally crucial, especially for real-time applications where user experience is key. And operationally, cost and scalability must be front-of-mind, particularly as larger models can become resource-intensive. Your article provides valuable strategies for enhancing LLM performance, which will be incredibly useful for anyone looking to fine-tune their AI approach. Also, the tip on using SingleStore as a robust data platform is a great add—having a flexible, high-performing data solution is essential in today’s AI landscape. Thank you for sharing this comprehensive guide and resource! 🚀 #LLMSelection #AIOptimization #SingleStore #AIPlatforms #MachineLearning
-
We’re proud to partner with Linkurious. Learn how Linkurious with #SenzingInside will help your organization build an entity-resolved knowledge graph: a complete 360° view of entities of interest and their relationships that is reliable, visual, and easy to analyze. #EntityResolution #EntityResolvedKnowledgeGraph #SenzingPartner
We're happy to announce our partnership with Senzing, the leading #entityresolution AI provider. This partnership will enable our users to identify duplicate records across data sources and build an accurate unified #knowledgegraph. This dynamic duo, combining graph technology & entity resolution, will allow users to:
🔎 Easily uncover hidden anomalies & key insights
✨ Gain clarity & make context-based decisions, faster

Learn more 👇 https://lnkd.in/eeaKGMk5

#DataAnalysis #GraphIntelligence
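For anyone new to the concept, here is a deliberately naive sketch (my own illustration, not Senzing's or Linkurious's software) of the core idea: duplicate records from different sources are resolved into one entity before that entity becomes a node in the knowledge graph.

```python
# Naive illustration of entity resolution feeding a knowledge graph:
# records from two sources merge into one entity when a normalized key
# matches. Real entity resolution uses far richer matching than this.
from collections import defaultdict

records = [
    {"source": "crm",     "name": "ACME Corp.",  "country": "US"},
    {"source": "billing", "name": "Acme Corp",   "country": "US"},
    {"source": "crm",     "name": "Globex Ltd.", "country": "UK"},
]

def key(record):
    # Deliberately simple normalization: lowercase, strip punctuation/suffixes.
    name = record["name"].lower().replace(".", "").replace(",", "")
    for suffix in (" corp", " ltd", " inc"):
        name = name.removesuffix(suffix)
    return (name, record["country"])

entities = defaultdict(list)
for r in records:
    entities[key(r)].append(r)

# Each resolved entity becomes one node; its source records link to it as edges.
for entity_key, members in entities.items():
    print(entity_key, "<-", [m["source"] for m in members])
```

Production entity resolution replaces the toy key() rule with far more sophisticated matching and scoring; the sketch only shows where deduplication sits in the pipeline.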
-
Nyckel has a new model type! It’s called multimodal classification, and it allows for models trained on both image and text data (previously it was one or the other). A retailer, for instance, could use a product’s image and name to autotag it to the right subcategory. This new function type adds to Nyckel’s suite of models, including multiclass and multilabel classification, image and text search, OCR, box detection, and center detection. If you’re interested in testing it, send us a note, and we'll enable it. https://lnkd.in/g2iFV9tf