Worried about AI but feel you don’t understand it? Physicist and AI aficionado Matt Hodgson reviews "Why Machines Learn: The Elegant Maths Behind Modern AI" by Anil Ananthaswamy, concluding it's “an entertaining journey into the mind of a machine”. https://lnkd.in/ecWSCnG9
Does anyone worry about the electricity bill? Money 💰 talks, so check your electricity bill first, for reality: AI has doubled the electricity bill of a C-level friend of mine 😥. Is it worthwhile?

Then let's try this fact-check question to see if AI works: is there any data technology or solution today that can interpret and answer the following fundamental, realistic Chinese-English multilingual BI questions? With our intellectual property (IP), a copyrighted multilingual metadata, we can provide real-time answers, as evidence, for policy and decision making.

"Who, in the Ontario province of Canada, has new US patents granted on the nearest Tuesday, when the USPTO releases the newly granted US patents on a weekly basis?"

"Who, in the Jiangsu (江蘇) province of China, has new US patents granted on the nearest Tuesday, when the USPTO releases the newly granted US patents on a weekly basis?"

Metadata is the enabler that lets us find the data we want. Without metadata, NO data can be found or retrieved, even by the most advanced technologies: AI, NVIDIA chips, supercomputers, etc. https://lnkd.in/g-aJFnXR Our IP can also make your data service UNIQUE globally.
How AI mathematicians might finally deliver human-level reasoning
Artificial intelligence is taking on some of the hardest problems in pure maths, arguably demonstrating sophisticated reasoning and creativity – a big step forward for AI. #AI #AIMath
How AI mathematicians might finally deliver human-level reasoning
newscientist.com
Make sure to join us on August 8, 2024 for the virtual AI, Machine Learning and Computer Vision Meetup! Register for the event: https://lnkd.in/dFfxgnu8

We have three great talks scheduled, including:

**Evaluating RAG Models for LLMs: Key Metrics and Frameworks**
Evaluating model performance is key to ensuring the effectiveness and reliability of LLMs. In this talk, we will look into the intricate world of RAG evaluation metrics and frameworks, exploring the various approaches to assessing model performance. We will discuss key metrics such as relevance, diversity, coherence, and truthfulness, and examine various evaluation frameworks, ranging from traditional benchmarks to domain-specific assessments, highlighting their strengths, limitations, and potential implications for real-world applications.

About the Speaker
Abi Aryan 🦉 is the founder of Abide AI and a machine learning engineer with over eight years of experience in the ML industry building and deploying machine learning models in production for recommender systems, computer vision, and natural language processing, across a wide range of industries such as ecommerce, insurance, and media and entertainment. Previously, she was a visiting research scholar at the Cognitive Sciences Lab at UCLA, where she worked on developing intelligent agents. She has also authored research papers on AutoML, multi-agent systems, and LLM cost modeling and evaluations, and is currently writing LLMOps: Managing Large Language Models in Production for O’Reilly Publications.

#computervision #ai #artificialintelligence #machinevision #machinelearning #datascience
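To make the metrics discussion concrete ahead of the talk, here is a minimal sketch (not from the talk itself) of one embedding-based way such scores are often computed. The `embed` function is a hypothetical stand-in for any sentence-embedding model; the toy random vectors exist only so the snippet runs end to end.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real sentence-embedding model.
    Returns a deterministic toy vector so the sketch is runnable."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rag_scores(question: str, contexts: list[str], answer: str) -> dict:
    q, a = embed(question), embed(answer)
    ctx = [embed(c) for c in contexts]
    return {
        # relevance: does the answer address the question?
        "answer_relevance": cosine(q, a),
        # faithfulness proxy: is the answer close to some retrieved chunk?
        "faithfulness": max(cosine(a, c) for c in ctx),
        # retrieval quality: are the retrieved chunks related to the question?
        "context_relevance": float(np.mean([cosine(q, c) for c in ctx])),
    }

print(rag_scores("What is LoRA?",
                 ["LoRA fine-tunes low-rank adapters."],
                 "LoRA adapts LLMs with low-rank matrices."))
```

With a real encoder swapped in for `embed`, the same loop becomes a usable first-pass evaluation harness; the talk's frameworks go well beyond this single-score view.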
🔥 5 Game-Changing Machine Learning Papers Every Developer Should Know About

1. 📚 "Attention is All You Need"
The paper that started the AI revolution! Introduced the Transformer architecture that powers ChatGPT and other LLMs. Key innovation: the self-attention mechanism that lets models process text more efficiently than ever before. This is literally why we have modern AI! (A minimal sketch of self-attention follows at the end of this post.)

2. 🌳 "Neural Networks are Decision Trees"
Ever wondered what's happening inside neural networks? This paper shows how they can be mapped to decision trees, making these "black boxes" more interpretable. A game-changer for understanding how AI actually makes decisions!

3. ⚠️ "Cross-Validation Bias in Unsupervised Preprocessing"
A critical read for ML practitioners! Shows how common preprocessing practices can give overly optimistic results. Essential for building truly reliable models that work in production.

4. 💡 "LoRA: Low-Rank Adaptation of Large Language Models"
Tired of expensive model training? LoRA introduces a clever technique to fine-tune massive language models using a fraction of the resources. Perfect for teams wanting to customize LLMs without breaking the bank!

5. 🧠 "Grokking: Generalization Beyond Overfitting"
Mind-bending research showing how models can suddenly "understand" patterns after appearing to overfit. Challenges everything we thought we knew about training on small datasets!

#MachineLearning #AI #DeepLearning #DataScience #TechPapers

Have you read any of these before? Let me know in the comments! 👇 Sharing this to help fellow developers stay updated on groundbreaking ML research. Follow me for more tech insights! 🚀
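As promised in item 1, here is a minimal numpy sketch of the scaled dot-product self-attention from "Attention is All You Need". The random weight matrices are purely illustrative; a trained Transformer learns them, and the real architecture adds multiple heads, masking, and projections.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq, seq) pairwise token affinities
    weights = softmax(scores, axis=-1)   # each token attends over all tokens
    return weights @ V                   # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 16))             # 5 tokens, model dimension 16
Wq, Wk, Wv = (rng.standard_normal((16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 16): one output per token
```

The key property, and the reason it beat recurrent models, is that every token attends to every other token in a single parallel matrix operation rather than step by step.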
Reinventing Machine Learning: Meet Kolmogorov-Arnold Networks (KANs)!

Hey tech enthusiasts and AI aficionados! We're on the brink of a revolution in machine learning with the introduction of Kolmogorov-Arnold Networks (KANs), a groundbreaking advancement that promises to redefine our approach to AI and neural networks.

KANs are inspired by the Kolmogorov-Arnold representation theorem and offer a novel alternative to traditional Multi-Layer Perceptrons (MLPs). They feature learnable activation functions on edges, replacing linear weights with univariate functions parameterized as splines. This innovative design leads to smaller, more efficient computation graphs that outperform MLPs in accuracy and interpretability on small-scale tasks.

Key highlights of KANs:
- Incredible Efficiency: KANs are poised to deliver up to 10 times more efficiency for large language models, potentially marking the dawn of Machine Learning 2.0.
- Enhanced Accuracy: Smaller KANs achieve comparable or superior accuracy to larger MLPs in function fitting tasks.
- Improved Interpretability: KANs offer intuitive visualization and interaction, making them powerful tools for scientific discovery.
- Faster Neural Scaling: KANs exhibit faster scaling laws than MLPs, opening new possibilities for AI advancements.

This new paradigm has shown immense potential, helping scientists rediscover mathematical and physical laws and proving to be a valuable collaborator in fields like mathematics and physics. Despite some challenges, including potential overfitting and the need for further research, KANs are a promising step towards more efficient and interpretable AI models.

Let's dive into the future of ML with KANs and explore how these networks can push the boundaries of what's possible in deep learning. 🔍

#AI #MachineLearning #DeepLearning #NeuralNetworks #Innovation #Technology #Science #MLP #KAN #FutureTech #TechInnovation
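For readers who want the core idea in code: below is a toy numpy sketch of a single KAN layer. Every edge carries its own learnable univariate function, and each output node simply sums its incoming edge functions. The paper parameterizes these functions as B-splines; Gaussian bumps on a fixed grid stand in here to keep the sketch short, so this is an illustration of the concept, not the authors' implementation.

```python
import numpy as np

class ToyKANLayer:
    """One KAN layer: each edge (input i -> output j) has its own
    univariate function phi_ij(x) = sum_k coef[i,j,k] * bump_k(x),
    and output node j sums phi_ij(x_i) over all inputs i."""
    def __init__(self, n_in, n_out, n_basis=8, seed=0):
        rng = np.random.default_rng(seed)
        self.grid = np.linspace(-2, 2, n_basis)   # fixed basis centres
        # one coefficient per (input, output, basis function);
        # these are the parameters training would adjust
        self.coef = 0.1 * rng.standard_normal((n_in, n_out, n_basis))

    def forward(self, x):  # x: (batch, n_in)
        # evaluate every basis bump at every input: (batch, n_in, n_basis)
        bumps = np.exp(-((x[:, :, None] - self.grid) ** 2))
        # contract over inputs i and basis index k to get node sums
        return np.einsum("bik,ijk->bj", bumps, self.coef)

layer = ToyKANLayer(n_in=3, n_out=2)
x = np.random.default_rng(1).standard_normal((4, 3))
print(layer.forward(x).shape)  # (4, 2)
```

Contrast with an MLP, where the edge carries a single scalar weight and the nonlinearity lives at the node; in a KAN the nonlinearity lives on the edge itself, which is what makes the learned functions directly inspectable.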
Ever thought about how AI makes faces and generates human facial images? 🤖✨ Meet DCGANs (Deep Convolutional Generative Adversarial Networks)!

What are DCGANs?
DCGANs are a class of GANs (Generative Adversarial Networks) that use deep convolutional networks for both the generator and the discriminator. They are incredibly effective at generating realistic images by learning from a dataset of real images.

How do they work?
1. Generator: This neural network generates new images from random noise.
2. Discriminator: This neural network evaluates whether an image is real (from the dataset) or fake (generated by the generator).
3. Adversarial Process: The generator tries to create images that are indistinguishable from real images, while the discriminator tries to get better at distinguishing real from fake. This adversarial process improves both networks over time.

Applications of DCGANs:
- Image Generation: Creating realistic images for various uses, including art and entertainment.
- Data Augmentation: Generating additional training data for machine learning models.
- Super-Resolution: Enhancing the resolution of images.

I've created a Google Colab notebook where you can see the code and try it out for yourself! Check it out and dive into the fascinating world of AI-generated images. 🌐

[Explore the DCGANs Google Colab Notebook](https://lnkd.in/gahjv6eM)

Let’s unlock the magic of AI together! 🚀

#AI #MachineLearning #DeepLearning #DCGANs #GenerativeModels #ArtificialIntelligence #DataScience #TechInnovation #ColabNotebook #LearnAI
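To show the two networks side by side, here is a compact PyTorch sketch of the DCGAN pair for 64x64 RGB images. The layer sizes are illustrative and are not taken from the linked notebook; the training loop (alternating discriminator and generator updates with a binary cross-entropy loss) is omitted for brevity.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 100-dim noise vector to a 64x64 RGB image by repeatedly
    upsampling with transposed convolutions (the DCGAN recipe)."""
    def __init__(self, z_dim=100):
        super().__init__()
        def up(i, o):  # upsampling block: doubles spatial resolution
            return nn.Sequential(nn.ConvTranspose2d(i, o, 4, 2, 1, bias=False),
                                 nn.BatchNorm2d(o), nn.ReLU(True))
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0, bias=False),  # 1x1 -> 4x4
            nn.BatchNorm2d(256), nn.ReLU(True),
            up(256, 128), up(128, 64), up(64, 32),                # -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh())        # -> 64x64

    def forward(self, z):
        return self.net(z.view(-1, z.size(1), 1, 1))

class Discriminator(nn.Module):
    """Mirror of the generator: strided convolutions downsample the
    image to a single real/fake logit."""
    def __init__(self):
        super().__init__()
        def down(i, o):  # downsampling block: halves spatial resolution
            return nn.Sequential(nn.Conv2d(i, o, 4, 2, 1, bias=False),
                                 nn.BatchNorm2d(o), nn.LeakyReLU(0.2, True))
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 64 -> 32
            down(32, 64), down(64, 128), down(128, 256),          # -> 4x4
            nn.Conv2d(256, 1, 4, 1, 0))                           # 4x4 -> logit

    def forward(self, x):
        return self.net(x).view(-1)

z = torch.randn(8, 100)
fake = Generator()(z)                           # (8, 3, 64, 64)
print(fake.shape, Discriminator()(fake).shape)  # ... torch.Size([8])
```

During training, the discriminator's logit drives both losses: the discriminator learns to push it up on real images and down on fakes, while the generator learns to fool it.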
🗓 Save the Date! On June 10, Professor Chris Frederick will do a deep dive into LLMs and Generative AI, where you will:
▶ Learn what Generative AI is
▶ Understand how to write a program to use Generative AI in your business
▶ Leave with code samples that you can adapt to your specific use case
Register below to join this free virtual event! https://lnkd.in/dirj4stm
AI Revolution: Hands on with LLM's hosted by Notre Dame
eventbrite.com
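As a taste of the "write a program to use Generative AI" portion, here is a minimal sketch using the OpenAI Python client. These are not Professor Frederick's samples; the model name and prompts are placeholders, and the snippet assumes the `openai` package is installed and an `OPENAI_API_KEY` is set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A small business task: summarise a piece of customer feedback.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model fits your budget
    messages=[
        {"role": "system", "content": "You summarise customer feedback in one sentence."},
        {"role": "user", "content": "Shipping was slow, but support resolved my issue quickly."},
    ],
)
print(response.choices[0].message.content)
```

Adapting it to your own use case is usually just a matter of changing the system prompt and feeding in your own data.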
Just completed "Prompt Engineering 101" with LearningMate. Looking forward to applying these insights in AI and machine learning. #AI #PromptEngineering
🤖 The Turing Test: A Milestone in AI Development 🤖

The Turing Test, proposed by Alan Turing in 1950, was designed to answer a fundamental question: can machines think? If, in a text-based conversation, a machine could convince a human that it too was human, it was said to have passed the test, long regarded as the gold standard for machine intelligence.

For decades, the Turing Test remained a benchmark for evaluating AI, often depicted in sci-fi as the ultimate hurdle for machines to achieve human-like intelligence. Yet in recent years we've witnessed a significant shift. Large language models like GPT-4 have reached a point where they can convincingly pass the Turing Test in many scenarios.

Using the OpenAI foundation models as a benchmark, one could argue that the Turing Test was definitely passed sometime between GPT-3 in 2020 and GPT-4 in 2023. While GPT-3 was impressive, it sometimes struggled to maintain coherent, contextually relevant conversations over extended interactions, particularly on complex topics or those requiring deep reasoning. GPT-4, however, has made substantial advances in these areas, demonstrating an ability to handle nuanced discussions, provide more accurate responses, and maintain context over long exchanges. The development of longer context windows will only further enhance the 'human-like' reasoning of these large models.

What's fascinating, and somewhat surprising, is that this breakthrough hasn't generated the level of public discourse one might expect. The achievement of what was once thought of as a distant, almost sci-fi milestone seems to be seen as an incremental step rather than the paradigm shift it truly represents.

#ArtificialIntelligence #MachineLearning #TuringTest #GPT4 #AI #TechInnovation #AIResearch #FutureOfWork #Innovation #DeepLearning
Quote (from the conclusion at the end of the article): "No matter how sophisticated the computation is, how fast the CPU is, or how great the storage of the computing machine is, there remains an unbridgeable gap (a 'humanity gap') between the engineered problem solving ability of machine and the general problem solving ability of man."

That's it! What is called "AI" today is just a bunch of math. If the problem at hand can be explored and solved by math, then today's AI engines will provide some answers (+/- data biases). But there are gigantic numbers of problems that do not fall into that category.

Way back, symbolic processing attempted to go beyond "brute-force AI". It fell short.

Bottom line: we are still trying to understand "knowledge", "reasoning", "problem solving", "decision making" and, of course, "creativity". In other words, we are light-years away from general AI.
Great article on AI by the famous J. M. Bishop. "... all the impressive achievements of deep learning amount to just curve fitting." The key, as Pearl suggests, is to replace "reasoning by association" with "causal reasoning": the ability to infer causes from observed phenomena. "We need to stop building computer systems that merely get better and better at detecting statistical patterns in data sets, often using an approach known as 'Deep Learning', and start building computer systems that from the moment of their assembly innately grasp three basic concepts: time, space, and causality."
Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It
frontiersin.org
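Pearl's association-versus-intervention distinction is easy to demonstrate numerically. Here is a toy simulation (not from Bishop's article) in which a hidden confounder Z drives both X and Y: curve fitting on observational data finds a strong X-to-Y relationship even though X has no causal effect at all, while simulating the intervention do(X) recovers the true (null) effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Confounded world: Z drives both X and Y; X itself has NO effect on Y.
Z = rng.standard_normal(n)
X = Z + 0.5 * rng.standard_normal(n)
Y = Z + 0.5 * rng.standard_normal(n)

# Association ("curve fitting"): regressing Y on X finds a strong slope...
slope_obs = np.cov(X, Y)[0, 1] / np.var(X)
print(f"observed slope of Y on X: {slope_obs:.2f}")   # ~0.8, looks causal

# Intervention do(X): set X by fiat, breaking its dependence on Z.
X_do = rng.standard_normal(n)               # X is no longer caused by Z
Y_do = Z + 0.5 * rng.standard_normal(n)     # Y is unchanged by the intervention
slope_do = np.cov(X_do, Y_do)[0, 1] / np.var(X_do)
print(f"interventional slope: {slope_do:.2f}")        # ~0.0, the true effect
```

No amount of extra observational data fixes the first estimate; only knowledge of the causal structure, or an actual intervention, does. That is the gap Bishop and Pearl are pointing at.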