It's well known that some problems can be solved on a quantum computer exponentially faster, in terms of computation time, than on a classical one. There is, however, a more subtle way in which quantum computers are more powerful: there is a problem that can be solved by a quantum circuit of constant depth but cannot be solved by any classical circuit of constant depth (with bounded fan-in). This notebook works through that problem. https://lnkd.in/gBXdyFuz
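The classic example of such a separation is the 2D Hidden Linear Function problem of Bravyi, Gosset, and König (Science, 2018), solved by a constant-depth Clifford circuit that no constant-depth bounded-fan-in classical circuit can match. The notebook's exact circuit isn't reproduced here; the sketch below (assuming Qiskit is installed) only illustrates what "constant depth" means: the depth stays fixed as the qubit count grows.

```python
# Toy illustration of a constant-depth circuit: three gate layers plus a
# measurement layer, regardless of how many qubits we use. Not the notebook's
# circuit, just a depth demonstration.
from qiskit import QuantumCircuit

def shallow_circuit(n: int) -> QuantumCircuit:
    """Build a circuit on n qubits whose depth does not grow with n."""
    qc = QuantumCircuit(n, n)
    qc.h(range(n))                      # layer 1: Hadamards, all in parallel
    for i in range(0, n - 1, 2):
        qc.cz(i, i + 1)                 # layer 2: disjoint CZ gates, all in parallel
    qc.h(range(n))                      # layer 3: Hadamards again
    qc.measure(range(n), range(n))      # layer 4: measurement
    return qc

print(shallow_circuit(8).depth(), shallow_circuit(64).depth())  # 4 and 4
```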
-
This article dives into the fascinating intersection of hyperdimensional computing, direct memory access, and PCA to enhance the efficiency of knowledge graphs. By combining these advanced techniques, the author provides an insightful look into how we can streamline and optimize data-intensive processes within complex computing systems. The exploration of direct memory access to reduce bottlenecks in knowledge graph computation is particularly enlightening, offering readers a pathway to faster and more efficient processing. This piece is a must-read for those interested in cutting-edge data science and AI, providing clear explanations and innovative solutions for computational challenges. #HyperdimensionalComputing #KnowledgeGraphs #DataOptimization #DirectMemoryAccess #MachineLearning #DataScience
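As a rough illustration of the ingredients the article names, here is a hypothetical sketch (not the author's code; the 10,000-dimensional bipolar encoding and the toy triples are my own assumptions) of how hyperdimensional computing can encode knowledge-graph edges, and how PCA can then compress the resulting hypervectors:

```python
# Hyperdimensional computing sketch: random bipolar hypervectors, binding by
# elementwise product, bundling by sign-of-sum, similarity by cosine.
import numpy as np

D = 10_000
rng = np.random.default_rng(42)
hv = lambda: rng.choice([-1, 1], size=D)         # random bipolar hypervector

# Encode knowledge-graph triples (subject, relation, object) by binding,
# then bundle several edges into one graph vector.
alice, knows, bob, likes, carol = (hv() for _ in range(5))
edge1 = alice * knows * bob
edge2 = alice * likes * carol
graph = np.sign(edge1 + edge2 + hv())            # extra vector breaks ties

# Query: who does Alice know? Unbind, then compare against candidates.
probe = graph * alice * knows
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos(probe, bob), cos(probe, carol))        # ~0.5 vs ~0.0

# PCA (via SVD) compresses a stack of such hypervectors.
X = np.stack([edge1, edge2, graph]).astype(float)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
print((Xc @ Vt[:2].T).shape)                     # (3, 2): two components kept
```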
-
The algorithm is the "how." The LLM itself is not a scientific revolution; it's technological know-how, and it will be figured out pretty quickly. We already know it has three basic components: algorithm, compute, and data. The process of getting there is the classic one: catch up, learn, iterate, reverse-engineer, and improvise.
-
The first physical system to learn nonlinear tasks without a traditional computer processor https://buff.ly/3WSS6Ll
-
The CLEF paper entitled "QuantumCLEF 2025 - The Second Edition of the Quantum Computing Lab at CLEF" has been accepted to #ecir2025. The QuantumCLEF lab is part of the CLEF Initiative (Conference and Labs of the Evaluation Forum). It will feature three tasks: Feature Selection, Instance Selection, and Clustering. Participants will have access to real quantum computers to solve these tasks, measuring the performance of quantum technologies against traditional ones. The paper was written in collaboration with Maurizio Ferrari Dacrema, Paolo Cremonesi, Washington Cunha, Marcos André Goncalves, and Nicola Ferro. For more information, see https://lnkd.in/df949Q_p, where we will soon provide more details about the tasks.
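For context, QuantumCLEF tasks have typically been cast as QUBO problems so they can run on quantum annealers; the lab's actual formulation may differ. The sketch below shows one common way to express feature selection as a QUBO (relevance rewards on the diagonal, redundancy penalties off it), solved here by classical brute force rather than on an annealer:

```python
# Feature selection as a QUBO: minimize z^T Q z over binary z, where z_i = 1
# means "keep feature i". Illustrative only; not the lab's formulation.
import itertools
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 6))                      # 6 candidate features
y = X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=200)
X[:, 2] = X[:, 0] + 0.05 * rng.normal(size=200)    # feature 2 duplicates 0

relevance = np.array([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in range(6)])
redundancy = np.abs(np.corrcoef(X, rowvar=False))

alpha = 2.0                            # large enough to veto the duplicate
Q = (alpha / 2) * redundancy           # off-diagonal: penalize redundant pairs
np.fill_diagonal(Q, -relevance)        # diagonal: reward relevant features

best = min(itertools.product([0, 1], repeat=6),
           key=lambda z: np.asarray(z) @ Q @ np.asarray(z))
print(best)  # keeps relevant, non-redundant features (never both 0 and 2)
```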
-
Computing high-degree polynomial gradients in memory https://lnkd.in/gUWXbXDV
-
Imagine there is a really hard problem that takes forever to solve, like finding the best way to deliver packages in a big city. Kipu Quantum, a company that builds algorithms for computers that exploit the strangeness of quantum mechanics, says it has a new way to solve this kind of problem much faster. A normal computer works through the problem step by step, following one set of rules. A quantum computer can explore many candidate solutions at once through superposition and interference, and Kipu Quantum has found a new way to use that ability on these hard optimization problems. They tested their method on a large IBM quantum computer, and it performed considerably better than other methods. They also simulated it for an even bigger machine that isn't finished yet, and the results suggest it will work there too. This could be a sign that quantum computers are finally getting good enough to tackle real-world problems. Kipu Quantum's method is still being studied, but it has the potential to revolutionize fields like logistics, medicine, and chemistry.
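To make "takes forever" concrete: with k delivery stops there are k! possible visiting orders, so exhaustive search blows up fast. The toy below (plain Python, nothing to do with Kipu Quantum's actual algorithm) brute-forces the best route through 8 random stops; at 20 stops the same approach would need roughly 2.4 quintillion evaluations.

```python
# Combinatorial explosion in route planning: exhaustive search over all
# visiting orders is feasible at 8 stops and hopeless at 20.
from itertools import permutations
from math import factorial
import random

random.seed(0)
stops = [(random.random(), random.random()) for _ in range(8)]

def route_length(order):
    """Total Euclidean length of visiting the stops in this order."""
    return sum(((stops[a][0] - stops[b][0]) ** 2 +
                (stops[a][1] - stops[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:]))

best = min(permutations(range(8)), key=route_length)   # 8! = 40,320 routes
print(best, round(route_length(best), 3))
print(f"20 stops would mean {factorial(20):,} routes")  # ~2.4 quintillion
```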
-
Can QML be used to fine-tune LLMs and deploy them on classical computers? Building on the Quantum-Train framework (or Quantum Parameter Generation) introduced earlier this year by our team, we propose Quantum Parameter Adaptation (QPA). This approach leverages QML to generate parameters for Parameter-Efficient Fine-Tuning (PEFT) methods tailored to LLMs. We applied QPA to fine-tune GPT-2 and Gemma-2B, focusing on PEFT techniques such as LoRA, DoRA, Prefix-Tuning, and Adapters. Our findings show that QPA not only reduces the number of trainable parameters in PEFT methods but also maintains, and sometimes slightly improves, the performance of LLMs on text generation tasks. Since QPA uses quantum circuits solely for parameter generation, it avoids the challenges associated with quantum data encoding. Additionally, the resulting QPA fine-tuned model is a fully classical LLM, meaning it can be deployed on classical computers without requiring quantum resources at inference time. arXiv: https://lnkd.in/gu6cmdyB
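As a very rough sketch of how "quantum circuits solely for parameter generation" can work (my reading of the idea, not the paper's implementation: the toy circuit, the probability-to-weight mapping, and all shapes below are illustrative assumptions), a small parameterized circuit's output distribution is reshaped into LoRA's low-rank matrices, so inference needs no quantum hardware:

```python
# Quantum-parameter-generation sketch: n trainable circuit angles produce a
# 2**n probability vector, which is mapped to LoRA's A and B matrices.
# The real method trains a learned mapping; np.resize here is a placeholder.
import numpy as np

n_qubits = 8                          # 2**8 = 256 probabilities available
rank, d_in, d_out = 4, 64, 64         # LoRA shapes: A (rank, d_in), B (d_out, rank)

rng = np.random.default_rng(0)
theta = rng.normal(size=n_qubits)     # the only "quantum" trainable parameters

def circuit_probs(theta):
    """Stand-in for a parameterized circuit: product of single-qubit
    rotations, returning all 2**n basis-state probabilities."""
    p = np.array([1.0])
    for t in theta:
        p = np.kron(p, np.array([np.cos(t / 2) ** 2, np.sin(t / 2) ** 2]))
    return p                          # shape (256,), sums to 1

def make_lora_weights(theta):
    probs = circuit_probs(theta)
    need = rank * d_in + d_out * rank                 # 512 classical weights
    flat = np.resize(probs, need) - 1.0 / probs.size  # tile and center
    A = flat[: rank * d_in].reshape(rank, d_in)
    B = flat[rank * d_in:].reshape(d_out, rank)
    return A, B

A, B = make_lora_weights(theta)
delta_W = B @ A                       # low-rank update for a frozen layer
print(delta_W.shape)                  # (64, 64): fully classical at inference
```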
-
Machine learning blazes path to reliable Quantum Computers - https://lnkd.in/eqK2AQXj #quantum #thequantumfacts #quantumcomputing #quantumcomputers #quantumtechnology #quantumcomputer #quantumleap #quantumphysics #quantumsecurity
-
https://lnkd.in/gDeQ-a-b Google's new Willow quantum chip performed a computation in under five minutes that would take one of today's fastest supercomputers 10 septillion years. If you want to write it out, that's 10,000,000,000,000,000,000,000,000 years.
-
This is not the first paper showing that multiple rollouts can improve LLM performance, and the improvement is not surprising: multiple rollouts spend significantly more compute per problem. We could do even better by saving the LLM's learning experience in a database, reflecting on that experience, and retrieving the relevant reflections for every new problem to be solved; but that would cost even more compute. At the end of the day, computing power is still the biggest bottleneck. That's why Nvidia is worth so much. To reach the next level of intelligence, though, we still need much better algorithms that make LLM computation orders of magnitude more efficient, and we need to scale computing power up by orders of magnitude as well.
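A minimal sketch of the multiple-rollout (best-of-n) pattern described above; the generate and score functions are hypothetical stand-ins for an LLM call and a verifier or reward model:

```python
# Best-of-n rollouts: sample n candidate answers, score each, keep the best.
# Quality improves with n, but so does compute, roughly linearly.
import random

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Placeholder for one LLM rollout returning a candidate answer."""
    return f"answer-{random.randint(0, 9)}"

def score(prompt: str, answer: str) -> float:
    """Placeholder verifier; in practice a reward model or unit tests."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """n rollouts cost ~n inference passes: compute traded for quality."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("Plan the delivery route."))
```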