🔥Natron Project advances🔥

Our team at Beholder continues to make progress on the Natron project, whose first study site is located in Tanzania. The goal of the project is to leverage advanced technologies to validate existing gold deposits and discover new sources through modern data analysis and machine learning. We have already prepared the first version of the pipeline we will use to train our neural network to make more accurate predictions.

As part of this project, we are actively implementing 3D inversions of gravity anomalies, which lets us visualize and analyze the structural elements of the terrain in far greater detail than traditional topographic maps. This enables us to detect complex features and geological structures crucial for identifying potential deposits.

In addition, the team is developing a predictive model based on advanced AI algorithms. Our approach analyzes not only data at the pixel level but also spatial patterns, which yields more reliable predictions. We have prepared predictors for the neural network, which will be trained with a method that accounts for spatial patterns in the distribution of gravity anomalies and other indicators.

An essential part of our work is validating the model on a site in the USA, where we have access to verified ground data. This will help us assess the model's accuracy and reliability, making the Natron project even more scientifically grounded and effective in practice.

Stay tuned🔥
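The "spatial patterns, not just pixels" idea mentioned above is commonly implemented by feeding a model small neighborhoods of a gridded predictor rather than single cell values. A minimal NumPy sketch of that patch extraction step (the grid and all sizes here are synthetic placeholders, not Natron project data):

```python
import numpy as np

def extract_patches(grid, k=3):
    """Collect k x k neighborhoods around every interior cell so a model
    sees spatial context, not just single-pixel values."""
    r = k // 2
    h, w = grid.shape
    patches = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            patches.append(grid[i - r:i + r + 1, j - r:j + r + 1].ravel())
    return np.array(patches)

# Hypothetical 10x10 gravity-anomaly grid (mGal); values are synthetic.
rng = np.random.default_rng(0)
grid = rng.normal(0.0, 1.0, size=(10, 10))

X = extract_patches(grid)   # one 9-feature row per interior cell
print(X.shape)              # (64, 9): 8x8 interior cells, 3x3 context each
```

Each row of `X` can then be paired with a known-deposit label for supervised training; a convolutional network generalizes this idea by learning which neighborhood patterns matter.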
Beholder’s Post
More Relevant Posts
-
Artificial Intelligence for Geology: Machine Learning and Computer Vision

Humans have limited analytical power, especially when faced with processing volumes of data that surpass our cognitive capacity. Artificial intelligence (AI) gives us the ability to harness vast databases, analyze them, and generate new insights. A crucial branch of AI is machine learning, a discipline based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. Another fundamental branch is computer vision, which allows us to extract quantifiable information from digital images efficiently.

In geology, it is common to encounter large digital databases. In any deposit we have information from wells, electrical logs, analyses, and samples, which is often not fully utilized. Our expertise focuses on applying these new AI tools to extrapolate relevant information from specific data, such as that obtained from a core. Through advanced machine learning and computer vision techniques, we can explore and make the most of these datasets, offering valuable insight that reduces the risk associated with hydrocarbon exploration and production.
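Extrapolating a property measured on a few cores to the rest of a field is, at its simplest, supervised regression on well-log features. A hedged NumPy-only sketch with entirely synthetic data (the feature names, the linear relation, and all sizes are illustrative assumptions, not a real workflow):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for well-log features (e.g. gamma ray, resistivity,
# sonic); real column names, units, and physics depend on the dataset.
logs = rng.normal(size=(200, 3))

# Porosity "measured on cores" for a labeled subset; here generated from a
# made-up linear relation plus measurement noise.
true_w = np.array([0.05, -0.02, 0.08])
porosity = logs @ true_w + 0.15 + rng.normal(scale=0.005, size=200)

# Fit ordinary least squares on 150 labeled wells, extrapolate to the rest.
A = np.hstack([logs[:150], np.ones((150, 1))])
coef, *_ = np.linalg.lstsq(A, porosity[:150], rcond=None)

pred = np.hstack([logs[150:], np.ones((50, 1))]) @ coef
err = np.abs(pred - porosity[150:]).mean()
```

Real core-to-log calibration would use nonlinear models and careful QC, but the train-on-cored-wells, predict-on-uncored-wells structure is the same.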
-
KAN: Kolmogorov-Arnold Networks
https://lnkd.in/eTNPQNXx

Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Layer Perceptrons (MLPs). While MLPs have fixed activation functions on nodes ("neurons"), KANs have learnable activation functions on edges ("weights"). KANs have no linear weights at all -- every weight parameter is replaced by a univariate function parametrized as a spline. We show that this seemingly simple change makes KANs outperform MLPs in terms of accuracy and interpretability. For accuracy, much smaller KANs can achieve comparable or better accuracy than much larger MLPs in data fitting and PDE solving. Theoretically and empirically, KANs possess faster neural scaling laws than MLPs. For interpretability, KANs can be intuitively visualized and can easily interact with human users. Through two examples in mathematics and physics, KANs are shown to be useful collaborators helping scientists (re)discover mathematical and physical laws. In summary, KANs are promising alternatives for MLPs, opening opportunities for further improving today's deep learning models which rely heavily on MLPs.
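The "learnable activation functions on edges" idea can be sketched with a fixed basis and learnable coefficients. The toy layer below substitutes Gaussian bumps for the paper's B-splines, uses random untrained coefficients, and is an illustration of the structure only, not the authors' implementation:

```python
import numpy as np

class KANLayer:
    """One Kolmogorov-Arnold layer: each edge (input i -> output j) applies
    its own learnable univariate function, here a sum of Gaussian bumps
    (a simplification; the paper parametrizes edges as splines)."""

    def __init__(self, n_in, n_out, n_basis=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.centers = np.linspace(-2, 2, n_basis)           # basis knots
        self.coef = rng.normal(scale=0.1,
                               size=(n_in, n_out, n_basis))  # learnable

    def __call__(self, x):
        # x: (batch, n_in). Evaluate every edge function, then sum over
        # inputs -- "activations on edges, plain addition on nodes".
        bumps = np.exp(-(x[..., None] - self.centers) ** 2)  # (b, in, basis)
        edge_out = np.einsum('bik,iok->bio', bumps, self.coef)
        return edge_out.sum(axis=1)                          # (batch, n_out)

layer = KANLayer(n_in=2, n_out=3)
y = layer(np.random.default_rng(1).normal(size=(5, 2)))
print(y.shape)  # (5, 3)
```

Training would fit `coef` by gradient descent; note there is no weight matrix anywhere, matching the abstract's "no linear weights at all".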
-
Interesting article about data shortages and synthetic data when training LLMs and other models (e.g., CNNs). Hopefully someday we'll have a better approach than shoveling more data into the model to improve accuracy. How have people in my network dealt with this challenge? https://lnkd.in/gQrF6Jmp
Another month, another look at how artificial intelligence is pushing limits we never knew we had. This time, it’s the amount of useful data in the world for training AI systems. Did you think there was an inexhaustible supply? There is not. We look at the types of data facing the greatest shortages and what companies are doing to compensate. Also in this issue: Graphene, a material that could potentially transform multiple industries, is getting renewed attention 20 years after it was discovered; a new way to design buildings to prevent catastrophic collapse during earthquakes; and a look at the crews who take care of all those cables crisscrossing the ocean floor. Read our latest monthly newsletter for more. Image Credit: Ian Lyman / Midjourney https://lnkd.in/eSaPmQku
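One stopgap practitioners use against the data shortage described above, at least for vision models like CNNs, is label-preserving augmentation: manufacturing extra training samples from the ones you have. A minimal NumPy sketch (array sizes are arbitrary placeholders):

```python
import numpy as np

def augment(images):
    """Expand a small labeled image batch with label-preserving transforms
    (horizontal flip and 90-degree rotations) -- a common stopgap when
    real training data is scarce."""
    out = [images]
    out.append(images[:, :, ::-1])               # horizontal flip
    for k in (1, 2, 3):                          # 90/180/270-degree turns
        out.append(np.rot90(images, k=k, axes=(1, 2)))
    return np.concatenate(out, axis=0)

batch = np.random.default_rng(2).normal(size=(10, 32, 32))
bigger = augment(batch)
print(bigger.shape)  # (50, 32, 32): 5x the samples, no new labels needed
```

Fully synthetic data (generated by simulators or other models) pushes the same idea further, with the known risk that models trained mostly on their own outputs degrade.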
-
Who is really an expert on AI?

Let's start with an example: most of us have to beg, borrow, and steal computer time. And it is graciously awesome to even have a chance on advanced computing. Think telescope time: that kind of thing is booked up, scheduled in advance, and your project needs to be meaningful. So what are you an expert on in your telescope time? Perhaps the Crab Nebula, and if so, amazing project; mankind loves star formation.

So what are you an expert on in AI? Well, if your telescope time was a black-box neural net in Eugene, Oregon, just smashing the whiteboards with a group of PhDs who are more eager to hear your thoughts than another regurgitation of a reference manual or the hyperbole of a press release... Perhaps the little engine that could still has some work to do.

So let's digress to machine learning, data mining, and trippy Punnett squares, and refer to the very-much-uninvented, hasn't-happened-yet thing as... AI // Artificial Intelligence. Throw thirty billion dollars at anyone's algo and lame-ass schema and you will get something interesting, but not AI.
-
Natural vs. Artificial Intelligence

I often wonder where science is going. There has been a sea change between the current understanding of the purpose and definition of science and how it started. Alchemy, a body of science, began with the search for immortality, and there were many other reasons for studying science. Science, in short, is a utilitarian body of knowledge about the laws of nature. It is limited, like a bucket of water drawn from the infinite store of water in rivers. As needs increase, this knowledge increases accordingly, and in this way science continues forever.

Artificial intelligence is a type of machine intelligence, or machine learning, that logically connects thoughts. Intelligence will reduce the need for memory because it will be more precise and standard, roughly equal for most people. I was thinking that the kind of intelligence used in artificial intelligence is crucial. Artificial intelligence that can study data, sensory inputs, language, and images can also come to understand the language and behaviour of animals and plants. This will improve human-animal relationships, and it can do wonders beyond the limited scope of our current technology.

I have read that Europeans at one time did not consider Africans human; people from Africa were used as slaves and exhibited in zoos. In that sense, the Europeans of those days were trained in an artificial kind of intelligence but lacked natural human intelligence. The same caution exists even now: artificial intelligence should not create divides, but should be integrative, inclusive, and humane. Science is a double-edged sword, and its effect depends on its users and on how purpose and intention are defined.

Regards,
Krishna Gopal Misra
-
The artificial intelligence industry is often compared to the oil industry: both extract raw materials (data and oil, respectively) and refine them into valuable commodities. In an increasingly environmentally conscious world, it is interesting to note that the analogy extends even further: deep learning, like fossil fuel production, has a significant environmental impact.

According to a study by researchers at the University of Massachusetts, training a large-scale AI model can generate over 626,000 pounds of carbon-dioxide-equivalent emissions. That is nearly 5x the lifetime emissions of the average American car, including its production. Here are some carbon-footprint benchmarks, in lbs of CO2 equivalent:

💠 Round-trip flight between NY and SF (1 passenger): ~2,000
💠 Human life (avg., 1 year): ~11,000
💠 American life (avg., 1 year): ~36,000
💠 US car including fuel (avg., 1 lifetime): ~126,000
💠 Transformer (213M parameters) with neural architecture search: ~626,000

Historically, human technological advancement has often prioritized functionality and convenience over efficiency, safety, and environmental impact. We are now increasingly aware of these issues, and data like this can guide us in responsibly developing promising technologies like AI. Find the source in the comments for more information.
-
Kolmogorov-Arnold Networks (KANs) are very interesting! The Kolmogorov-Arnold representation theorem essentially states that any multivariate continuous function can be represented as a superposition of continuous functions of a single variable, combined by addition. Building on this, KANs differ from current deep neural networks, which are based on multilayer perceptrons (MLPs) with static activation functions: KANs use learnable univariate functions that act as both weights and activation functions. This makes KANs more flexible and efficient; the authors claim their models can be 100 times more parameter-efficient. They do have limitations, however: they are computation-heavy, so training and inference can be quite slow. Still, this is a genuine innovation and a scientific step forward, and it will be interesting to watch how this line of research improves and where it leads.