Hinote Strategy LLC’s Post


Clint Hinote

Disruptive Strategist | Matching Emerging Tech to Defense Needs | Transforming Organizations for the China Challenge | Growing Leaders to Navigate an Uncertain Future

#ConnectingTech Want a summary of last week's #AIforDefense Summit? Here is my (humble) attempt:

- Large Language Models - #LLMs (e.g. #ChatGPT or #llama2) & similarly trained models are not magic. They are 'trained' on large amounts of data, such as articles, papers & books. Training involves breaking language down into "tokens" (you might think of these as common character pairs, like 'qu', or common English syllables, like "er"). Using these tokens, the models establish & catalog patterns in language. This allows them to use math to predict the most likely next word or phrase in response to a prompt, like "who is the better singer, Ed Sheeran or Taylor Swift?" Try it.

- For a long time, LLMs were very clunky & responded in gibberish. Developers, however, stuck with their core thesis: with enough data plus enough computing power to establish mathematical linkages throughout language, a model can construct responses in human language that are useful. That thesis is now proven.

- Most experts now believe the next evolution in LLMs is a move from general language models to more specific applications, trained on specialized datasets & able to provide very good answers within a narrow field. These will be bad at answering who the best singer is, but they would be excellent with "identify publicly traded companies whose stock price and P/E ratio rose during periods of high inflation (CPI > 5%)." That's a good question for #BloombergGPT.

- These models are not human. When we ascribe human conditions to them - like "hallucinations" - we obscure what is really going on.

- Since these models are probabilistic, not deterministic, we may never be able to test them using techniques developed for deterministic systems. Establishing trust will require other approaches, probably testing how they perform in interactions with humans of different skill levels.
- It's clear that major productivity gains are possible when these models are used by humans. This is especially true when someone with general knowledge uses a model to do very specialized things.

- There is wide agreement that LLMs are useful in govt & military applications, but the govt doesn't know what to ask for, & companies don't know what the govt wants. In many cases, the two are talking past one another. Added to this, established requirements & acquisition processes don't appear well-suited to getting this tech to the field. Govt-sponsored experiments can help, but many companies with advanced tech feel like they are on the outside looking in.

What I think I think: We will see rapid development of specialized language models. The results will be compelling. Govt adoption will be slow unless something changes in requirements/acquisition. A DoD hedge or disruption portfolio can help.

For my fellow summit participants, what did I miss?

Defense Innovation Unit (DIU) Defense Entrepreneurs Forum Hanna Price Luis Hernandez Joe Chapa
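The "catalog patterns, then predict the most likely next word" idea above can be sketched in a few lines. This is a deliberately toy illustration, not how real LLMs work (they use learned token embeddings and neural networks, not raw counts, and tokens are sub-word units rather than whole words): it counts which word most often follows which in a tiny corpus, then "predicts" by picking the highest count. The corpus and function names are hypothetical.

```python
from collections import Counter, defaultdict

# Tiny stand-in for the "large amounts of data" a real model trains on.
corpus = (
    "the model predicts the next word "
    "the model learns patterns in language "
    "patterns in language help the model predict the next word"
).split()

# Catalog the patterns: count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word after `word`."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "model" follows "the" most often in this corpus
```

Scale the corpus up by many orders of magnitude, replace word pairs with sub-word tokens and counts with a trained neural network, and you have the rough shape of the thesis the post describes.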

