🚀 New Article Alert for Laravel Developers! 🚀 We’re thrilled to share our latest guide on implementing a Laravel AI component with support for multiple large language models (LLMs). Whether you’re looking to expand your application’s capabilities or dive into the world of AI with Laravel, this step-by-step guide has got you covered. Here’s what you’ll learn: 🔧 How to seamlessly integrate AI components into your Laravel projects. 💡 Flexibility in choosing the right LLM for your specific needs. ⚡️ Best practices to optimize performance and ensure a smooth implementation. If you’re working with Laravel and want to leverage AI, this is a must-read! 👉 Check out the full article: https://lnkd.in/dqMRUsk5 As always, our goal is to help you optimize your development process. We’re here to support you, so feel free to share your thoughts or questions! #Laravel #AI #Tech #WebDevelopment #Inspector
Inspector’s Post
-
Hi Friends, Excited to share my latest blog on the integration of Artificial Intelligence with Laravel for crafting intelligent web applications. Read Here: https://lnkd.in/d8FZJ_CE #AI #Laravel #WebDev #ArtificialIntelligence #WebDevelopment #TechBlog
Integrating Artificial Intelligence with Laravel for Intelligent Web Applications
https://insidetechie.blog
-
Calling AI APIs with JavaScript: A Simple Example (Soon with your Center of Excellence: October 2024) Artificial intelligence (AI) is revolutionizing the way we interact with technology, and AI APIs are making it easier than ever to integrate AI into our applications. In this post, I'll show you how to call an AI API using the JavaScript fetch API. Here's an example of how to call the OpenAI API's text completion endpoint and ask for a text about IT business challenges:

JavaScript

const fetch = require('node-fetch');

const API_ENDPOINT = 'https://lnkd.in/eF9jZxdA';
const API_KEY = 'YOUR_API_KEY';
const prompt = 'Write a text related to IT business challenges.';

// The OpenAI completions endpoint expects a JSON body,
// so the request declares a JSON content type.
const headers = {
  'Authorization': `Bearer ${API_KEY}`,
  'Content-Type': 'application/json',
};

const body = JSON.stringify({
  prompt: prompt,
  max_tokens: 256,
  temperature: 0.7,
});

const requestOptions = {
  method: 'POST',
  headers: headers,
  body: body,
};

fetch(API_ENDPOINT, requestOptions)
  .then(response => response.json())
  .then(data => {
    if (!data.choices || data.choices.length === 0) {
      throw new Error('No completions returned');
    }
    const completion = data.choices[0].text;
    console.log(completion);
  })
  .catch(error => {
    console.error(error);
  });

This script will do the following:
- Call the OpenAI API's text completion endpoint
- Ask for a text related to IT business challenges
- Print the generated text

Here's a breakdown of the script:
- Import the fetch library: this is used to make HTTP requests from Node.js.
- Define the API endpoint: the URL of the OpenAI API endpoint that we want to call.
- Define your API key: the key that allows you to use the OpenAI API.
- Define the prompt: the text that you want the AI to generate a completion for.
- Create headers: these authenticate the request with the OpenAI API and declare the JSON content type.
- Build the JSON body: the prompt and generation parameters the API expects.
- Create request options: these specify the HTTP method, headers, and body to send.
- Make the request: this sends the request to the API and returns a response.
- Parse the response: this converts the response into a JavaScript object.
- Check for errors: this throws an error if no completions were returned.
- Get and print the completion: this extracts the generated text from the response object and prints it to the console.

The AI will return a completion with a maximum of 256 tokens and a temperature of 0.7. The temperature parameter controls the creativity of the generated text, with higher values resulting in more creative but potentially less coherent text. #sageuniversity #sagepartner
-
Who wants to pair with me on building a cool side-project for software developers? Here's the big idea: Developers often spend significant amounts of time trying to understand large and / or unfamiliar codebases, tracing through code history to debug issues and searching for relevant code. Git commit history can be a treasure trove of valuable information, but more often than not it is completely useless: in reality, most commit messages look like this: "fixed a bug" Yay. An AI-powered tool to re-analyse and re-summarize Git commits could dramatically improve developer productivity. Leveraging an LLM to guess the intent and summarise the code changes in each commit - even if far from perfect - would likely make the codebase much more accessible and understandable. Developers new to the project can quickly get up to speed on an unfamiliar codebase, with more historical context and a timeline that provides valuable insights. But wait, there's more. When debugging, it's often unclear which commit introduced a bug. Searching an index of AI-enhanced commit descriptions could help zero in on likely culprit commits much faster than manual search. Semantic code search powered by an LLM-generated index could allow more natural language queries to find relevant code snippets and understand how code evolved. For companies, this could mean faster and more productive onboarding of new developers, less time wasted on unnecessary code archeology and ultimately fewer bugs. It could be a powerful productivity multiplier for both new and experienced devs. My proposal for the technical approach, in a nutshell:
* Use an open-source or proprietary pre-trained code-aware model (e.g. Code Llama, OpenAI Codex, or just GPT-4 / Claude)
* To process a new codebase, go through all the branches and, for each commit, extract the diff and commit message
* Chunk the diff into manageable pieces if needed (modern models have rather large context windows, but still - some commits are monsters)
* Prompt the LLM with each diff chunk + commit message (plus the project description for context), asking it to summarize the likely intent of the code changes in concise natural language
* Aggregate the LLM's responses into a commit-level intent summary
* Index these summaries along with the diffs and commit metadata in a search engine like Elastic
* Build a clean, simple UI to search and explore this data, allowing queries in natural language that get embedded and semantically matched against the index. Also provide a CLI tool for easy command-line access.
Additional Ideas:
* Augment the LLM prompts with info from linked pull requests, issues, and documentation to get better context.
* Identify and surface insightful trends, e.g. files changed together frequently, common bug patterns, etc.
* Integrate with IDEs to surface relevant commits in-context. Develop plugins for PyCharm, VSCode, etc.
-
I think what a lot of people have intuitively figured out, but haven't noticed explicitly, is that using AI for greenfield projects feels much more useful than using it in an established codebase. From what I've seen, there are two main reasons for this:
1. Experienced engineers often work on systems that involve many different interacting parts. Current AI tools just aren't built for this kind of task.
2. AI models are trained on a broad range of data, which doesn't always match up with the specific, deep knowledge that experienced devs have built up over years. New devs are brought up while experienced devs are weighed down.
I'm going to focus on that first point in this post, because I think it's in part what's allowing less experienced devs to see things that more experienced devs aren't. AI models are getting pretty damn good, to the point where using Claude 3.5 rarely leaves me wanting more. AI tooling is the exact opposite. Working on greenfield projects that have grown, I've started to run into problems: it's becoming increasingly harder to give the AI enough context to get a good response. The changes I'm requesting touch more parts of the codebase, and it's tough to include all the relevant bits. For any given change to my web projects (like Django, for example), if I want a solution quickly I need:
1. The relevant HTML
2. Any blocks of other content I'm including
3. Relevant CSS
4. Relevant JS
5. Sometimes an example of a similar feature implemented in another HTML, CSS, or JS file, to maintain consistency
6. The view
7. Any relevant imports
8. Similar views that may have implemented similar patterns to what I need to happen
9. Any other functions that the view calls
10. The URL structure
11. Any schemas that might be relevant
12. Database models
And that's not even counting things like repo structure, ownership, git diffs, or (for more complicated scenarios) call graphs.
More relevant context means better AI output, but getting that context is a pain, and for best results it should all be in a single message. I got fed up with this and made a Neovim shortcut to collect these snippets in a haphazard kind of way that grabs code snippets, file info, and generates a file structure at the top of a temporary buffer based on the files that snippets are grabbed from. It's not perfect, but it helps get more context to the AI without spending ages adding all the metadata. Just by using this there has been a noticeable improvement in how often I am able to get zero-shot solutions out of Claude 3.5. At this point I am just doing a manual, informed RAG. I would like to automate this process, so to that end I ask "How can I automatically find all of the snippets that are relevant to the feature I am trying to implement?" I cover the rest of my thoughts on this in a post on my blog: https://lnkd.in/gtAmyx7a
Using Agents as Retrofit Solutions to Established Codebases
thelisowe.com
-
Here's how I think about the software stack for LLM inference, from a JS/TS dev point of view. There are 6 levels that build on one another:
1) The model: the actual model that will be executed at inference time. Sometimes it's the provider's models (e.g. GPT-4 et al. for OpenAI), sometimes you can choose yourself (download different GGUF files and run them with llama.cpp). When I say model, I put fine-tunes, base models, and LoRAs all in the same bucket for this post - it's the weights that are being used to infer the next token.
2) The model execution engine (model backend): the models need to be run in some runtime environment to process inputs and produce tokens. Some providers have their own engines for their own models (OpenAI, AnthropicAI), others let you run open source models in the cloud (e.g. FireworksAI), and then there are engines that you can use locally (llama.cpp). The engine needs to support the architecture of the model. Some providers wrap existing open source engines, e.g. Ollama uses llama.cpp.
3) The API: the models are mostly exposed through REST APIs. With llama.cpp, you can use bindings. With WebLLM, you can run in the browser.
4) The client library: various options here. Many providers standardize on the OpenAI client library these days, but others choose to have different libs (e.g. Mistral, Google, Anthropic, Ollama). With llama.cpp you can use bindings in various languages, including JS (node bindings), or clients for the llama.cpp server.
5) The orchestration framework: handles how you integrate LLMs into apps, e.g. for chat, retrieval augmented generation (in combination with vector stores and embeddings), agents, etc. llama_index and LangChainAI are examples of orchestration frameworks.
6) UI integration: most JavaScript apps are client/server apps with a web frontend. It's important to move information from the server (where the API keys are) to the client, ideally with streaming. The Vercel AI SDK is an example of a UI integration library for AI.
This means that there are 3 types of LLM providers:
A) Integrated providers (such as OpenAI, GoogleAI, Anthropic): they train and host their own proprietary models, have their own execution engines, their own API, and provide client libraries to work with their models.
B) Open-source cloud providers (such as Fireworks, Anyscale, TogetherAI): they host open source models (and often your own models) and provide a standardized API (often OpenAI compatible).
C) Local model providers (such as llama.cpp, Ollama, WebLLM): you download and run the model on your machine. Some have their own client (e.g. Ollama).
Right now the orchestration frameworks and the UI integration are separate from the backend LLM provider stack. Do you agree? How do you see these components evolve?
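One way to make levels 3 and 4 concrete: because so many providers expose OpenAI-compatible APIs, a single request helper can target an integrated provider, an open-source cloud provider, or a local llama.cpp/Ollama server just by swapping the base URL. A sketch, assuming Node 18+ with global fetch; the URLs and model names below are placeholders:

```javascript
// One request shape, many providers: point baseUrl at any
// OpenAI-compatible endpoint (hosted API, cloud provider, or
// a local server such as llama.cpp's or Ollama's).
async function chat(baseUrl, apiKey, model, prompt) {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model,
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The orchestration and UI layers (levels 5 and 6) then sit on top of a helper like this, adding retrieval, tools, and streaming to the client.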
-
Innovative Transformational Leader | Multi-Industry Experience | AI & SaaS Expert | Generative AI | DevOps, AIOps, SRE & Cloud Technologies | Experienced Writer | Essayist | Digital Content Creator | Author
13 Must-know Open-source Software to Build Production-ready AI Apps 🧙♂️🪄✨ by Sunil Kumar Dash via The Practical Developer ([Global] GDPR) URL: https://ift.tt/feJ6hap
I've been developing both AI and non-AI applications for some time now. While creating a prototype can be relatively straightforward, building AI systems that are truly ready for the real world is a much more challenging task. The software needs to be reliable and well-maintained; adhere to security standards (SOC2, ISO, GDPR, etc.); and be scalable, performant, fail-safe, and so on. Despite all the buzz around AI, the ecosystem for developing production-ready AI applications is still in its early stages. However, considerable progress has been made recently, thanks to advancements in open-source software. So, I have compiled a list of open-source software to help you build production-ready AI applications. Click on the emojis to visit the section.
Composio 👑 - Seamless Integration of Tools with LLMs 🔗
Weaviate - The AI-native Database for AI Apps 🧠
Haystack - Framework for Building Efficient RAG 🛠️
LitGPT - Pretrain, Fine-tune, Deploy Models At Scale 🚀
DSPy - Framework for Programming LLMs 💻
Portkey’s Gateway - Reliably Route to 200+ LLMs with 1 Fast & Friendly API 🌐
Airbyte - Reliable and Extensible Open-source Data Pipeline 🔄
AgentOps - Agents Observability and Monitoring 🕵️♂️
Arize AI’s Phoenix - LLM Observability and Evaluation 🔥
vLLM - Easy, Fast, and Cheap LLM Serving for Everyone 💨
Vercel AI SDK - Easily Build AI-powered Products ⚡
LangGraph - Building Language Agents as Graphs 🧩
Taipy - Build Python Data & AI web applications 💫
Feel free to star and contribute to the repositories.
1. Composio 👑: Seamless Integration of Tools with LLMs 🔗
I have built my own tools for LLM tool calling and have used tools from LangChain and LlamaHub, but I was never satisfied with the accuracy, and many applications are unavailable. However, this was not the case with Composio.
It has over 100 tools and integrations, including but not limited to Gmail, Google Calendar, GitHub, Slack, Jira, etc. It handles user authentication and authorization for integrations on your users' behalf, so you can build your AI applications in peace. And it’s SOC2 certified. So, here’s how you can get started with it:

pip install composio-core

Add a GitHub integration:

composio add github

Composio handles user authentication and authorization on your behalf. Here is how you can use the GitHub integration to star a repository:

from openai import OpenAI
from composio_openai import ComposioToolSet, App, Action

openai_client = OpenAI(api_key="******OPENAIKEY******")

# Initialise the Composio Tool Set
composio_toolset = ComposioToolSet(api_key="******COMPOSIO_API_KEY******")

# Get GitHub tools that are pre-configured
actions = composio_toolset.get_actions(actions=[Action.GITH...
13 Must-know Open-source Software to Build Production-ready AI Apps 🧙♂️🪄✨ by Sunil Kumar Dash
dev.to
-
Hello All, Exciting #Blog Alert! 🌟 Aman Tailor from InsideTechie just shared a new blog on our website. "Integrating Artificial Intelligence with Laravel for Intelligent Web Applications" 🚀 Uncover how AI and Laravel come together for a seamless web experience. A must-read for tech enthusiasts. Check out Aman's insights here: https://lnkd.in/ddPErU8h Stay ahead with our latest insights and blogs: https://insidetechie.blog/ Don't forget to share your thoughts in the comments. #Insidetechie #AI #WebDevelopment #TechBlog #Technology #Technologyblog #Laravel #ArtificialIntelligence #blogging
Integrating Artificial Intelligence with Laravel for Intelligent Web Applications
https://insidetechie.blog
-
Solid assessment Lars. I'd add a few more LLM "providers" (or I'd call them "layers"). Regarding the (B) layer, we're also seeing inference run at the edge in addition to more centralized cloud providers. By this I mean CDN networks (Fastly/Cloudflare) that can run inference at an edge node in order to lower latency a bit to the end client. That said, shaving off a few milliseconds of latency is pretty marginal given compute time is the largest bottleneck on response latency. There are other advantages inference at the edge could provide, though, like caching responses that might be similar, etc. Regarding the (C) layer, I think that's gonna expand a fair amount to basically being an "embedded LLM" layer. Llama etc. needs a pretty beefy machine to perform. Seems like there will be a future where IoT devices have smaller specialized models embedded for certain niche tasks, and then for more CPU-bound tasks they will cascade up a chain that could look like going to the (B) layer, and failing that going to a SOTA (A) layer. Lastly, there's also the possibility of an "on-premise" layer to get inference closer to the end client/IoT device but still have beefier compute. But that only makes sense if bandwidth is the bottleneck (i.e. video, not text).
-
Full Stack Developer PHP | Laravel | Laravel Nova, Horizon, Scout, Pennant, Octane | JavaScript | Vue.js | Inertia.js, TypeScript | Flutter & Quasar Framework
🚀 Excited about the potential of Laravel in AI development! 🚀 Laravel's elegant syntax and powerful features make it an ideal choice for building robust AI platforms. With seamless integration capabilities, it allows for efficient management of data pipelines, API development, and real-time data processing. By leveraging Laravel, developers can create scalable and maintainable AI solutions, accelerating innovation and driving impactful results. Let's harness the power of Laravel to shape the future of AI! 🌟💻🤖 #Laravel #AI #TechInnovation #WebDevelopment #ArtificialIntelligence #PHP #MachineLearning
-
Laravel Finetuner: Generate training examples, save them as a .jsonl file, upload it to OpenAI, and start the fine-tuning job. Your AI model is now ready.
GitHub - halilcosdu/laravel-finetuner: Laravel Finetuner is a package designed for the Laravel framework that automates the fine-tuning of OpenAI models.
github.com
Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer
The integration of AI into web development frameworks like Laravel is gaining traction, with studies showing a 25% increase in developer interest in AI-powered tools. This trend aligns with the growing demand for intelligent applications, driven by advancements in natural language processing and machine learning. Given the emphasis on LLM flexibility, what strategies would you recommend for developers to select the most appropriate LLM based on specific application requirements like latency, accuracy, and data privacy?