Calling AI APIs with JavaScript: A Simple Example (Soon with your Center of Excellence: October 2024)

Artificial intelligence (AI) is revolutionizing the way we interact with technology, and AI APIs are making it easier than ever to integrate AI into our applications. In this post, I'll show you how to call an AI API using the JavaScript fetch API.

Here's an example of how to call the OpenAI API's text completion endpoint and ask for a text about IT business challenges:

JavaScript
// node-fetch is only needed on Node versions before 18; newer Node ships fetch natively
const fetch = require('node-fetch');

const API_ENDPOINT = 'https://lnkd.in/eF9jZxdA'; // shortened link to the text completion endpoint
const API_KEY = 'YOUR_API_KEY';

const prompt = 'Write a text related to IT business challenges.';

// The OpenAI API authenticates with a Bearer token and expects a JSON request body
const headers = {
  'Authorization': `Bearer ${API_KEY}`,
  'Content-Type': 'application/json',
};

const body = JSON.stringify({
  prompt: prompt,
  max_tokens: 256,
  temperature: 0.7,
});

const requestOptions = {
  method: 'POST',
  headers: headers,
  body: body,
};

fetch(API_ENDPOINT, requestOptions)
  .then(response => response.json())
  .then(data => {
    if (!data.choices || data.choices.length === 0) {
      throw new Error('No completions returned');
    }
    const completion = data.choices[0].text;
    console.log(completion);
  })
  .catch(error => {
    console.error(error);
  });

This script will do the following:
- Call the OpenAI API's text completion endpoint
- Ask for a text related to IT business challenges
- Return the generated text

Here's a breakdown of the script:
- Import the fetch library: this library is used to make HTTP requests (built into Node 18+).
- Define the API endpoint: this is the URL of the OpenAI API endpoint that we want to call.
- Define your API key: this is the key that allows you to use the OpenAI API.
- Define the prompt: this is the text that you want the AI to generate a completion for.
- Create headers: these authenticate the request with the OpenAI API and declare the JSON content type.
- Build the request body: a JSON object carrying the prompt along with the max_tokens and temperature parameters.
- Create request options: these specify the request method, headers, and body to send.
- Make the request: this sends the request to the API and returns a response.
- Parse the response: this extracts the data from the response and converts it into a JavaScript object.
- Check for errors: this throws an error if the response contains no completions.
- Get the completion: this extracts the generated text from the response object.
- Print the completion: this prints the generated text to the console.

The AI will return a completion with a maximum of 256 tokens and a temperature of 0.7. The temperature parameter controls the creativity of the generated text, with higher values resulting in more creative but potentially less coherent text.

#sageuniversity #sagepartner
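If you prefer async/await, here's an equivalent sketch. It also checks the HTTP status code, which the promise chain above skips (a failed request would otherwise only surface as a missing choices array). It reuses API_ENDPOINT and API_KEY from the script above:

JavaScript
// Same request as above, written with async/await and an explicit HTTP status check
async function complete(prompt) {
  const response = await fetch(API_ENDPOINT, {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ prompt, max_tokens: 256, temperature: 0.7 }),
  });
  // Surface HTTP-level failures (invalid key, rate limiting, ...) explicitly
  if (!response.ok) {
    throw new Error(`API error: ${response.status} ${response.statusText}`);
  }
  const data = await response.json();
  if (!data.choices || data.choices.length === 0) {
    throw new Error('No completions returned');
  }
  return data.choices[0].text;
}

complete('Write a text related to IT business challenges.')
  .then(console.log)
  .catch(console.error);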
More Relevant Posts
-
Scrape Websites with OpenAI Function Calling in JavaScript https://lnkd.in/dVhaD7ma Web scraping with OpenAI allows for resilient data extraction from websites using JavaScript. It leverages natural language processing to handle changes in HTML structure. This article provides a code example for scraping product data from an ecommerce website. #webcrawling #webscraping
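The article's own code isn't reproduced in this preview, but the core idea can be sketched with the official openai npm package: declare a function schema describing the data you want, and let the model fill in the arguments from raw HTML. The model name and the record_products schema below are illustrative assumptions, not the article's actual code:

JavaScript
// Sketch: extracting product data from HTML via OpenAI function calling
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function extractProducts(pageHtml) {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // illustrative; any function-calling-capable model works
    messages: [
      { role: 'user', content: `Extract all products from this HTML:\n${pageHtml}` },
    ],
    tools: [{
      type: 'function',
      function: {
        name: 'record_products',
        description: 'Record the products found on the page',
        parameters: {
          type: 'object',
          properties: {
            products: {
              type: 'array',
              items: {
                type: 'object',
                properties: {
                  name: { type: 'string' },
                  price: { type: 'string' },
                },
              },
            },
          },
        },
      },
    }],
    // Force the model to answer through the declared function
    tool_choice: { type: 'function', function: { name: 'record_products' } },
  });

  // The answer arrives as structured function arguments rather than scraped
  // CSS selectors, which is what makes this resilient to HTML changes.
  const toolCall = response.choices[0].message.tool_calls[0];
  return JSON.parse(toolCall.function.arguments).products;
}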
-
I built my first AI project on a weekend. Here are the 7 steps I followed:

I wanted to create a personal assistant. So I sat down and wrote what I needed:
- Something that can write text like humans. It sounds like a job for an LLM.
- A UI that allows the user to ask questions and the chatbot to respond.
- A simple way to publish this application.

You can't train an LLM on a weekend, so borrow one; I used OpenAI, but you can use your own fine-tuned LLMs. Now for the fun parts (the UI, the code, and the hosting), I used 𝗧𝗮𝗶𝗽𝘆.

Taipy is an open-source Python library designed for easy development of data-driven web applications. It covers both the front end and the back end, allowing users to develop the whole back end of an application and to model dataflows and pipelines. It was perfect for my weekend idea (and probably for many of your ideas, too).

𝗪𝗶𝘁𝗵 𝗧𝗮𝗶𝗽𝘆:
- You can build the whole back end and front end without knowing much about HTML, CSS, and JS.
- You have access to Taipy Cloud, designed to simplify web application development and deployment.

Everything starts here: '$ pip install taipy'

From there, 7 simple steps:
1. Add your imports.
2. Write the request and send_message functions. The request function takes the user message as input and returns the response from the LLM. The send_message function adds the user's message to the context, sends it to the API, and then displays the conversation.
3. Now, the only missing piece was the UI. Taipy lets you define pages using Markdown strings. It couldn't be easier. I used a table to display the conversation and an input so the user could type their message. When the user presses enter, the UI calls the send_message() function.
4. I added some styling, and … I have my Personal Assistant.
5. From here, I just connected to Taipy Cloud, clicked on "Add Machine," filled in the fields, and added a new Application. This took me less than 5 minutes.
6. There is only one configuration step pending: adding my environment variable to hold the OpenAI key. (Keep your keys out of source code.)
7. Now for the final step: zip all files, upload them, and click "Deploy app." Wait for the deployment to complete, and share the link with the people you want to impress.

𝗧𝗮𝗶𝗽𝘆 𝗣𝗹𝗮𝘁𝗳𝗼𝗿𝗺 𝗔𝗹𝘀𝗼:
- Helps you manage dataflows and optimizes task performance and pipelines.
- Provides a REST API.
- Includes a cache system that lets it skip repetitive tasks.
- Supports authentication/authorization.

𝗪𝗵𝘆 𝗱𝗶𝗱 𝗜 𝗰𝗵𝗼𝗼𝘀𝗲 𝗧𝗮𝗶𝗽𝘆 𝗼𝘃𝗲𝗿 𝗦𝘁𝗿𝗲𝗮𝗺𝗹𝗶𝘁?
- It is designed for both prototyping and production.
- It can scale.
- It provides much better performance.
- It supports large data.
- It has a lot of UI components.
- It is truly multi-user and provides for different user profiles.

Open-source AI is taking over the world, and Taipy is part of it. Give them a star: https://lnkd.in/eYsaZGEr

Disclaimer: This post was sponsored by Taipy.
-
Here's how I think about the software stack for LLM inference, from a JS/TS dev point of view. There are 6 levels that build on one another:

1) The model: the actual model that will be executed at inference time. Sometimes it's the provider's models (e.g. GPT-4 et al. for OpenAI), sometimes you can choose yourself (download different GGUF files and run them with llama.cpp). When I say model, I put fine-tunes, base models, and LoRAs all in the same bucket for this post - it's the weights that are being used to infer the next token.

2) The model execution engine (model backend): the models need to run in some runtime environment to process inputs and produce tokens. Some providers have their own engines for their own models (OpenAI, AnthropicAI), others let you run open-source models in the cloud (e.g. FireworksAI), and then there are engines that you can use locally (llama.cpp). The engine needs to support the architecture of the model. Some providers wrap existing open-source engines, e.g. Ollama uses llama.cpp.

3) The API: the models are mostly exposed through REST APIs. With llama.cpp, you can use bindings. With WebLLM, you can run in the browser.

4) The client library: various options here. Many providers standardize on the OpenAI client library these days, but others choose to have their own libs (e.g. Mistral, Google, Anthropic, Ollama). With llama.cpp you can use bindings in various languages, including JS (Node bindings), or clients for the llama.cpp server.

5) The orchestration framework: handles how you integrate LLMs into apps, e.g. for chat, retrieval-augmented generation (in combination with vector stores and embeddings), agents, etc. llama_index and LangChainAI are examples of orchestration frameworks.

6) UI integration: most JavaScript apps are client/server apps with a web frontend. It's important to move information from the server (where the API keys are) to the client, ideally with streaming. The Vercel AI SDK is an example of a UI integration library for AI.

This means that there are 3 types of LLM providers:

A) Integrated providers (such as OpenAI, GoogleAI, Anthropic): they train and host their own proprietary models, have their own execution engines, their own API, and provide client libraries to work with their models.

B) Open-source cloud providers (such as Fireworks, Anyscale, TogetherAI): they host open-source models (and often your own models) and provide a standardized API (often OpenAI-compatible).

C) Local model providers (such as llama.cpp, Ollama, WebLLM): you download and run the model on your machine. Some have their own client (e.g. Ollama).

Right now, the orchestration frameworks and the UI integration are separate from the backend LLM provider stack.

Do you agree? How do you see these components evolve?
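One concrete way to see levels 2 through 4 interact: because many backends expose OpenAI-compatible APIs, the same client library can target a hosted provider or a local engine just by changing the base URL. A minimal sketch, assuming the openai npm package and a local Ollama server (the port is Ollama's default; the model name is whatever you've pulled locally):

JavaScript
import OpenAI from 'openai';

// Same client library (level 4), different backend (levels 2-3):
// Ollama serves an OpenAI-compatible API on localhost.
const client = new OpenAI({
  baseURL: 'http://localhost:11434/v1',
  apiKey: 'ollama', // local servers ignore the key, but the client requires one
});

const stream = await client.chat.completions.create({
  model: 'llama3', // assumed to be pulled locally
  messages: [{ role: 'user', content: 'Explain GGUF in one sentence.' }],
  stream: true, // streaming is the level-6 concern: tokens flow on to the UI
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}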
-
Solid assessment, Lars. I'd add a few more LLM "providers" (or I'd call them "layers").

Regarding the (B) layer, we're also seeing inference run at the edge in addition to more centralized cloud providers. By this I mean CDN networks (Fastly/Cloudflare) that can run inference at an edge node in order to lower latency a bit for the end client. That said, shaving off a few milliseconds of latency is pretty marginal given that compute time is the largest bottleneck on response latency. There are other advantages inference at the edge could provide, though, like caching responses that might be similar, etc.

Regarding the (C) layer, I think that's gonna expand a fair amount to basically being an "embedded LLM" layer. Llama etc. need a pretty beefy machine to perform. Seems like there will be a future where IoT devices have smaller specialized models embedded for certain niche tasks, and then for more CPU-bound tasks they will cascade up a chain that could look like going to the (B) layer, and failing that, going to a SOTA (A) layer.

Lastly, there's also the possibility of an "on-premises" layer to get inference closer to the end client/IoT device but still have beefier compute. But that only makes sense if bandwidth is the bottleneck (i.e. video, not text).
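The cascade described in the middle paragraph can be sketched as a simple fallback chain. All three completion functions below are hypothetical placeholders for an embedded model, a (B)-layer provider, and a SOTA (A)-layer provider:

JavaScript
// Hypothetical placeholders for the three layers
async function embeddedComplete(prompt) {
  // Simulate a task the small on-device model can't handle
  throw new Error('task exceeds embedded model capacity');
}
async function ossCloudComplete(prompt) {
  return `[open-source cloud answer to] ${prompt}`;
}
async function sotaComplete(prompt) {
  return `[SOTA provider answer to] ${prompt}`;
}

// Try the closest/cheapest layer first; escalate on failure
async function cascadeComplete(prompt) {
  const layers = [
    ['embedded (C)', embeddedComplete],
    ['open-source cloud (B)', ossCloudComplete],
    ['integrated SOTA (A)', sotaComplete],
  ];
  for (const [name, run] of layers) {
    try {
      return await run(prompt);
    } catch (err) {
      console.warn(`${name} failed, escalating: ${err.message}`);
    }
  }
  throw new Error('all layers failed');
}

cascadeComplete('Summarize this sensor log.').then(console.log);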