Microsoft Probes if DeepSeek-linked Group Improperly Obtained OpenAI Data, Bloomberg News Reports.

Microsoft (MSFT.O) and OpenAI are probing whether data output from the ChatGPT maker's technology was obtained in an unauthorized manner by a group linked to Chinese artificial intelligence (AI) startup DeepSeek, Bloomberg News reported on Tuesday. Microsoft's security researchers observed individuals they believed to be connected to DeepSeek exfiltrating a large amount of data through OpenAI's application programming interface (API) in the fall, the report said. OpenAI's API is the main way that software developers and business customers buy OpenAI's services. Microsoft, the largest investor in OpenAI, notified the company of the suspicious activity, according to the Bloomberg report.

Low-cost Chinese AI startup DeepSeek, an alternative to U.S. rivals, sparked a tech stock selloff on Monday as its free AI assistant overtook OpenAI's ChatGPT on Apple's (AAPL.O) App Store in the United States. David Sacks, the White House's AI and crypto czar, told Fox News in an interview earlier on Tuesday that it was "possible" that DeepSeek stole intellectual property from the United States. "There's substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI's models," Sacks said.

Asked for comment on the Bloomberg report, an OpenAI spokesperson echoed Sacks in a statement noting that China-based companies and others were constantly attempting to replicate the models of leading U.S. AI companies, without specifically naming DeepSeek or any other company. "We engage in counter-measures to protect our IP, including a careful process for which frontier capabilities to include in released models, and believe as we go forward that it is critically important that we are working closely with the U.S. government to best protect the most capable models from efforts by adversaries and competitors to take U.S. technology." Microsoft declined to comment, while DeepSeek could not be immediately reached for comment.

#DeepSeekAI #Microsoft #LatestNews #Newsfeed #AILatestNews
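For readers unfamiliar with what "buying services through the API" looks like in practice, here is a minimal sketch of the kind of JSON request a client assembles for a chat-completions-style endpoint. The model name, endpoint URL, and key are illustrative placeholders (not confirmed by the report), and no network call is made:

```python
import json

# Illustrative request body for a chat-completions-style API.
# Model name, endpoint, and credential are placeholders only.
endpoint = "https://api.openai.com/v1/chat/completions"
payload = {
    "model": "gpt-4o-mini",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Summarize today's AI news in one sentence."}
    ],
}
headers = {
    "Authorization": "Bearer YOUR_API_KEY",  # placeholder credential
    "Content-Type": "application/json",
}
body = json.dumps(payload)  # this string would be POSTed to the endpoint
```

Every such request and its response pass through the provider's API layer, which is why unusual volumes of API traffic are the kind of signal security researchers can observe.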
Time Booster Marketing’s Post
-
“Microsoft and OpenAI are probing if data output from the ChatGPT maker's technology was obtained in an unauthorized manner by a group linked to Chinese artificial intelligence (AI) startup DeepSeek, Bloomberg News reported on Tuesday. Microsoft's security researchers observed individuals they believed to be connected to DeepSeek exfiltrating a large amount of data through OpenAI's application programming interface (API) in the fall, the report said.” From Reuters
-
“OpenAI, the company behind ChatGPT, says it has proof that the Chinese start-up DeepSeek used its technology to create a competing artificial intelligence model — fueling concerns about intellectual property theft in the fast-growing industry… Security researchers at Microsoft, which has poured billions into OpenAI, discovered last fall that individuals with possible links to DeepSeek were harvesting vast troves of data through OpenAI’s application programming interface, or API, sources told Bloomberg.” Congressional China hawks will come down hard on export controls and distillation prevention. "As we go forward... it is critically important that we are working closely with the US government to best protect the most capable models," per OpenAI's statement.
-
OpenAI said Wednesday that it is probing whether Chinese artificial intelligence startup DeepSeek improperly used its data to launch the low-cost AI model that sparked a market selloff Monday. #openai #DeepSeek #ai #Microsoft #chatgpt #aiapps #deepseekai #businessdor https://lnkd.in/dNNArEcT
-
By Eleanor Olcott via Financial Times OpenAI has found evidence that Chinese AI start-up DeepSeek used its proprietary models to train an open-source competitor, potentially violating intellectual property rights through a process called "distillation." Microsoft and OpenAI blocked accounts suspected to belong to DeepSeek after investigations last year, while industry insiders highlight that distillation is a common practice, making enforcement challenging. DeepSeek's rapid success with its R1 reasoning model—trained at a fraction of the cost of comparable US models—has rattled Silicon Valley, contributing to a 17% drop in Nvidia’s stock amid fears that expensive AI hardware investments may be unnecessary. 𝕏: I'm so sorry I can't stop laughing. OpenAI, the company built on stealing literally the entire internet, is crying because DeepSeek may have trained on the outputs from ChatGPT. They're crying their eyes out. What a bunch of hypocritical little babies. - Ed Zitron (@edzitron) https://lnkd.in/gk3Da5f4
-
⚡ OpenAI has introduced “ChatGPT Gov”, a revolutionary tool for the public sector. This innovative AI tool aims to transform the way government departments deliver services. For details, read the article…
-
💥 𝗔𝗜 𝗠𝗢𝗗𝗘𝗟 𝗗𝗜𝗦𝗧𝗜𝗟𝗟𝗔𝗧𝗜𝗢𝗡: 𝗜𝗡𝗦𝗣𝗜𝗥𝗔𝗧𝗜𝗢𝗡 𝗢𝗥 𝗜𝗠𝗜𝗧𝗔𝗧𝗜𝗢𝗡? ©️ In the latest development in the DeepSeek saga, Microsoft and OpenAI are reportedly investigating whether DeepSeek exfiltrated OpenAI's #data without its consent, via a process known as Knowledge Distillation (KD), to train its R1 model. 𝗪𝗵𝗮𝘁 𝗶𝘀 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗗𝗶𝘀𝘁𝗶𝗹𝗹𝗮𝘁𝗶𝗼𝗻? A machine learning (ML) technique whereby a larger pre-trained model (the Parent Model) transfers its learnings to an as-yet-untrained model (the Student Model). The Student Model asks the Parent Model millions of questions and then, via ML, learns to mimic its reasoning. In the case of DeepSeek-R1, the student has seemingly become the master, with US export controls acting as the mother of invention. 𝗪𝗵𝗮𝘁'𝘀 𝘁𝗵𝗲 𝗮𝗹𝗹𝗲𝗴𝗮𝘁𝗶𝗼𝗻? David Sacks, President Trump's new #AI and #Crypto Czar, claims there is substantial evidence that DeepSeek distilled the knowledge of OpenAI's models using OpenAI's application programming interface (#API) to train its R1 model. Microsoft, OpenAI's largest investor, notified OpenAI of the suspicious activity it identified, according to a Bloomberg report. It seems that DeepSeek is yet to comment. 𝗪𝗵𝗮𝘁 𝗵𝗮𝗽𝗽𝗲𝗻𝘀 𝗻𝗼𝘄? If credible, evidence of KD by DeepSeek would infringe OpenAI's Terms of Service and, potentially, its intellectual property (IP) rights. There are of course several barriers to OpenAI taking any legal action, including the obvious jurisdictional challenges. And of course, OpenAI is no stranger to such claims of IP infringement itself. In an interview with Fox News yesterday, Sacks predicted that in the coming days and weeks, AI model providers will take active steps to prevent unauthorised distillation by third parties. More to follow on this from my team and me at Charles Russell Speechlys - stay tuned. #charlesrussellspeechlys #ai #machinelearning #training #datasets #copyright #intellectualproperty #competition
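The parent/student mechanics described above can be sketched in a few lines of plain Python. This is a toy illustration under heavy simplifying assumptions (one query, raw logits standing in for whole models, hand-rolled gradient descent), not anyone's actual training pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    # Higher temperature "softens" the distribution, exposing more of the
    # parent model's relative preferences between answers.
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the parent's soft targets and the student's
    # softened predictions: the classic distillation objective.
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return -sum(ti * math.log(si) for ti, si in zip(t, s))

# Toy setup: the parent model's logits for one query stand in for an API
# response; the untrained student starts from uniform logits.
teacher_logits = [4.0, 1.0, 0.5]
student_logits = [0.0, 0.0, 0.0]

lr, T = 2.0, 2.0
teacher_soft = softmax(teacher_logits, T)
for _ in range(500):
    student_soft = softmax(student_logits, T)
    # Gradient of the soft cross-entropy w.r.t. the student's logits
    # is (student_probs - teacher_probs) / T; step against it.
    student_logits = [z - lr * (si - ti) / T
                      for z, si, ti in zip(student_logits, student_soft, teacher_soft)]
```

After the loop the student's softened distribution closely tracks the parent's, which is the whole point of distillation: the student never sees the parent's weights or training data, only its outputs.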
-
OpenAI launches multitasking AI. OpenAI is introducing an #AItool capable of managing a user’s computer while simultaneously handling various tasks, such as organizing travel and placing online orders. Named Operator, this application is currently accessible to a limited number of ChatGPT Pro subscribers. #OpenAI refers to it as a "research preview" and seeks feedback from early users to enhance its functionality. The Operator functions as an AI agent, a technology developed by companies like Google and Anthropic. What are your thoughts? How do you see AI agents influencing the future of your industry? We'd love to hear your insights in the comments below. #aiagents #openaioperator
-
DeepSeek: Microsoft & OpenAI Suspect Borrowed Tech 🕵️♂️💻 News reports suggest that Microsoft and OpenAI are investigating whether groups linked to DeepSeek stole data from OpenAI. Microsoft security researchers observed groups they believe are linked to DeepSeek exfiltrating large amounts of data from OpenAI several months ago. #Microsoft #OpenAI #DeepSeek #AI #ArtificialIntelligence
-
Wow, this is just another reason why it is a must to have MDM and MTD solutions to protect your employees, your customers, and your business's reputation. Think twice before you let company devices have access to these types of tools. Reach out to me and Eric W. if you want some guidance!
DeepSeek, the new AI platform from China, is a hot topic. Unauthorized use of OpenAI's platform may be one reason that DeepSeek cost less to train than other AI platforms. Microsoft and OpenAI, the company behind ChatGPT, are investigating whether an OpenAI Application Programming Interface (API) was used to exfiltrate a large amount of data last fall. The individuals who performed the data exfiltration are believed to be connected to DeepSeek. The data privacy and US national security concerns for DeepSeek are even worse than they are for TikTok. Collected data is stored and processed on servers located in China. Here are some additional concerns:
⌨️ Keystroke logging: DeepSeek tracks everything you type in the app. This data can be mined for personally identifying information and intellectual property.
💻 IP address collection: Technical information about your device, including IP address, is collected by DeepSeek. This can be used to determine the user's location and other details.
💽 Third-party data: DeepSeek may combine your data with info from other sources. This can involve building profiles for advertising and other analytics; combining information from various sources can reveal many details about DeepSeek users.
As always, use caution when providing information to AI platforms. Everything you provide can be used to train the AI platform or to further its other data collection objectives.
-
We all now know how OpenAI took the world by storm, democratized access to large language models (LLMs), and demonstrated to everyone how useful #GenerativeAI can be. We also witnessed several "oh shit" moments when proprietary information ended up in public through the continuous refinement/training of the public models. With ChatGPT, OpenAI pulled a page from the social media platform playbook and offered free access to its tools, with the intent to train and optimize its models based on the usage patterns and content of mostly happy users. They learned a lot about hallucinations and continue to improve their models.

While this is great for "common" end users, corporate users need to think about privacy and compliance as well. There's a strong case for private instances when using LLMs and GenAI internally in organizations: regulations, compliance, and privacy often dictate that information cannot risk getting commingled with the public domain. One of the first to offer a convenient way to build these private instances was AWS, with its Bedrock framework and various underlying foundation models, including Anthropic's Claude, for building truly private environments. Later, Google Gemini and, to a degree, Azure AI followed suit. Just recently, even Accenture announced a platform built with Meta Llama and Nvidia to offer services for private implementations, and some companies have acquired private licenses from OpenAI to run in their own data centers.

Organizations and decision makers should ask themselves: "Would I be OK if our data shows up as an answer to our competitor's ChatGPT prompt?" If the answer is yes, by all means use Copilot and other tools that are available for free or at very low cost. If the answer is no, then the first thing you need to do is ensure training and compliance for your workforce, and ideally offer them an environment that is private and safe for everyone to use.

These GenAI solutions are so useful for functions such as customer service, marketing, HR, and IT that it would be detrimental not to leverage them. One final point: various LLMs have different strengths depending on the use case, and their cost depends heavily on the way they are used. It's important to consider the approaches and use cases while balancing the costs. For instance, it might be tempting to use an LLM to create abstracts for complex project plans, but there might be a large hidden cost in token consumption that makes an alternative route the smarter one. Work with a partner who can help you navigate these questions and guide you to the best implementation roadmap!
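The hidden-cost point can be made concrete with back-of-the-envelope arithmetic. The per-1,000-token prices and document sizes below are illustrative assumptions for the sketch, not any vendor's actual rates:

```python
def estimate_request_cost(input_tokens: int, output_tokens: int,
                          price_in_per_1k: float = 0.01,
                          price_out_per_1k: float = 0.03) -> float:
    """Estimated dollar cost of one LLM API request at assumed token prices."""
    return (input_tokens / 1000) * price_in_per_1k \
         + (output_tokens / 1000) * price_out_per_1k

# A complex project plan might run ~40,000 input tokens to produce a
# ~500-token abstract; at the assumed rates that is $0.415 per document.
per_document = estimate_request_cost(40_000, 500)
monthly = per_document * 1_000  # 1,000 documents/month at these assumptions
```

Running this for a realistic monthly volume quickly shows how summarization of long inputs is dominated by input-token cost, which is exactly the kind of analysis worth doing before committing to an approach.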