Microsoft wants to put an end to its cybersecurity woes with AI and automation
A new initiative to use AI will improve Microsoft's defenses against attacks, responses to attacks, and security for customers.
What you need to know
- A new report reveals that Microsoft has announced the Secure Future Initiative (SFI) to improve its cybersecurity efforts.
- This is in direct response to growing cybersecurity threats and more frequent attacks and exploits levied against Microsoft services in recent years.
- The initiative will see the company use AI and automation to improve the security and stability of its software development.
- The company will also build an AI-powered cyber shield to detect new threats faster than current methods allow.
- Finally, Microsoft plans to improve security for its customers with more thorough encryption and superior out-of-the-box security options.
Cybersecurity is a real and pervasive threat in our digital world, and Microsoft is a constant target thanks to its influence in cloud, AI, and software. The company has weathered myriad cyberattacks in recent months and years, with multiple security flaws and exploits discovered in Microsoft Azure and other cloud products. These dangerous weaknesses, combined with criticism levied against Microsoft's security efforts, have clearly driven the company to take substantial action, and that action has been revealed today.
According to a new report from The Verge, Microsoft has announced the Secure Future Initiative (SFI), a three-tier program to improve the company's cybersecurity across all of its products and for all of its customers. The initiative leans heavily on AI and automation, and will hopefully deliver major advancements in cybersecurity for Microsoft (and the industry as a whole).
How is Microsoft using AI in security?
To begin, Microsoft intends to use AI and automation, specifically CodeQL, the code analysis engine developed by GitHub and integrated with Copilot. The engine performs static code analysis, and its deployment is part of what Microsoft is calling the dynamic Security Development Lifecycle (dSDL). It should ideally aid Microsoft's developers in finding and fixing bugs in both software and AI development, so that security flaws and exploits are more reliably caught and dealt with before they ever reach customers.
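For illustration only, here is the kind of flaw a static analysis engine like CodeQL is built to flag: a SQL query assembled from untrusted input. This is a hypothetical sketch in Python, not code from any Microsoft product; it mirrors the classic SQL injection pattern that standard static-analysis query packs look for.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # FLAW: untrusted input is interpolated directly into the SQL string.
    # A static analyzer traces `username` from an untrusted source ("taint")
    # into the query ("sink") and reports a SQL injection finding.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # FIX: a parameterized query keeps data separate from SQL code,
    # which is the remediation such a finding would typically suggest.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    # The malicious input dumps every row from the unsafe version,
    # but returns nothing from the parameterized one.
    print(find_user_unsafe(conn, "' OR '1'='1"))
    print(find_user_safe(conn, "' OR '1'='1"))
```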
How is Microsoft using AI to transform software development?
Microsoft is looking to use AI in three specific ways. First, the team at Microsoft is looking to transform the way software is developed using automation and AI. Most cybersecurity engineers and good software developers know that security has to be baked into software as it is developed and can't be tacked on afterward. In 2004, Microsoft coined the term Security Development Lifecycle (SDL), and it is now evolving that idea into the dynamic SDL (dSDL). The hope is that AI will enable continuous integration and continuous delivery of security improvements throughout all phases of the lifecycle.
Microsoft has promised to deploy CodeQL for code analysis across 100 percent of its commercial products, with the goal of eliminating software vulnerabilities during the build phase, before software is ever pushed to the public. Microsoft also discusses the need to meet customers where they are and support legacy infrastructure by offering better security controls, such as multi-factor authentication, in all of its products.
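As background on that last point, the most common second factor is a time-based one-time password (TOTP). Here is a minimal, self-contained sketch of how that mechanism works, using the open-source pyotp library; the account and issuer names are placeholders, and this is not a description of Microsoft's own MFA implementation.

```python
import pyotp

# Enrollment: the service generates a per-user secret and shares it with
# the user's authenticator app (usually via a QR code of this URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleApp"))

# Sign-in: the app derives a 6-digit code from the secret and the current
# 30-second time window; the server derives the same code and compares.
code = totp.now()
print("Current code:", code)

# verify() tolerates small clock drift (valid_window counts adjacent
# 30-second steps), so slightly out-of-sync devices still authenticate.
assert totp.verify(code, valid_window=1)
```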
How is Microsoft using AI to help with identity protection?
The second thing being upgraded is the identity protection Microsoft uses. One of the most common causes of breaches and security incidents is compromised user credentials. Microsoft wants to make it harder for a malicious actor or criminal operator to log in as a user, even if they somehow obtain a valid username and password. It plans to do this by moving identity signing keys to Azure hardware security modules (HSMs), which keep signing keys encrypted at rest, in transit, and even while in use during computation. Microsoft also promises automated key rotation for better security, and plans to enforce standardized identity libraries across all of Microsoft.
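Microsoft hasn't published the internal details, but its public azure-keyvault-keys SDK illustrates both ideas: creating an HSM-protected key and attaching an automated rotation policy. This is a minimal sketch under assumptions; the vault URL, key name, and rotation periods are all placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import (
    KeyClient,
    KeyRotationLifetimeAction,
    KeyRotationPolicy,
    KeyRotationPolicyAction,
)

# Placeholder vault URL; HSM-backed keys require a Premium-tier vault
# or a Managed HSM.
client = KeyClient(
    vault_url="https://<your-vault>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# hardware_protected=True requests an RSA-HSM key, so the private key
# material never leaves the hardware security module.
key = client.create_rsa_key("app-signing-key", size=3072, hardware_protected=True)

# Automated rotation: generate a new key version 90 days after creation,
# and expire each version after 180 days.
policy = client.update_key_rotation_policy(
    "app-signing-key",
    KeyRotationPolicy(
        lifetime_actions=[
            KeyRotationLifetimeAction(
                KeyRotationPolicyAction.ROTATE, time_after_create="P90D"
            )
        ],
        expires_in="P180D",
    ),
)
print(key.id, policy.expires_in)
```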
Most of these changes are not only internal to Microsoft but will also apply to all of its customers, both personal and enterprise. As long as the techniques used are sound and built with both security and convenience in mind, they should be great improvements.
How is Microsoft using AI to respond faster to vulnerabilities?
The final way Microsoft plans to use AI in its security is in incident and vulnerability response, with rapid cloud updates to remediate vulnerabilities. Microsoft promises that, with AI, it will be able to cut the time needed to mitigate cloud vulnerabilities by 50 percent.
Two of the biggest issues facing cybersecurity and corporations today are long detection times and long recovery times. Detection time is how long it takes a company to realize it has been compromised or breached.
Recovery time is how long it takes to get its network and devices back to a pre-compromise state. As the recent Boeing breach shows, it is often the ransomware group itself that tells a company it has been breached. Unfortunately, detection time can stretch to several months. Recovery can take even longer, often incurring significant business costs from lost revenue and the need to hire third-party incident response specialists.
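These two measures are commonly tracked as mean time to detect (MTTD) and mean time to recover (MTTR). As a simple worked example with entirely made-up incident dates, here is how those averages fall out of three timestamps per incident:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incidents: (compromised, detected, recovered) timestamps.
incidents = [
    (datetime(2023, 1, 10), datetime(2023, 4, 2), datetime(2023, 6, 15)),
    (datetime(2023, 3, 5), datetime(2023, 3, 20), datetime(2023, 5, 1)),
    (datetime(2023, 7, 1), datetime(2023, 9, 9), datetime(2023, 12, 20)),
]

# MTTD: average gap between compromise and detection.
mttd = mean((detected - compromised).days for compromised, detected, _ in incidents)
# MTTR: average gap between detection and full recovery.
mttr = mean((recovered - detected).days for _, detected, recovered in incidents)

print(f"Mean time to detect:  {mttd:.0f} days")
print(f"Mean time to recover: {mttr:.0f} days")
```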
Microsoft promises that Microsoft Security Copilot will allow incident responders to act at "machine speed" as they battle threat actors and attempt to repel attacks.
Why is Microsoft using AI for security?
Microsoft has decided to integrate AI so completely into the company that its software, security protections, and even incident response will be saturated with AI logic, and potentially its fallacies. However, Microsoft was stuck between a rock and a hard place, and AI was one way it thought it could escape.
This is a needed evolution for Microsoft, as it has been the target of, and at the center of, several high-profile security incidents and breaches over the last few years. Microsoft has been attacked by Chinese hackers, Russian hackers have compromised Microsoft Teams, DDoS attacks have disrupted Office 365, and one Microsoft breach affected 65,000 people in 111 countries. These are just some of the issues it has had to deal with in recent memory.
Do you think Microsoft should be trusting AI with its security? Do you think AI can deliver on all of these promises made by Microsoft? Let us know in the comments.
Zachary Boddy (They / Them) is a Staff Writer for Windows Central, primarily focused on covering the latest news in tech and gaming, the best Xbox and PC games, and the most interesting Windows and Xbox hardware. They have been gaming and writing for most of their life starting with the original Xbox, and started out as a freelancer for Windows Central and its sister sites in 2019. Now a full-fledged Staff Writer, Zachary has expanded from only writing about all things Minecraft to covering practically everything on which Windows Central is an expert, especially when it comes to Microsoft. You can find Zachary on Twitter @BoddyZachary.