The Microsoft Team Racing to Catch Bugs Before They Happen


As cybercriminals, hackers, and state-backed attackers continue to flood the world with digital attacks and aggressive campaigns, it’s no surprise that the creator of the ubiquitous Windows operating system is focused on security defense. Microsoft’s Patch Tuesday update releases often contain fixes for critical vulnerabilities, including those being actively exploited by attackers around the world.

The company already has the teams needed to look for weaknesses in its code (the “red team”) and develop mitigations (the “blue team”). But recently, that format evolved again, in the hope of promoting more collaboration and interdisciplinary work, and of spotting even more bugs and flaws before things start to spiral. Known as Microsoft Offensive Research & Security Engineering, or Morse, the department combines the red team, the blue team, and the so-called green team, which focuses on finding flaws (or taking weaknesses the red team has found) and fixing them more systemically, through changes in how things are done within the organization.

“People are convinced that you can’t move forward without investing in security,” says David Weston, Microsoft’s vice president of enterprise and operating system security, a 10-year veteran of the company. “I’ve been in security for a long time. For most of my career, we were considered nuisances. Now, if anything, leaders come up to me and say, ‘Dave, am I OK? Did we do everything we could?’ It’s been a big change.”

Morse has been working to promote safe coding practices at Microsoft, so fewer bugs end up in the company’s software in the first place. OneFuzz, an open-source Azure testing framework, lets Microsoft developers continuously and automatically test their code against all kinds of unusual use cases, uncovering flaws that wouldn’t be noticeable if the software were only used exactly as intended.
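The idea behind this kind of continuous fuzz testing can be sketched in a few lines. The parser below is a hypothetical example target (it is not Microsoft code, and this is not the OneFuzz API): a harness feeds it thousands of pseudo-random inputs that no developer would type by hand and treats any crash as a bug.

```python
import random

def parse_length_prefixed(data: bytes):
    """Hypothetical fuzz target: the first byte declares the payload length."""
    if not data:
        return None
    length = data[0]
    if len(data) - 1 < length:
        # Checked bound: a naive parser would read past the end of the buffer here.
        return None
    return data[1:1 + length]

def run_fuzz(iterations: int, seed: int = 0) -> int:
    """Feed pseudo-random inputs to the target; return how many ran without crashing."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(iterations):
        # Generate an unusual input: random length, random bytes.
        case = bytes(rng.randrange(256) for _ in range(rng.randrange(16)))
        parse_length_prefixed(case)  # an uncaught exception here would be a bug
        survived += 1
    return survived
```

A framework like OneFuzz runs harnesses in this spirit at scale in Azure, with crash triage and reporting; the contract is the same either way: generate inputs the developer never anticipated, and flag any that make the target misbehave.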

The combined team has also been at the forefront of promoting safer programming languages (such as Rust) across the company, and has advocated for building security analysis tools directly into the actual software compiler used in the company’s production workflow. That change has had an impact, Weston says, because it means developers are no longer doing what-if analysis in a simulated environment, a step removed from actual production, where some bugs can be overlooked.

Morse’s team says the shift to proactive security has led to real progress. In one recent example, Morse members were researching legacy software, an important part of the group’s work, since much of the Windows code base was developed before these extended security reviews. While examining how Microsoft had implemented Transport Layer Security 1.3, the foundational cryptographic protocol used on networks like the Internet for secure communication, Morse discovered a remotely exploitable bug that could have allowed attackers to gain access to targets’ devices.

As Mitch Adair, Microsoft’s security lead for Cloud Security, put it: “It would have been as bad as it gets. TLS is used to protect basically every service and product that Microsoft uses.”


