Bugcrowd is delighted to announce that Umesh Shankar, Corporate Vice President of Data, Privacy & Security Engineering at Microsoft AI, has joined our Board of Advisors. Shankar’s extensive experience in data protection, privacy, and AI-driven security will be invaluable as we continue to innovate our Platform. 
“I’m inspired by Bugcrowd’s mission to help organizations proactively uncover and address vulnerabilities, strengthening cybersecurity through collaboration and innovation,” says Shankar. “I am excited to join Bugcrowd’s Board of Advisors to help contribute to its efforts as it explores new ways to harness AI, foster trust, and support organizations in addressing emerging security challenges.”

Before joining Microsoft, Shankar dedicated over 18 years to Google, where he served as a Distinguished Engineer and the Chief Technologist for Google Cloud Security. At Google, he led transformative security and privacy initiatives across areas such as data protection, key management, authentication, authorization, insider risk, software supply chain security, and data governance. Shankar also played a key role in integrating generative AI-powered features into Google’s security products, revolutionizing security management through automation. Additionally, he spent three years contributing to Google Assistant, focusing on developer tools, identity, monetization, and discovery, further honing his expertise in assistant technologies. Today, Shankar focuses on ensuring that Microsoft AI delivers maximum product value while upholding user trust and advancing privacy-first engineering practices across its offerings.

Shankar holds a PhD and MS in Computer Science from the University of California, Berkeley, with a specialization in security and privacy, as well as a BA in Computer Science from Harvard University.


Get to know Umesh Shankar

What attracted you to this advisory role with Bugcrowd? What about Bugcrowd interests you?

I was introduced to Dave Gerry, Bugcrowd CEO, via Ben Fried, a member of the Bugcrowd Advisory Board. I was really impressed with the depth of the vision that Dave articulated and the potential of AI to bring even greater scale. Most companies cannot afford to dedicate specific teams to bug bounty programs, pen tests, and internal red teaming. Bugcrowd democratizes access to hackers who regularly find real and actionable security issues, which benefits us all. When Dave asked me to be an advisor, I was excited to help bring this vision to life.

Conversations about incorporating AI into security strategies are hot right now, but how can organizations actually build something that solves real-world security problems with AI?

I think companies will get the most leverage from two areas:

  • The tools they already use—This path keeps AI capabilities close to existing workflows and data so that they can operate efficiently and evolve human workflows in tandem. It has a low cost of adoption relative to other approaches.
  • Using AI as the crossbar that joins disparate tools and data—This one is more forward-looking, but the classic problem with security tools is that you have too many of them. Writing custom code and tools to extract information from each source, join it, and analyze it is very costly and is often limited to a fixed set of queries. AI is well-suited to tackling the problem of understanding data with different schemas, querying it, and reasoning across it from a single high-level intent (a rough sketch of this pattern follows below). Right now, there are some practical limits with large schemas and domain understanding, but it seems likely that at least response and hunt activities will move in this direction.
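
To make that idea concrete, here is a minimal, hedged sketch of the pattern: the model is handed the schemas of two separate tools and a single analyst intent, and it writes the cross-source query instead of a developer hand-coding one integration per tool pair. The edr_alerts and cloud_audit tables, the answer_intent() helper, and the ask_llm() wrapper are all hypothetical placeholders, not references to any specific product or API.

```python
# Illustrative sketch only: one high-level intent, answered across two
# security data sources with different schemas. All names are hypothetical.
import sqlite3


def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around whichever LLM provider you use; returns SQL text."""
    raise NotImplementedError("plug in your model provider here")


# Two disparate tools, each with its own schema (illustrative only).
SCHEMAS = {
    "edr_alerts": "CREATE TABLE edr_alerts (host TEXT, user TEXT, rule TEXT, ts TEXT)",
    "cloud_audit": "CREATE TABLE cloud_audit (principal TEXT, action TEXT, resource TEXT, ts TEXT)",
}


def answer_intent(intent: str, db: sqlite3.Connection) -> list:
    # Hand the model the schemas plus the analyst's high-level intent and let it
    # write the cross-source query, instead of hand-coding per-tool integrations.
    prompt = (
        "You are a security analyst assistant. Given these table schemas:\n"
        + "\n".join(SCHEMAS.values())
        + f"\n\nWrite a single SQLite query that answers: {intent}\nReturn only SQL."
    )
    sql = ask_llm(prompt)
    return db.execute(sql).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    for ddl in SCHEMAS.values():
        conn.execute(ddl)
    # Example intent spanning both sources:
    # answer_intent("Which users triggered EDR alerts within an hour of "
    #               "deleting a cloud resource yesterday?", conn)
```

In practice you would validate or sandbox the generated query before running it; the point is only that the schema-mapping and joining work shifts from custom integration code to the model.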

How has AI already changed the security landscape? How do you predict it will continue to change security over the next few years?

Security practices are still very manual, whether that involves configuring cloud deployments, writing secure code, or investigating alerts. So there is a lot of low-hanging fruit. We are already seeing AI transform these activities.

One area I’m personally excited about is the ability to “close the loop”—not just detecting an issue but generating a fix, getting it deployed safely, and verifying that the issue was fixed. We now have the tools to do these things almost completely automatically in some cases, with human approval as a key part until we gain more confidence. Response is, in my opinion, the harder and relatively less loved half of detection and response. AI could make a huge dent here.

Code analysis at scale is also a tantalizing possibility. This does not have to be AI-only; we can use existing analysis tools and feed the results and relevant parts of code into an LLM. Here, too, if we can create an environment for an AI agent to ask questions of the code, try out patches and fixes, and inspect runtime behavior, we could start to see real autonomous bug finding and fixing.