Before joining Microsoft, Shankar dedicated over 18 years to Google, where he served as a Distinguished Engineer and the Chief Technologist for Google Cloud Security. At Google, he led transformative security and privacy initiatives across areas such as data protection, key management, authentication, authorization, insider risk, software supply chain security, and data governance. Shankar also played a key role in integrating generative AI-powered features into Google’s security products, revolutionizing security management through automation. Additionally, he spent three years contributing to Google Assistant, focusing on developer tools, identity, monetization, and discovery, further honing his expertise in assistive technologies. Today, Shankar focuses on ensuring that Microsoft AI delivers maximum product value while upholding user trust and advancing privacy-first engineering practices across Microsoft AI’s offerings.
Shankar holds a PhD and MS in Computer Science from the University of California, Berkeley, with a specialization in security and privacy, as well as a BA in Computer Science from Harvard University.
What attracted you to this advisory role with Bugcrowd? What about Bugcrowd interests you?
I was introduced to Dave Gerry, Bugcrowd CEO, by Ben Fried, a member of the Bugcrowd Advisory Board. I was really impressed by the depth of the vision Dave articulated and by the potential of AI to bring it even greater scale. Most companies cannot afford to dedicate specific teams to bug bounty programs, pen tests, and internal red teaming. Bugcrowd democratizes access to hackers who regularly find real and actionable security issues, which benefits us all. When Dave asked me to be an advisor, I was excited to help bring this vision to life.
Conversations about incorporating AI into security strategies are hot right now, but how can organizations actually build something that solves real-world security problems with AI?
I think companies will get the most leverage from two areas:
How has AI already changed the security landscape? How do you predict it will continue to change security over the next few years?
Security practices are still very manual, whether that means configuring cloud deployments, writing secure code, or investigating alerts, so there is a lot of low-hanging fruit. We are already seeing AI transform these activities.
One area I’m personally excited about is the ability to “close the loop”: not just detecting an issue, but generating a fix, getting it deployed safely, and verifying that the issue was actually fixed. We now have the tools to do this almost completely automatically in some cases, with human approval as a key part of the loop until we gain more confidence. Response is, in my opinion, the harder and less loved half of detection and response, and AI could make a huge dent here.
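To make that concrete, here is a minimal sketch of what such a loop could look like. Every function name here (detect_issues, propose_fix, deploy_patch, verify_fixed) is a hypothetical placeholder for whatever detection, LLM, CI/CD, and re-testing systems an organization already runs; the approval gate is the one step deliberately kept manual.

```python
# Minimal sketch of a "close the loop" remediation pipeline.
# Every function below is a hypothetical placeholder, not a real API.
from dataclasses import dataclass


@dataclass
class Finding:
    issue_id: str
    description: str


@dataclass
class Patch:
    finding: Finding
    diff: str


def detect_issues() -> list[Finding]:
    """Placeholder: pull findings from scanners, alerts, or a bug bounty feed."""
    return [Finding("F-001", "Hardcoded credential in config loader")]


def propose_fix(finding: Finding) -> Patch:
    """Placeholder: an LLM or agent drafts a candidate patch for the finding."""
    return Patch(finding, diff="--- a/config.py\n+++ b/config.py\n...")


def human_approves(patch: Patch) -> bool:
    """The approval gate: keep a human in the loop until confidence is earned."""
    answer = input(f"Deploy fix for {patch.finding.issue_id}? [y/N] ")
    return answer.strip().lower() == "y"


def deploy_patch(patch: Patch) -> None:
    """Placeholder: open a PR / trigger CI so the patch ships through normal gates."""
    print(f"Deploying patch for {patch.finding.issue_id}")


def verify_fixed(finding: Finding) -> bool:
    """Placeholder: re-run the original detection to confirm the issue is gone."""
    return True


def close_the_loop() -> None:
    for finding in detect_issues():
        patch = propose_fix(finding)
        if not human_approves(patch):
            continue  # nothing ships without sign-off
        deploy_patch(patch)
        print("Verified fixed" if verify_fixed(finding) else "Still failing; escalate")


if __name__ == "__main__":
    close_the_loop()
```

The design point is that the approval gate sits in front of deployment, so automation speeds up response without removing accountability, and verification closes the loop rather than assuming the fix worked.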
Code analysis at scale is also a tantalizing possibility. This does not have to be AI-only; we can use existing analysis tools and feed their results, along with the relevant parts of the code, into an LLM. Here, too, if we can create an environment where an AI agent can ask questions of the code, try out patches and fixes, and inspect runtime behavior, we could start to see truly autonomous bug finding and fixing.
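As one sketch of that hybrid approach, the example below runs a real static analyzer (Bandit, chosen purely for illustration; its -r and -f json flags and JSON report fields are real) and hands each finding, plus the surrounding code slice, to a model. The ask_llm function is a hypothetical placeholder for whatever LLM endpoint you actually use.

```python
# Sketch: pair an existing static analyzer (Bandit) with an LLM.
# The analyzer supplies precise findings; the model sees only the
# finding plus a small slice of the code around it.
import json
import subprocess
from pathlib import Path


def run_bandit(target: str) -> list[dict]:
    """Run Bandit recursively with JSON output and return its findings."""
    proc = subprocess.run(
        ["bandit", "-r", target, "-f", "json"],
        capture_output=True,
        text=True,
    )
    return json.loads(proc.stdout).get("results", [])


def code_slice(path: str, line: int, context: int = 10) -> str:
    """Grab the lines around a finding so the model sees local context."""
    lines = Path(path).read_text().splitlines()
    lo, hi = max(0, line - context - 1), min(len(lines), line + context)
    return "\n".join(lines[lo:hi])


def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real model call (hosted API or local model)."""
    raise NotImplementedError


def triage(target: str) -> None:
    for result in run_bandit(target):
        snippet = code_slice(result["filename"], result["line_number"])
        prompt = (
            f"A static analyzer reported: {result['issue_text']}\n"
            f"Severity: {result['issue_severity']}\n"
            f"Code:\n{snippet}\n\n"
            "Is this a true positive? If so, propose a minimal patch."
        )
        print(ask_llm(prompt))


if __name__ == "__main__":
    triage("src/")
```

Giving the agent more than a single prompt, such as the ability to apply a patch, run the tests, and re-run the analyzer, is what would turn this from assisted triage into the autonomous bug finding and fixing described above.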