“In a regulated industry, you cannot rush adoption—you need a granular understanding of the risks.”

– Sandip Wadje

Sandip Wadje, Managing Director and Global Head of Emerging Technology Risks at BNP Paribas, is a seasoned Cyber Security and Technology Risk leader with 23 years of experience across Cyber Security, Operational Risk, IT Risk, Internal Controls, and Compliance. Since 2017, he has led BNP Paribas’ global oversight of emerging technologies, focusing on Cloud, Artificial Intelligence, Digital Assets, and Threat Intelligence. Sandip is an active contributor to the European financial services cyber community, serving at the European Cybercrime Centre (EC3) and co-chairing the European Financial Services Round Table (EFR) Cyber Experts Group. Known for his ability to influence regulatory guidelines and engage with board-level stakeholders, he brings a unique mix of consulting, start-up, and industry expertise to the global technology risk landscape.

Let’s have a look at the conversation CyberSecurity88 had with Mr. Sandip Wadje:



How do you approach the evolving cybersecurity landscape, particularly with the rapid integration of digital assets and an increasingly distributed attack surface?


The attack surface has indeed become more distributed with the adoption of emerging technologies. Take digital assets as an example: blockchain requires collaboration across multiple financial institutions, partners, and ecosystems. Much of that infrastructure runs in the public cloud, which creates exposure beyond the traditional firewall. Suddenly, APIs and cross-institutional dependencies open new types of attack surfaces that have not been tackled before, including the challenge of staying resilient on infrastructure that is shared across institutions.

My advice is to stay proactive. Whether the technology is digital assets, AI, or even quantum, the first step is ensuring that executives are educated on what adopting the technology means for the business. From a risk management perspective, we then need clear frameworks, policies, and procedures before rolling out these technologies. In a regulated industry, you cannot rush adoption—you need a granular understanding of both the risks and the intelligence landscape around them. That foundation shapes how we move forward.



With the rise of generative AI, what unique challenges and opportunities do you foresee for cybersecurity in banking?


This is a broad and fascinating question. Many cybersecurity tools available in the market today are, in effect, “post-AI” marketing wrapped around “pre-AI” infrastructure. The bigger challenge is that business adoption of AI is moving rapidly—whether in credit underwriting or in drafting customised client proposals across all business lines with a single agent. Entire pre-AI workflows are being ripped out and replaced with agentic processes. The cybersecurity implications of this are still emerging.

It’s also important to distinguish between risks to AI and risks due to AI. AI models themselves are vulnerable, but they also introduce new risks when generating content, including deepfakes and synthetic data. In my view, the most significant near-term risk is AI’s ability to generate content at scale that can be abused for fraud, manipulation, or social engineering. This space is evolving quickly, and it requires a living catalog of risks that we can continuously reassess.



How do you view integrating emerging technologies like AI into existing cybersecurity tools and frameworks?


The critical question is not just integration, but necessity. Do we retrofit AI into legacy tools, or do we accept that post-AI IT will look fundamentally different? Today, AI is already being layered into security operations—AI-driven red-teaming, anomaly detection, and faster log analysis. But these are still tied to legacy problems such as software vulnerabilities and traditional monitoring.
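To make the anomaly-detection idea mentioned above concrete, here is a minimal, illustrative sketch (not any specific vendor's product): it flags hours whose failed-login counts deviate sharply from the baseline, using a modified z-score based on the median absolute deviation, which stays robust to the very outliers it is trying to detect. The counts and threshold are synthetic assumptions for illustration.

```python
import statistics

def flag_anomalies(counts, threshold=3.5):
    """Return indices whose modified z-score (based on the median
    absolute deviation, which is robust to outliers) exceeds the
    threshold -- a common heuristic for spotting bursts in logs."""
    med = statistics.median(counts)
    mad = statistics.median(abs(c - med) for c in counts)
    if mad == 0:
        return []  # series is flat; nothing to flag
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# Hourly failed-login counts (synthetic): hour 5 shows a burst
# that could indicate credential stuffing.
failed_logins = [12, 9, 11, 10, 13, 95, 12, 8]
print(flag_anomalies(failed_logins))  # -> [5]
```

Real AI-driven detection learns far richer baselines than this, but the core pattern is the same: model normal behaviour, then surface the events that fall outside it.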

When agentic applications begin to dominate, security events themselves will look different. It may require a completely new security architecture, not just an upgrade of existing tools. We’re still at the early stages of seeing how that plays out.



Looking ahead, what trends do you anticipate at the intersection of cybersecurity, AI, and digital transformation?


Over the next five years, the landscape will change profoundly. Models will keep evolving, and with them, the way we analyze, enrich, and act on data. Much of a knowledge worker’s job has traditionally been about processing information. AI will accelerate the rationalization and automation of those activities across both front- and back-office functions.

From a cybersecurity standpoint, this means the scope of protection expands: it’s not just about systems and infrastructure but about safeguarding the very decision-making processes of organizations. The pace of digital transformation will only quicken.



Generative AI adoption presents both opportunities and governance challenges for banks. Where do you see the biggest hurdles, and how can banks prepare?


The first hurdle is conceptual: AI means different things to different people. Without a common understanding, it’s difficult to seize opportunities or mitigate risks. Education is therefore the first step—ensuring that leadership and staff share the same worldview of what AI is and how it is used.

Governance then becomes the critical layer. In financial institutions, the scrutiny is less about the technical workings of AI and more about its application: is it trustworthy, explainable, and aligned with regulation and customer expectations?

Preparing for this requires not just technology investment but cultural change—reskilling employees, scaling adoption responsibly, and embedding governance frameworks that ensure AI is used ethically and transparently.



Finally, what advice would you offer to cybersecurity professionals entering the industry?


Learn AI. Challenge yourself to think about how you can use AI to solve existing cybersecurity problems. Run small pilots and projects—your next role will almost certainly be a combination of cybersecurity and AI.

Get your hands dirty: experiment with models, learn Python, test ideas. The best way to prepare is to build practical insight into how AI reshapes cybersecurity. That’s what the “post-AI” job market will demand.
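In the spirit of that advice, a first pilot can be as small as a scripted heuristic that you later benchmark an AI model against. The features and weights below are purely illustrative assumptions, not a vetted phishing detector; the value of a toy like this is that it gives you a baseline and a feel for the problem before you bring in a model.

```python
import re

# Toy URL risk scorer for a phishing-triage pilot. Features and
# weights are illustrative assumptions, not production rules.
SUSPICIOUS_WORDS = ("verify", "login", "update", "secure")

def url_risk_score(url: str) -> int:
    score = 0
    if re.search(r"\d+\.\d+\.\d+\.\d+", url):  # raw IP instead of a domain
        score += 2
    if url.count("-") > 2:                     # hyphen-heavy lookalike paths
        score += 1
    if any(w in url.lower() for w in SUSPICIOUS_WORDS):
        score += 1
    if not url.startswith("https://"):         # no TLS
        score += 1
    return score

print(url_risk_score("http://192.168.0.1/verify-account-login-now"))  # -> 5
print(url_risk_score("https://example.com/docs"))                     # -> 0
```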