The AI landscape in 2025 kicked off with a bang, raising fundamental questions about trust, transparency and security. From DeepSeek-R1’s launch to the AI Action Summit, the question of how to secure AI systems while ensuring their safe and ethical deployment is being asked more urgently than ever.
The UK Government’s Laboratory for AI Security Research (LASR), of which Plexal is a partner, is designed to address the biggest challenges in AI security by fostering collaboration across sectors, disciplines and borders.
It’s a fast-moving field we’ve been exploring through our panel and networking event series, LASR Lates. This edition took place in Cheltenham, where we brought together industry leaders, academics and investors to explore the challenges and opportunities in AI security today.
The evening featured insights from key figures in the field including:
- Holly Smith, Innovation Lead at Plexal (Moderator)
- Louise Cushnahan, Head of Innovation at LASR partner CSIT’s CyberAI Hub, part of Queen’s University Belfast
- Dave Palmer, an investor in cyber security and AI startups with years of experience at Darktrace
- Darren Borland, Senior Software Engineer at Pytilia and a member of the LASR Validate cohort, our first programme designed to support innovators developing AI security products
One recurring theme throughout the discussion was the importance of data provenance. As Darren highlighted, the challenges AI models face often stem from how they are built. Ensuring the ongoing security of models, understanding human interactions with AI and developing robust anomaly detection systems are all crucial to mitigating risks.
The need for explainability – an AI system’s ability to justify its decision-making – was another focus, particularly in industries such as financial services, cyber security and healthcare, where trust and transparency are paramount.

On our LASR Validate programme, Pytilia is tackling another critical issue: securing feedback loops in AI systems. AI security isn’t just about preventing malicious attacks – it’s also about ensuring that human input into AI systems remains trustworthy and secure.
From an investment perspective, Dave Palmer underscored that AI security is a lifecycle problem, not just a software problem. The landscape is evolving quickly, with well-funded US startups like HiddenLayer and Protect AI focusing on both pre- and post-deployment security, while UK-based Mindgard concentrates on pre-deployment. The challenge isn’t just securing AI before it’s deployed – it’s continuously defending against adversarial threats throughout the AI lifecycle.
How do we ensure AI systems are secure before we deploy them?
The cyber security industry has learned hard lessons in the past, often addressing security retrospectively. AI presents an opportunity to do things differently, baking security into AI systems from the outset.
Louise Cushnahan highlighted that academic research often explores AI security challenges well in advance, addressing potential issues before they gain widespread industry attention. The CyberAI Hub has been exploring AI security since 2023, engaging in industry-led projects with major players like Thales and NVIDIA. The goal is to expand this model across the UK, ensuring that research and intellectual property are leveraged to benefit the wider economy.
Collaboration between academia, industry and government – often called the triple-helix model – has proven invaluable. Darren reflected on his experiences with innovation frameworks that help SMEs connect with problem owners, ensuring that solutions are developed with real-world applications in mind.

AI security isn’t a niche concern
As AI continues to evolve, so do the threats. A pressing question from the audience was whether AI is creating entirely new threats that we haven’t seen before. The consensus? Yes. AI-powered attacks are becoming more sophisticated, faster and harder to detect. Adversarial AI can now identify weaknesses and exploit failures at an unprecedented scale, raising the stakes for cyber security professionals.
Looking further ahead, Dave and Darren discussed the implications of artificial general intelligence (AGI) and the emergence of AI agent networks. As AI systems become more autonomous and interact with each other in unpredictable ways, securing them will only become more complex. How do we put guardrails on loosely connected AI agents? What happens when security measures fail?

AI security isn’t a niche concern – it’s at the forefront of economic growth, national security and technological innovation. Whether it’s ensuring data provenance, securing feedback loops or regulating AI-powered threats, the need for collaboration across academia, industry and government has never been greater.
And at LASR Lates Cheltenham, the message was clear: AI security is both an urgent challenge and an exciting opportunity. If we get it right, we have the chance to build AI systems that are not only powerful but also safe, secure, transparent and resilient.
The conversation doesn’t stop here – it’s just getting started. Let us know if you want a LASR Lates near you.
