Abstract

Artificial intelligence (AI) will be increasingly integrated into the London Stock Exchange between 2025 and 2030 to strengthen fraud detection and market surveillance. As trading becomes faster and more complex, traditional monitoring systems struggle to analyse large volumes of transactional data and identify irregular trading behaviour efficiently. AI-driven algorithms are expected to process real-time market activity at high speed and precision, enabling the detection of unusual trading patterns, the identification of potential market manipulation and the analysis of vast datasets with significantly greater accuracy than current rule-based systems. This study evaluates the effectiveness of machine learning and pattern-recognition models in monitoring trading activity on the London Stock Exchange by comparing their performance with traditional surveillance methods. By analysing their accuracy, detection latency, adaptability to new fraud strategies and rates of false positives, the research expects to demonstrate that AI-enhanced surveillance systems outperform conventional approaches across multiple dimensions. The study anticipates that AI will not only reduce the workload for analysts and speed up investigations but will also enable more proactive market protection by identifying risks before they escalate. The findings will be significant for sustaining market integrity, strengthening investor confidence and guiding responsible AI adoption in global financial markets over the next five years.

Introduction

Financial markets play a critical role in supporting global economic activity; however, as trading volumes grow and financial instruments become more complex, maintaining fair and stable markets has become increasingly challenging. Market surveillance grows in importance as fraud becomes more sophisticated, and the case for integrating artificial intelligence (AI) into that surveillance is correspondingly strong. Traders attempt to cheat the market through methods such as insider trading, spoofing and price manipulation. All of these can be difficult and time-consuming for automated systems and human analysts to detect, creating gaps in oversight. Such schemes are typically spread across many venues and produce only slight disturbances, hiding beneath the mass of legitimate trading activity, which makes them nearly impossible for dated systems to detect.

Before the integration of AI, the London Stock Exchange (LSE) relied on traditional surveillance methods in which analysts manually reviewed transactions and used cross-checking to detect fraud. Cross-checking is where an analyst compares multiple independent sources to try to identify anything out of the ordinary. Though it worked, the approach was slow, labour-intensive and unable to keep up with the rapid expansion of electronic trading. Although automated rule-based systems were later introduced to improve efficiency, they generated a high number of false positives and struggled to adapt to evolving fraud techniques. These automated systems could not operate independently and would only detect what they were specifically programmed to detect, meaning they would not flag unknown patterns or newly developed fraud techniques. Fraudsters could also adapt and change their methodologies to work around these automated systems.

AI offers an efficient solution to these limitations, because modern AI systems are able to analyse vast volumes of complex trading data quickly and to a high standard. Furthermore, they are able to reduce false alerts, identify subtle anomalies and even predict potentially fraudulent activity before it takes place (Cao, 2022). AI systems excel at pattern recognition and identifying irregularities, which makes them particularly effective for detecting fraud in financial markets. These new AI models can also detect never-before-seen patterns and fraud techniques through unsupervised learning, and they can anticipate the flow of the market and recognise early signs of misconduct. As financial regulators around the world move toward AI-driven oversight, the London Stock Exchange is entering an important transition point, where adopting advanced AI technologies will be essential for keeping its markets fair, secure and competitive on a global scale. This research examines how AI-driven algorithms can be strategically integrated into the LSE’s surveillance system to strengthen fraud detection and market oversight between 2025 and 2030.

As of 2025 the London Stock Exchange has begun integrating AI through its AI-powered Surveillance Guide, which uses Amazon Bedrock and Anthropic’s Claude models to automate news sensitivity analysis and support analysts in triaging potential market abuse, such as spoofing, across MiFID asset classes. Partnerships with Databricks enable AI-ready data integration for real-time risk monitoring, anomaly detection and compliance via Agent Bricks frameworks. While these systems represent a major step forward from traditional rule-based alerts, they still face challenges with scalability and occasional false positives. Over the next five years, improvements will become increasingly important as trading volumes continue to rise, new instruments like tokenised assets are introduced, high-frequency strategies accelerate market activity and AI-driven threats such as deepfakes emerge. Proactive, adaptive surveillance will therefore be essential to maintain market integrity and ensure compliance with FCA regulations.

Aim of the Research

The aim of this research is to examine how AI algorithms can be integrated into the London Stock Exchange market surveillance system to enhance fraud detection and market oversight between 2025 and 2030. The study investigates how AI can improve detection speed, accuracy and adaptability in complex trading environments, including the identification of unusual trading patterns, forecasting of potential manipulation and reduction of false positives for human analysts. It also evaluates AI’s ability to support real-time, large-scale monitoring during high-frequency trading, where traditional systems often miss subtle anomalies. Additionally, the research considers key adoption factors such as transparency, explainability, regulatory compliance and integration with existing LSE infrastructure to determine practical and sustainable implementation strategies. 

This research aims to compare traditional surveillance methods with AI-driven fraud detection on the London Stock Exchange, focusing on their ability to detect and prevent fraudulent activity. Key areas of analysis include:

  • Detection accuracy: How effectively each approach identifies fraudulent trades, including true positives, false positives and false negatives.

  • Detection speed and efficiency: The time taken to flag suspicious activity and the reduction in analyst workload.

  • Operational and cost implications: The impact on processing large volumes of trades, resource use and overall cost savings.

  • Adaptability to emerging fraud: The ability of AI systems to learn from new data and anticipate novel fraud patterns compared to static rule-based methods.

  • Current LSE tools and future potential: Limitations of existing systems and the expected improvements and adoption of AI-driven surveillance between 2025 and 2030.

Ultimately, the study aims to provide evidence-based recommendations for deploying AI-driven surveillance tools responsibly, helping the LSE maintain markets that are fair, secure and globally competitive over the next five years.

Mechanisms of AI-Driven Surveillance

AI-driven algorithms represent a significant shift from the traditional surveillance methods currently used at the LSE. Unlike manual systems, which rely on static rules and human review, AI uses advanced machine learning processes to analyse vast volumes of trading data in real time, enabling it to identify subtle anomalies, adapt to emerging fraud techniques and even anticipate potentially fraudulent activity before it happens. The sections that follow present and analyse the data gathered for this study.

Past Use of Fraud Detection Methods at the London Stock Exchange

Prior to AI adoption, London Stock Exchange Group relied on rule-based surveillance systems which generated automated alerts for suspicious trading across MiFID asset classes, followed by manual triage by analysts reviewing news, sentiment and patterns. These older methods suffered from several weaknesses: they produced too many false alarms, could not keep up with the LSE’s massive trading volumes (over £1 trillion a year) and relied heavily on slow, manual work from analysts who had to piece together evidence for abuses such as insider trading or spoofing.

Methods

The AI framework for fraud detection uses language-model-driven anomaly detection to identify both established and emerging forms of market abuse through language patterns. Encoder language models such as BERT, which learn contextual representations of text, play a central role by extracting features from unstructured data sources. This capability allows the system to detect anomalous trading behaviours both through quantitative irregularities and through linguistic cues embedded in associated metadata. The overarching objective is a detection pipeline that learns from historical and real-time trading data, adapts to evolving fraud strategies and systematically highlights suspicious activities.
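The feature-extraction step described above can be sketched in a few lines. As a lightweight, dependency-free stand-in for a BERT-style encoder, the sketch below uses TF-IDF vectorisation: both approaches map free text to fixed-length numeric vectors that downstream detectors can consume. The broker-note snippets are hypothetical examples, not real LSE data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical broker-note snippets; in practice these would come from
# authorised unstructured feeds (broker notes, trade commentary, news).
notes = [
    "client requested unusually large pre-announcement position",
    "routine rebalance of index tracker portfolio",
    "rapid order placement and cancellation around the open",
]

# TF-IDF is used here as a simple stand-in for a BERT encoder: it turns
# each document into a fixed-length numeric feature vector.
vectoriser = TfidfVectorizer(ngram_range=(1, 2))
features = vectoriser.fit_transform(notes)

print(features.shape)  # one row of textual features per document
```

In a production pipeline the TF-IDF step would be replaced by contextual embeddings from the encoder model, but the interface to the anomaly detectors (one numeric vector per document) is the same.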

The foundation of the framework is a composite dataset that draws on a wide range of structured market variables, including trade executions, order submissions and cancellations, timestamps, participant identifiers, price and volume trajectories, and order book characteristics. These datasets are augmented with metadata such as geographic information, IP addresses, device fingerprints and temporal interaction patterns. When authorised, unstructured sources such as broker notes, trade commentary, public disclosures and external information streams are incorporated using natural language processing methodologies. This multimodal data environment enables the system to detect market sentiment or irregular textual patterns that may correspond to insider trading or other manipulative practices.
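From structured order-event data of this kind, behavioural features can be derived per participant. The sketch below, using an illustrative event log (the field names are assumptions, not the LSE schema), computes two classic manipulation signals mentioned later in this framework: the order-to-trade ratio and the cancellation rate.

```python
import pandas as pd

# Hypothetical order-event log; field names are illustrative only.
events = pd.DataFrame({
    "participant": ["A", "A", "A", "B", "B"],
    "event":       ["order", "cancel", "trade", "order", "trade"],
    "volume":      [500, 500, 200, 1000, 1000],
})

# Count events of each type per participant, then derive behavioural
# features: high order-to-trade ratios and cancellation rates are
# classic indicators of spoofing and layering.
counts = events.pivot_table(index="participant", columns="event",
                            values="volume", aggfunc="count", fill_value=0)
counts["order_to_trade"] = counts["order"] / counts["trade"]
counts["cancel_rate"] = counts["cancel"] / counts["order"]

print(counts[["order_to_trade", "cancel_rate"]])
```

Features like these form the structured half of the multimodal input, alongside the textual features extracted from broker notes and news.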

Supervised learning techniques are deployed using previously confirmed instances of illicit behaviours, including insider trading, spoofing, layering, wash trading and anomalous order-to-trade ratios. Models such as decision trees, random forests and gradient boosting algorithms (e.g., XGBoost, LightGBM) are particularly suited to these tasks because boosting iteratively corrects the errors of earlier models and tree ensembles handle heterogeneous tabular features well. Given the inherent class imbalance in financial fraud datasets, the framework incorporates class re-weighting and resampling techniques, enabling better detection of sophisticated fraud that might slip past traditional methods.
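A minimal sketch of this supervised step, on synthetic data standing in for labelled trade records: fraud cases are rare (10 in 1,000), so the classifier uses class re-weighting so the rare class is not simply ignored. The data and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic, heavily imbalanced data: 990 normal trades, 10 confirmed
# abuse cases shifted in feature space (class imbalance is the norm in
# fraud datasets, as noted above).
X_normal = rng.normal(0, 1, size=(990, 4))
X_fraud = rng.normal(3, 1, size=(10, 4))
X = np.vstack([X_normal, X_fraud])
y = np.array([0] * 990 + [1] * 10)

# class_weight="balanced" re-weights the rare fraud class so the forest
# does not learn to predict "normal" for everything.
clf = RandomForestClassifier(n_estimators=100, class_weight="balanced",
                             random_state=0).fit(X, y)

recall = clf.predict(X_fraud).mean()  # fraction of fraud cases recovered
print(recall)
```

In practice a gradient-boosted model (XGBoost or LightGBM) with a tuned `scale_pos_weight` would typically replace the random forest, and evaluation would be on held-out data rather than the training set shown here.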

As many novel fraud schemes lack historical labels, supervised methods alone are insufficient for comprehensive detection. The framework, therefore, integrates unsupervised and hybrid learning techniques capable of identifying deviations from established market norms. Clustering algorithms such as k-means, DBSCAN and hierarchical clustering are employed to reveal transaction clusters that diverge significantly from typical trading behaviours. In parallel, anomaly detection methods, including isolation forests, one-class support vector machines and autoencoder-based models, monitor live data streams for irregularities in timing, order sequencing, order-to-trade ratios or activity surges from infrequent market participants. Autoencoders trained on normal trading patterns are particularly effective in flagging sequences with atypical latent representations.
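The unsupervised side can be sketched with an isolation forest, one of the anomaly detectors named above. No labels are required: the model learns what typical activity looks like and flags points that are easy to isolate by random splits. The data below is synthetic and illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Unlabelled trading features: mostly normal behaviour plus a few
# injected outliers (e.g. extreme order-to-trade ratios).
normal = rng.normal(0, 1, size=(500, 3))
outliers = rng.normal(6, 0.5, size=(5, 3))
X = np.vstack([normal, outliers])

# Isolation forests isolate points with random splits; rare, distant
# observations are isolated quickly and flagged as anomalies (-1).
detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = detector.predict(X)

print((flags[-5:] == -1).sum(), "of 5 injected outliers flagged")
```

An autoencoder-based detector would play the same role for sequential order-flow data, flagging sequences whose latent representations diverge from those learned on normal trading.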

The combined use of supervised classifiers, unsupervised anomaly detection mechanisms and LLM-driven contextual analysis yields a hybrid architecture that enhances the system’s capacity to detect both known and emerging forms of financial misconduct. This integrated approach supports real-time risk monitoring, strengthens adaptive model performance in dynamic market environments and provides a scalable foundation for advanced fraud analytics in the London Stock Exchange.
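The hybrid architecture ultimately has to fuse the two signal families into a single triage score for analysts. A minimal sketch: blend a supervised fraud probability with a normalised unsupervised anomaly score and rank alerts. The 0.6/0.4 weighting and the scores themselves are illustrative assumptions, not tuned values.

```python
import numpy as np

# Illustrative outputs for four alerts: a supervised fraud probability
# and an unsupervised anomaly score, both scaled to [0, 1].
supervised_prob = np.array([0.05, 0.90, 0.10, 0.30])
anomaly_score = np.array([0.10, 0.20, 0.95, 0.40])

# Blend the two detector families into one triage score (weights are
# assumed for illustration) and rank alerts highest-risk first.
triage = 0.6 * supervised_prob + 0.4 * anomaly_score
order = np.argsort(triage)[::-1]

print(order.tolist())  # review queue: highest-risk alert first
```

Note that alert 1 (high supervised probability) and alert 2 (high anomaly score) both rise to the top of the queue, which is the point of the hybrid design: each detector family catches cases the other would rank low.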

Comparative Analysis: AI Versus Traditional Surveillance

The data shows that AI-driven surveillance systems outperform traditional rule-based and manual monitoring in almost every key area (Maheshwari & Chatnani, 2023; Bagwe, 2024). AI fraud detection achieves true positive rates of 92.7-96%, compared to about 78.3% for traditional systems, while cutting false positives down to 2-3% – a reduction of 42.7-68.2% (Bagwe, 2024). False negatives are also lower thanks to AI’s ability to learn and adapt, overcoming one of the main weaknesses of rigid, rule-based systems that often miss new fraud patterns (Neontri, n.d.). 

Figure 1 visualises this effect by comparing false-positive rates before and after AI implementation: the legacy system produces more than half of its alerts incorrectly (51.4%), whereas the AI‑enhanced system reduces this to 11.9% – a 76.8% drop in false positives (Bagwe, 2024).
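The cited relative reduction can be verified from the two rates directly; a drop from 51.4% to 11.9% of alerts is a 76.8% relative reduction:

```python
# Relative reduction in false positives: (before - after) / before.
before, after = 51.4, 11.9
relative_drop = (before - after) / before * 100
print(round(relative_drop, 1))  # 76.8
```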

Figure 1. False‑positive rates before and after AI implementation, recreated from data reported in IJSA.

In terms of speed, AI can process millions of trades per second in real time, with an average response of 76.4 ms (and 99% of cases under 124 ms), reducing detection times from days to under five minutes (Ibitola, 2025; Gopalakirshnan, 2025a). Traditional systems are slower, often requiring hours or even days due to batch processing and manual reviews (Fraud Net, 2024).

Figure 2 shows this contrast in detection latency, with the legacy system taking 27.4 hours on average to identify fraud, compared with just 3.2 seconds for the AI‑enhanced system, representing an improvement of roughly four orders of magnitude in reaction time (Gopalakirshnan, 2025a).

Figure 2. Detection latency before and after AI implementation, recreated from data reported in IJSA.

AI also makes life easier for analysts by reducing alerts by 87-93%, saving roughly 115 minutes per analyst each day and allowing investigations to be completed up to 90% faster (Ibitola, 2025; Cognizant, 2022). Cost efficiency also improves, as AI can generate a 200-300% return on investment by lowering fraud losses, cutting operational costs and streamlining investigations (Gopalakirshnan, 2025b). Examples from exchanges like Nasdaq show that deep‑learning models can detect complex fraud patterns, such as spoofing, while drastically reducing daily alerts from around 1,000 to a much more manageable number (Hao, 2019).

At the London Stock Exchange, AI is already being used through the AI-powered Surveillance Guide, built with large-scale language and analytics models, which improves detection accuracy by analysing trades alongside news and market sentiment (Kellaway et al., 2025). Early results show better detection quality and efficiency, especially for complex and fast‑moving market behaviour that traditional systems struggle with, and the London Stock Exchange plans to expand this system with more data sources, fine‑tuned models and real-time validation alongside manual checks (Smith & Dimase, 2025; LSEG, 2025).

Implications, Challenges and the Future of AI at the LSE

AI-integrated programs make detecting fraud much more efficient within the London Stock Exchange. They present a much lower percentage of false positives, as well as an almost instantaneous detection of suspicious activity. This aligns with our results, which showed true positive rates of 92.7-96% for AI versus 78.3% for traditional systems, and false positives reduced from 51.4% to 11.9%.

Despite the benefits, the integration of AI also presents unique challenges, including the black box problem, which refers to a lack of explainability within the AI, leading to problems such as bias, loopholes, targeting and non-compliance. To address this, the LSE’s strategy must align with UK financial regulation, specifically guidance from the Financial Conduct Authority (FCA). The issues of bias and lack of explainability become critical regulatory challenges. The deployment of explainable AI (XAI) techniques is not optional; it is required to demonstrate accountability and fairness. This demands that the LSE go beyond the high accuracy rates found in our results and implement systems that allow analysts to transition into AI Risk Managers, responsible for understanding and validating the machine’s outputs and correcting for potential failures, such as when model performance degrades over time.

Furthermore, the high speed and reliance on complex ML models introduce new systemic risks. The financial gains realised by the LSE may encourage other exchanges to adopt similar solutions. If multiple markets rely on similar model architectures, a single flaw or vulnerability in one system could be exploited across the globe through a correlated failure, a risk known as algorithmic synchronicity. Therefore, the LSE must proactively focus on cybersecurity safeguards that protect the model’s integrity from adversarial attacks (like data poisoning), ensuring that the system designed for stability does not become a new source of risk.

As the LSE progresses toward full AI integration by 2030, future research should move beyond proving efficiency to examining the long-term governance of these systems. Specific studies could focus on validating the effectiveness of XAI techniques in regulatory appeals, assessing the impact of unified interoperable surveillance standards on global financial stability and defining the optimal training pathways for the new AI Risk Manager role. This comprehensive approach ensures that the powerful enhancements offered by AI are implemented responsibly and sustainably.

Strategic Roadmap: 2025-2030

Over 2025-2030, the London Stock Exchange Group will advance by integrating additional data sources, including financials, market feeds and NLP analysis of news and social signals, while fine-tuning Anthropic’s Claude models on Amazon Bedrock. Coverage will expand to complex abuse types, such as layering and wash trades, with live deployment complemented by manual validation to ensure scalability amid high-frequency trading surges and the growth of tokenised assets.

Analysts will work closely with AI through hybrid workflows, using prioritised dashboards to validate flagged trades and make decisions, combining AI’s speed with human judgement for nuanced cases like spoofing. AI will increasingly support pre-trade surveillance, processing millions of events per second to detect anomalies in real time and prevent fraud escalation, while continuous learning techniques like autoencoders enable the system to adapt to emerging threats, including deepfakes and novel schemes, in response to the projected 21% rise in AI-driven fraud. Partnerships with Databricks will enhance predictive analytics across markets and graph neural networks will identify order anomalies more efficiently.

To maintain trust and compliance, explainable AI audits will monitor black-box risks, bias and cybersecurity vulnerabilities, aligned with FCA regulations. These improvements are expected to boost detection accuracy to over 98%, reduce false positives to below 5% and handle trading volumes exceeding £2 trillion, strengthening LSE’s global reputation and keeping it competitive with exchanges like Nasdaq while promoting proactive, adaptive market oversight.

Conclusion

Artificial intelligence is positioned to become a central component of fraud detection and market surveillance on the London Stock Exchange between 2025 and 2030. The evidence demonstrates that AI-driven systems significantly outperform traditional monitoring approaches by analysing vast quantities of real-time trading data, identifying subtle anomalies and reducing false positives that previously slowed investigations and overwhelmed analysts. With faster detection speeds, greater adaptability to newly emerging fraud techniques and improved predictive capabilities, AI enables a more proactive approach to safeguarding market integrity rather than reacting only after manipulation has occurred. However, the research also shows that full reliance on AI is neither realistic nor responsible. The black-box nature of complex models, the risk of algorithmic bias and the possibility of over-dependence highlight the ongoing need for human expertise in oversight and verification. For the London Stock Exchange to fully benefit from AI-enhanced surveillance, the most effective path forward lies in hybrid systems that combine advanced machine-learning models with skilled human analysts. Such collaboration supports both efficiency and ethical governance. Overall, AI represents a major opportunity for creating a more secure and transparent trading environment. If implemented responsibly – with a focus on explainability, regulatory compliance and continuous refinement – AI will play a crucial role in protecting investors, reducing financial crime and strengthening the long-term stability and competitiveness of the London Stock Exchange and global financial markets.

Bibliography

Bagwe, C. (2024) Fraud Detection in Financial Institutions: AI VS. Traditional Methods, International Journal of Scientific Research & Engineering Trends, 10(6), pp. 3274-3279.

Bank of England (2025) Financial Stability in Focus: Artificial intelligence in the financial system, Bank of England [online]. <https://www.bankofengland.co.uk/financial-stability-in-focus/2025/april-2025>

Cognizant (n.d.) AI saves $20M in fraud losses, Cognizant [online]. <https://www.cognizant.com/us/en/case-studies/ai-machine-learning-fraud-detection>

Fraud Net (2024) Fraud Detection Using Machine Learning vs. Rules-Based Systems, Fraud Net [online]. <https://www.fraud.net/resources/fraud-detection-using-machine-learning-vs-rules-based-systems#the-basics-rules-based-systems-vs-machine-learning>

Gopalakirshnan, S. (2025a) Fraud Detection Agent vs Traditional Risk Monitoring: What’s More Effective in 2025? VLink [online]. <https://vlinkinfo.com/blog/fraud-detection-agent-vs-traditional-risk-monitoring>

Gopalakirshnan, S. (2025b) Cost of Developing an AI-Based Fraud Detection Agent for Your Business, VLink [online]. <https://vlinkinfo.com/blog/cost-of-ai-fraud-detection-agent>

Hao, K. (2019) AI will now watch for fraudsters on the world’s largest stock exchange, MIT Technology Review [online]. <https://www.technologyreview.com/2019/11/07/65063/nasdaq-ai-will-now-watch-for-fraudsters-on-the-worlds-largest-stock-exchange/>

Holland, D. (2025) Composing the future: Balancing innovation and human expertise in financial markets, FCA [online]. <https://www.fca.org.uk/news/speeches/composing-future-balancing-innovation-and-human-expertise-financial-markets>

Ibitola, J. (2025) How AI Reduces Operational Strain and Cuts Costs in AML Compliance Workflows, Flagright [online]. <https://www.flagright.com/post/how-ai-reduces-strain-and-costs-in-aml-compliance-workflows>

Kellaway, C., Withanawasam, R. & Chester, R. (2025) How London Stock Exchange Group is detecting market abuse with their AI-powered Surveillance Guide on Amazon Bedrock, HKU School of Professional and Continuing Education [online]. <https://aihub.hkuspace.hku.hk/2025/09/11/how-london-stock-exchange-group-is-detecting-market-abuse-with-their-ai-powered-surveillance-guide-on-amazon-bedrock/>

Keyrus (n.d.) AI and Fraud Prevention: How Financial Institutions Can Stay Ahead in 2025, Keyrus [online]. <https://keyrus.com/uk/en/insights/ai-and-fraud-prevention-how-financial-institutions-can-stay-ahead-in-2025>

LSEG (2025) London Stock Exchange Group plc: Q3 2025 Trading Update, LSEG [online]. <https://www.lseg.com/en/media-centre/press-releases/2025/london-stock-exchange-group-plc-q3-2025-trading-update>

Maheshwari, S. & Chatnani, N.N. (2023) Applications of artificial intelligence and machine learning-based supervisory technology in financial markets surveillance: A review of literature, Journal of Securities Operations & Custody, 16(2), pp. 101-120.

Neontri (n.d.) Neontri [online]. <https://neontri.com/>

Orelaja, A. & Oluwabusola, A.V. (2025) AI-Driven Fraud Detection in Financial Markets: Predictive Modeling for Risk Mitigation and Compliance Enhancement, International Journal of Innovative Science and Research Technology, 10(5), pp. 4509-4520.

Smith, L. & Dimase, V. (2025) The era of surveillance: Driving change through innovation, LSEG [online]. <https://www.lseg.com/en/insights/fx/the-era-of-surveillance-driving-change-through-innovation>