Modern telcos must approach AI in Business Support Systems with caution and care
By Damian Burnett, Sales Director at VAS-X | 20 January 2026
Artificial Intelligence (AI) has become a strategic enabler across all industries. Whether it’s apps that suggest content based on user behaviour, systems that automatically optimise service performance, or dashboards projecting revenue and churn, everyone is looking for ways to leverage AI within their products. And they want to do so as quickly as possible. But in the rush to automate, we run the risk of forgetting that not every problem is an AI problem and not every system should be left to algorithms.
Since the rise of ChatGPT, we’ve seen countless examples of AI’s potential. From generating fully functional websites and mobile apps in minutes to creating complex data analytics platforms with minimal human input, it’s clear that AI has the potential to accelerate development in ways that were once unimaginable. Some AI tools can even take a concept from idea to working prototype in a single day, automatically generating code, testing functionality, and producing documentation along the way. But when it comes to mission-critical systems, innovation without caution can have negative consequences.
Business Support Systems (BSS) are the backbone of customer-facing telecoms operations, managing everything from billing and subscriptions to service fulfilment, revenue management and product marketing. This means that BSS missteps don’t just inconvenience users; a single outage can cost millions and permanently damage a brand’s reputation.
And let’s face it, mistakes happen. Some high-profile examples include the UK Post Office’s Horizon scandal, which saw hundreds of people wrongfully accused of theft based on inaccurate data from an automated accounting system. More recently, Air Canada landed in hot water after a customer service chatbot promised a passenger a discount that wasn’t actually available. And an AI vibe-coding platform acquired by Wix fell victim to hackers when authentication vulnerabilities were exploited.
The problem with AI and BSS
Applying AI to BSS without oversight and strict controls can introduce a whole spectrum of risks. AI models may hallucinate and generate incorrect charges or credits, and a misconfiguration may cause outages or expose the operator to fraud. Because a BSS handles critical business, customer and financial data, using AI to build one also carries significant compliance and security risks. If sensitive customer data is mishandled by an AI model, or if AI makes decisions that violate telecom regulations, you can’t just throw your hands up in the air and say, “the AI did it”.
I’m not saying that AI should not be used to augment your development efforts, but the risks associated with having AI generate tens of thousands of lines of code that are put into commercial production should not be underestimated.
There are several ways to leverage AI to enhance BSS development: generating boilerplate code, performing inline code reviews and analysis, flagging anomalies for human review, or applying predictive analytics to anticipate issues before they occur.
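To make the “flag, don’t fix” idea concrete, here is a minimal Python sketch of how an anomaly check might sit alongside a billing run. The data model, field names, threshold and review-queue structure are illustrative assumptions rather than a description of any particular BSS, and a simple statistical rule stands in for whatever model actually does the detection. The key point is that the automated check only raises items for a human to review; it never adjusts a charge on its own.

```python
from dataclasses import dataclass
from statistics import mean, stdev

# Hypothetical invoice record; a real BSS data model will differ.
@dataclass
class Invoice:
    account_id: str
    amount: float

def flag_anomalous_invoices(invoices, history, z_threshold=3.0):
    """Flag invoices whose amount deviates sharply from the account's history.

    Returns a review queue for a human analyst; it never changes any charge.
    """
    review_queue = []
    for inv in invoices:
        past = history.get(inv.account_id, [])
        if len(past) < 3:
            continue  # not enough history to judge; leave to normal checks
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            continue
        z = abs(inv.amount - mu) / sigma
        if z >= z_threshold:
            review_queue.append({
                "account_id": inv.account_id,
                "amount": inv.amount,
                "reason": f"amount deviates {z:.1f} std devs from history",
            })
    return review_queue

# Flagged items go to a billing analyst, not back into the bill run.
history = {"ACC-001": [49.0, 51.0, 50.0, 52.0]}
invoices = [Invoice("ACC-001", 480.0), Invoice("ACC-001", 50.5)]
for item in flag_anomalous_invoices(invoices, history):
    print("Needs human review:", item)
```

The same pattern applies whether the detector is a hand-written rule, a statistical check like this one, or a machine-learned model: the output is a recommendation queue, and the decision to credit, rebill or escalate stays with a person.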
Ultimately, AI can automate repetitive tasks, cross-check your work, detect issues and make recommendations, but humans should make the final calls. At VAS-X, our philosophy is that BSS systems are far too critical to serve as AI playgrounds. Experimenting with AI should be done responsibly, with human oversight and strong governance. AI can make BSS development faster and more efficient, but it cannot replace the precision, accountability, and governance required to develop one of the telecom industry’s most business-critical systems.