Robotics and artificial intelligence are hot topics right now. But there are some things to take into consideration before jumping into your AI or machine learning project. Pagero’s Bengt Nilsson, CEO, and Esin Saribatir, VP Marketing, explain all you need to know about the importance of data accuracy and why not all smart networks are built the same.
Hot topics with a key limitation
Robotic process automation (RPA) and artificial intelligence (AI), including machine learning (ML), are some of the hottest topics at the moment, with claims that they will shape the future of everything from disease research and tackling social injustice to pioneering innovation.
The potential for each of these technologies is great – and even greater when they are combined and operated together. But there is a key limitation that many executives overlook when rushing to incorporate these techniques into their business operations: the value of these techniques is dependent on, and fundamentally limited by, the accuracy of the data with which they operate.
RPA, AI and ML come with digitalisation
RPA, AI and ML are dependent on digitalisation. Only with digital operations can businesses create the data needed to capitalise on each of these tools.
While businesses have been driving the digital revolution for a number of years, governments are now increasingly leading digital change through regulation, as improvements in technology and tools and broader adoption highlight its value in reducing the grey economy and realising tax revenue.
However, the digital revolution is an imperfect beast. Many early technologies and tools came with challenges associated with data accuracy. Multiple touch points and the need for human intervention and handling meant that while operations are increasingly digital, data accuracy has typically been poor.
“Poor data quality costs businesses an exorbitant and unacceptable amount on a daily basis.”
Today, many executives identify poor data quality as the biggest risk and greatest challenge in their business operations. And while estimates vary, it is safe to say that poor data quality costs businesses an exorbitant and unacceptable amount on a daily basis.
Many working in digitalisation consider data quality to be the single most significant limiting factor for operational success as businesses become more digitalised and automated. To tackle this challenge, a new process – and industry – was born: data cleansing.
Data cleansing – a fix, not a cure
Data cleansing – the process of detecting and correcting corrupt or inaccurate records – has emerged as a fundamental process within digital operations. We need only consider the growing market associated with data quality tools to appreciate the importance and value that businesses are placing on ensuring the quality of data in their digital operations.
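To make the idea concrete, here is a minimal Python sketch of a cleansing pass over invoice-like records. The field names (`supplier_id`, `amount`, `currency`) and the accepted currency list are illustrative assumptions, not taken from any particular system – the point is only that some defects can be corrected mechanically, while others can merely be detected and set aside for review.

```python
def cleanse(records):
    """Detect and correct obviously corrupt or inaccurate records.

    Returns (clean, rejected): records that could be normalised,
    and records that need human review.
    """
    clean, rejected = [], []
    for rec in records:
        r = dict(rec)  # work on a copy; leave the source untouched
        # Correct what can be corrected mechanically.
        if isinstance(r.get("currency"), str):
            r["currency"] = r["currency"].strip().upper()
        if isinstance(r.get("supplier_id"), str):
            r["supplier_id"] = r["supplier_id"].strip()
        # Detect what cannot: missing fields or impossible values.
        if not r.get("supplier_id") or not isinstance(r.get("amount"), (int, float)):
            rejected.append(r)
        elif r["amount"] < 0 or r.get("currency") not in {"EUR", "SEK", "USD", "GBP"}:
            rejected.append(r)
        else:
            clean.append(r)
    return clean, rejected

records = [
    {"supplier_id": " S-001 ", "amount": 120.0, "currency": "eur"},   # fixable
    {"supplier_id": "", "amount": 80.0, "currency": "SEK"},           # missing ID
    {"supplier_id": "S-002", "amount": -5.0, "currency": "USD"},      # impossible value
]
clean, rejected = cleanse(records)
print(len(clean), len(rejected))  # → 1 2
```

Note that the rejected records still require manual handling downstream – which is exactly the imperfection discussed next.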
However, while data cleansing does help businesses improve their digital operations, it is an imperfect solution.
As businesses move more and more towards multiple, virtually integrated systems, data quality often comes at the cost of increased response time and reduced efficiency. Accordingly, many businesses opt – in the best case – for iterative data cleansing processes, which means inaccurate records are often actioned before they have been validated, resulting in large error-handling costs. These costs tend to be underestimated, as errors are carried downstream and can be difficult to identify, track and correct across multiple systems and operations.
In the worst case – and unfortunately what we most often see – operators export data to stand-alone tools such as Excel in order to process it manually. Once processed, the data may be imported back into the ERP, but more often it is quarantined for the sole use of a single business unit – or user – limiting its value for decision-making purposes.
This manual and time-consuming process largely negates the efficiencies that digitalisation offers businesses, and adds considerable potential for human error. Thankfully, as technology continues to advance, we are now moving to an era where post-activity data cleansing is becoming a thing of the past, surpassed by the benefits offered by smart networks.
Smart networks – a new era
With improvements in technology, businesses are now able to leverage the power of smart networks for their operations. These networks have the potential to, in essence, act as gatekeepers to the systems and processes that businesses may be using. By validating data through pre-determined business rules before data enters the system and is actioned, smart networks cut down response time and increase efficiency, in effect rendering data cleansing redundant.
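The gatekeeper idea can be sketched in a few lines of Python. The rule names, document fields and buyer list below are hypothetical – the point is that a document failing any pre-determined business rule never reaches the downstream system, so it is never actioned and never needs cleansing after the fact.

```python
# Pre-determined business rules, each a (name, predicate) pair.
RULES = [
    ("has_invoice_number", lambda d: bool(d.get("invoice_number"))),
    ("has_positive_total", lambda d: isinstance(d.get("total"), (int, float)) and d["total"] > 0),
    ("has_known_buyer",    lambda d: d.get("buyer_id") in {"B-100", "B-200"}),
]

def gatekeep(document):
    """Return (accepted, failures) for a document at the network edge.

    A document with any failed rule is rejected before it enters the
    receiving system, so no error-handling cost is incurred downstream.
    """
    failures = [name for name, rule in RULES if not rule(document)]
    return (len(failures) == 0, failures)

ok, why = gatekeep({"invoice_number": "INV-42", "total": 99.5, "buyer_id": "B-100"})
print(ok, why)  # → True []
ok, why = gatekeep({"invoice_number": "", "total": 99.5, "buyer_id": "B-999"})
print(ok, why)  # → False ['has_invoice_number', 'has_known_buyer']
```

In a real network the rule set would be far richer – and, as the next paragraphs note, maintained by the network itself rather than by each business.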
A significant business advantage of smart networks is that the responsibility for tracking and ensuring local compliance shifts from the business to the network. For businesses operating across multiple markets with frequently changing legislative requirements, this alleviates a significant burden and cost.
Rather than a business needing to track changing regulations, a smart network is designed to validate outgoing data according to current local compliance needs and enrich where necessary, significantly reducing the burden to businesses of processing time and costs, including those of human error.
“With improvements in technology, businesses are able to leverage the power of smart networks for their operations.”
Aside from compliance benefits, perhaps most important is that smart networks ensure that data that has not been validated cannot enter the system, meaning it cannot be actioned and associated costs are not incurred.
While such an approach was costly and time-consuming in the past, current technology and tools, such as invoice matching, make these networks fast and efficient. They are also the basis for touchless operations – where identification, enrichment, validation and correction are increasingly automated and occur without human intervention. By limiting the need for human intervention, smart networks address another of the greatest vulnerabilities and limiting factors to data accuracy.
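As a hedged illustration of the kind of tool meant here, the sketch below shows a simple two-way invoice match: an incoming invoice is checked against its purchase order before approval, with no human in the loop. The PO data, field names and rounding tolerance are invented for the example.

```python
# Illustrative purchase-order register; in practice this would live
# in the buyer's ERP or in the network itself.
PURCHASE_ORDERS = {
    "PO-1001": {"supplier_id": "S-001", "total": 500.00},
    "PO-1002": {"supplier_id": "S-002", "total": 75.00},
}

def match_invoice(invoice, tolerance=0.01):
    """Two-way match: approve only if the referenced PO exists, the
    supplier matches, and the totals agree within a small tolerance."""
    po = PURCHASE_ORDERS.get(invoice.get("po_number"))
    if po is None:
        return "no-po"
    if po["supplier_id"] != invoice.get("supplier_id"):
        return "supplier-mismatch"
    if abs(po["total"] - invoice.get("total", 0.0)) > tolerance:
        return "amount-mismatch"
    return "approved"

print(match_invoice({"po_number": "PO-1001", "supplier_id": "S-001", "total": 500.00}))  # → approved
print(match_invoice({"po_number": "PO-1001", "supplier_id": "S-001", "total": 620.00}))  # → amount-mismatch
```

Only the mismatches need human attention; everything that matches flows straight through – the essence of touchless operations.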
With improved data accuracy, smart networks provide the basis for businesses to tap into the true value of robotic process automation, artificial intelligence and machine learning. But, the ability for a network to enable this is dependent on it being designed and built for open operations.
Why smart networks must be open
The debate around proprietary vs open technology is not a new one. Undoubtedly one could easily make a case for the value of either approach.
And when considering the business value of one versus the other, and taking profit maximisation into consideration, proprietary technology may in fact come out on top. But when it comes to data accuracy and smart networks, anything other than an open network will fail in the long-run.
It is difficult to argue against the increasing rate of technological change and the pace at which new techniques and tools are emerging. As these technologies and tools become more accessible, the business and operational landscape is becoming increasingly dynamic – and unpredictable.
For smart networks – and businesses – to survive this landscape, they must be designed and built for openness. That is, they must be designed so that they can easily connect and integrate with these technologies. But what does that look like from an operational perspective?
What it means to be ‘open’
To be truly open, smart networks must be designed to integrate seamlessly with new technologies and solutions as they emerge. This also brings real value to businesses, as they then have the full freedom of choice for their own operations, without the burden of needing to consider what systems or processes their business partners are using.
For businesses that are already connected to a network, smart networks must be built for interoperability. In the early days of digitalisation, large buyers had all the power and could dictate terms to their suppliers, including the business network of choice.
But what we are seeing now, with the dynamic and fast pace of technological change, is that being locked into a single business network often also restricts which new technologies a business can leverage for its operations.
“Smart networks must maintain a technology focus to ensure that the platform can keep pace with innovation.”
This is akin to burying one’s head in the sand: assuming that every technology and tool you will need for future business success is already at your disposal, and that remaining flexible and adaptable serves no purpose. Which brings us to innovation. Smart networks must maintain a technology focus to ensure that the platform can keep pace with innovation and changing requirements.
These requirements are not just technological, but also regulatory. With governments increasingly driving the global digital transformation, a network cannot risk having its value undermined or eroded simply because it cannot meet local compliance requirements.
Not all smart networks are equal
So before jumping into your AI or machine learning project, take a moment to consider: how accurate is your data, and can you leverage the power of smart, open networks for better decision-making, operations and performance? For many businesses, the answer to the second question is a resounding ‘yes’.
But before locking in your network of choice, keep in mind that not all smart networks are built the same – especially when it comes to the integration, interoperability and innovation necessary to be truly open.