Artificial Intelligence (AI) and robotic process automation (RPA) are transforming the retail sector, allowing retailers to improve their supply chain, achieve business process efficiencies, and evolve and personalise the customer journey.
With technology now being used to analyse and process data, key elements of the standard contractual framework must be rethought to ensure the success of these strategic digital transformation programmes, creating the retailers of the future. This article will explore key factors retailers should consider when contracting for these evolving technologies.
Intellectual property rights and ownership of data
One of the main benefits of AI is that it allows retailers to derive hugely valuable data from large raw data sets to improve personalisation and the customer experience. The value of this data increases significantly when resold, so ensuring it cannot be sold on to the broader market by the software provider is critical. Retailers should ensure that data is returned to them on termination or expiry and is not retained by the platform provider. The data must be provided in a readable form that allows its reuse by the retailer with another tool or system, so the value of the investment is not lost. Furthermore, the contract needs to address not just the data set but also the use of derivative data, which can be equally valuable and which should be protected from the customer's point of view.
Similarly, the system 'learnings' (i.e. what the company's data 'teaches' the provider's algorithm) may have more market value than traditional customer modifications, as they improve the provider's platform, making it more marketable to future customers. Even if the derived data itself is not resold by the platform provider, the learnings mean the time (and therefore cost) required to create derived data from similar raw data sets would be significantly reduced. 'Learnings' are therefore of considerable interest and use to the provider and to competitors. Any agreement should contain provisions addressing all the different output types, considering exclusivity of use (if technologically feasible with that platform) or, failing that, the commercial benefits that should accrue to the retailer.
Any AI system is, by definition, based on the processing of a large volume of data. When used to improve personalisation and the customer experience, for example, non-personal data may become attributable to a particular person (so falling under the European General Data Protection Regulation (GDPR)) as a result of the pattern-matching techniques used by the system. This therefore needs to be anticipated upfront, ensuring that the relevant consents are obtained.
Additionally, if the solution evolves to make its own decisions about the treatment of the data, it will be vital to consider how this can be managed within the data controller and data processor framework. Article 22 of the GDPR provides that a data subject has "the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her," so technical controls and human oversight remain key considerations for any data controller, by reference to the nature of the decisions being made.
Bias and errors in data
If value and business improvements are being derived from data, care should be taken at each stage of the contracting process. Where has the data come from? Is there a risk of polluted data and bias through previous use and if so, how should the company’s data set and the provider’s platform be combined to ensure this is addressed?
Data set biases and inaccuracies will be reflected in the outputs of a trained system, so it is in all users' interests to ensure the quality of the data fed into the system. Working with the platform provider to understand the source of previously used data, and forensically assessing the quality of your own data, are both key. As decision-making becomes increasingly opaque, retailers should also consider strong audit and transparency provisions, to ensure there is clear traceability and accountability of data use and 'learnings', alongside a mechanism for recourse.
Impact and treatment of service failures
It is not unreasonable for retailers to seek corresponding improvements in service levels where RPA and AI solutions are deployed, due to higher frequency of tasks undertaken and likely improvements in speed and accuracy.
That said, any incidents or failures that do arise are, as a result, much more likely to be perpetuated and therefore catastrophic, with a significant impact on the retailer's business. A price adjustment by way of a service credit is unlikely to be adequate. Historic service level regimes focused on timeliness and quality will need to be re-examined for each deployment to determine whether they remain appropriate. Given the risk that mistakes are perpetuated, human supervision of the technology's outputs will remain beneficial. Similarly, the impact of non-availability of the system is likely to be compounded by the absence of a manual work-around.
Retailers should approach this in the same way as any business-critical technology platform: thinking about the impact of downtime in both peak and non-peak trading periods, and making sure that the business impact of service failures is catered for within the contract. There is a range of contracting positions currently seen in the market, as both customers and suppliers assess new risk profiles. Retailers should consider their contracting risk profiles carefully, assessing how the contract would respond to a significant outage impacting trading, or a significant loss of derived data and 'learnings', and consider liability provisions through this lens.
This should also be considered alongside the traceability of decision-making and errors in data as outlined above. Without transparency and audit rights, the lines will become increasingly blurred as to whether something is caused by the data set or the system itself.
Exit and avoiding supplier lock-in
Key for retailers is the value of being able to continue using the derived data set. From the outset, it is important to consider what would happen if the relationship breaks down, particularly as an AI or RPA deployment is often part of a larger technology services engagement. Ideally, a retailer should negotiate the ability to continue to licence the AI/RPA tools on a standalone basis after the original term, so the commercial and operational basis on which this can be drawn down is clear. The data should be returned at the end of the contract in a readable form so that the retailer can use it with another tool or system, retaining the value of the investment. Details of the business rules applied to the system in its deployment for the retailer should also be included.
So what next?
Suppliers and customers alike are still exploring their contracting positions on AI and RPA deployments. In a sector where the stakes are high, investment decisions must be based on clear cost-benefit analysis. AI and RPA have the potential to deliver significant cost savings, so it is prudent to target investment at deployments that can really transform a business. At the same time, it is important to be alive to the risks, paying close attention to how a contractual framework would respond to the different risks posed by AI and RPA technologies.
Chloe Forster is the legal director at DLA Piper.