An investigation published this week by threat intelligence company Recorded Future has uncovered a vast network of websites, operating under the umbrella name ‘CopyCop’, which has been churning out articles that use real news stories as a starting point before deploying AI prompts to reshape the narrative to suit the target audience.
Of course, it may well be argued that this is nothing new – all journalists are arguably biased to some extent, and certain publications are known to be more left or right leaning than others. However, the scale on which AI has now made this possible (and via an almost completely automated process, too) is genuinely staggering. CopyCop alone have reportedly posted almost 19,000 individual articles already using this method.
The process appears to rely on some form of ChatGPT (the product from Sam Altman’s hugely successful OpenAI, which gained mainstream attention last year and has grown exponentially since). The base version is free to use, and the underlying models can be integrated in a number of different ways into separate products which then run autonomously – in short, a very powerful tool for disseminating propaganda.
That AI is involved here is made especially clear by the way some of these newly released articles explicitly mention their origins in-text. For example, the following line appears in one of the articles online:
“It is important to note that this article is written with the context provided by the text prompt. It highlights the cynical tone towards the US government, NATO, and US politicians. It also emphasises the perception of Republicans, Trump, DeSantis, Russia, and RFK Jr as positive figures, while Democrats, Biden, the war in Ukraine, big corporations, and big pharma are portrayed negatively”
Recorded Future were also able to go ‘behind the curtain’ and locate the prompts used to create some of the articles – material which may well lead to prosecution.
As already mentioned, a huge number of articles have been generated here. Even more concerning, however, is how well they have performed online. These pieces have reportedly racked up millions of views, and some have even been shared officially via the social media accounts of Russian embassies. They have performed particularly well on platforms like Facebook, where ‘echo chambers’ of extremist views have become increasingly rife.
There is a huge number of legal problems here for aspiring legal professionals (solicitors, barristers, paralegals, etc.) to unpack ahead of their upcoming applications and interviews for such roles.
From an intellectual property (more specifically copyright) perspective, the use of existing articles in itself may be a problem. This has been a very common talking point over the last few months – if you ‘train’ an AI model on, or ask it to rewrite, a human-authored piece of work (where copyright will exist), does some form of reproduction of that information constitute a violation of intellectual property law?
Many courts around the world are currently grappling with this problem by trying to align outdated, crudely worded statutes with a rapidly developing, sophisticated industry which innately resists regulation. However, it has become clear through a number of court rulings so far that many jurisdictions are not willing to recognise AI as an author in and of itself.
This can most clearly be illustrated by the recent case of DABUS, an AI model credited with two inventions which its creator tried to patent in the name of the model itself. While South Africa’s IP office initially accepted the filing (the relevant legislation there dates from the 1970s and does not define the term ‘inventor’, which again suggests new statutes are needed), most other jurisdictions have since been hostile.
In the UK, the Intellectual Property Office rejected the filing because the inventor was not a ‘natural person’ as required (the European Patent Office later took broadly the same view). Law firms specialising in this area range from IP-focused names like Bird & Bird to Magic Circle outfits like A&O Shearman (based on Legal 500 rankings).
Furthermore, what data should these models be allowed to access in the first place? Data privacy is a profitable area for many corporate law firms (the Legal 500 ranks Hogan Lovells and Linklaters, among others, in its top tier). Although these CopyCop articles appear mostly to be generated from publicly available information (e.g. articles already posted online for everyone to see), there is still some debate to be had over whether the original authors could have foreseen that their work would be used for this purpose.
Could proceedings based on the disinformation itself also be on the horizon? In 2022, the UK Government announced it was prioritising foreign disinformation (especially on major political issues) as a topic to include under the new Online Safety Bill. Recent news articles have also discussed the new offence of ‘false communications’, which may be relevant here too (though that offence is likely better suited to online content designed to harm others – a possible but more remote effect of these CopyCop articles).
It is not just CopyCop who may be liable, however – recent UK legislation in particular has shifted attention to a more identifiable target: the platforms which host this information. Sites like Facebook need to be increasingly diligent about which content they allow through their filters (and which they flag up) in order to avoid liability when ‘fake news’ articles like those from CopyCop are shared widely on their platforms. These large tech companies often complain that this is an unreasonably onerous expense, and seek clarification from top lawyers on exactly what they will need to do in order to comply.