Many creatives have now explored generative AI in some capacity, whether out of curiosity or to deploy within a live project. It can be a divisive subject: some favour the speed and agility of its ideas and its power to break through creative block, while others are sceptical – feeling threatened by the technology and concerned it will devalue or replace their craft. It's a debate that's going nowhere fast.
Whatever your take, there's no denying it's a powerful tool – one set to become more powerful still as it evolves: faster, more accessible and potentially hazardous. As such, it will likely require some form of boundaries, controls or regulation to address areas such as legality, ethics, and professional and industrial use – to name a few.
Case in point: the internet can be a pretty scary, dark place, and there isn't much regulation around online activity. The net has been around for decades, but without a hard set of rules some very bad things are able to take place.
Currently, generative AI is wide open and at the start of its journey, producing some truly inspiring and beautiful creative – from images to audio, copy and moving image. But it can create negative things too. Take the recent Martin Lewis deepfake, in which generative AI recreated the money expert giving financial advice so accurately you'd be hard pushed to tell the difference between the AI's creation and the real guru.
Not only that, there's potential for political figures to be drawn into deepfakes, alongside many other concerns such as copyright and the reliability of data.
We've seen so many new generative AI apps being developed. The speed at which the technology is evolving is unprecedented – a result of our modern thirst for the next "shiny thing".
So, how do we gain control and protect users? Where are the societal guidelines? How can we keep the benefits while simultaneously progressing the technology?
2LK has an internal AI council to discuss developments in and around AI – it keeps our finger on the pulse and keeps us informed about what's going on around us. The fact AI is evolving so quickly makes it difficult to keep up, which makes the matter of regulation all the more pressing and, equally, fundamental.
With sizable subjects such as ‘Ownership of creative’, ‘Data privacy’, ‘Transparency’, ‘Bias’, ‘Human oversight’, ‘Consent’, ‘Auditing and compliance’, ‘Research and innovation’ and ‘Enforcement’, there’s a lot to navigate.
So, we got our heads together to ponder how we feel some form of regulation could be initiated, manifested or carried out.
There's a fine line between control and protection on one side and the stifling of creativity on the other, and we must tread this tightrope carefully. In an ideal world, all those with a vested interest in how the regulations play out would have a say in setting them. The exponential development of technology in recent years has had many amazing impacts on the world. This should be celebrated and encouraged, not hindered by the introduction of too many rules and regulations.
Although it addresses AI as a whole (not just generative AI), in September 2021 the UK Government launched its National AI Strategy – an ambitious 10-year plan for the UK to remain a global AI superpower, which includes a section on 'Governing AI effectively'. In July 2022 it published a follow-up paper, 'Establishing a pro-innovation approach to regulating AI', setting out its early thinking on how to regulate AI.
Following on from this, the Department for Science, Innovation and Technology produced a white paper in March 2023 proposing rules for general-purpose AI.
Instead of giving responsibility for AI governance to a single new regulator, the UK government is urging existing regulators – such as the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority – to come up with independent approaches that suit the way AI is being used in their sectors. These regulators will use existing laws rather than be given new powers.
Across the Channel, in June 2023, the European Parliament approved the EU AI Act – a step towards the first formal regulation of AI in the West becoming law.
At the end of August 2023, the UK's House of Commons published a major report on the governance of AI (all of it, not just generative), following an examination of the opportunities and risks by MPs on the Science, Innovation and Technology Committee. The report went one step further than the March white paper, arguing that the UK should introduce an AI Bill to avoid falling behind regulators in the EU and US. The UK government has two months to respond to the report.
If you’d like to discuss supercharging your brand experiences, contact us to make the most of moments that matter.