Generative AI rules. Says who?

There's no denying Gen AI is a powerful tool in the right hands, but it also has the potential to be hazardous. So, how do we go about regulating it? 2LK Digital Design Director Jamie Manfield discusses possible avenues to regulation.

Written by Jamie Manfield.

Why regulate?

Many creatives have now explored generative AI in some capacity, whether out of curiosity or to deploy it within a live project. It can be a divisive subject, with some favouring the speed and agility of its ideas and its power to break through creative block.

Others are sceptical – feeling threatened by the technology and concerned it will devalue or replace their craft. It's a debate that's going nowhere fast.

Now the dust has settled after an industry-wide crash course in generative AI (Midjourney and ChatGPT, to be specific), people are beginning to form an opinion on whether they're for or against it.

Whatever your take, there's no denying it's a powerful tool with the ability to become more powerful as it evolves – faster, more accessible and potentially more hazardous. As such, it will likely require some kind of boundaries, controls or regulations to address areas such as legality, ethics, and professional and industrial use – to name a few.

Case in point: the internet can be a pretty scary, dark place, and it doesn't feel like there's much regulation around online activity. The net has been around for decades, but without a hard set of rules some very bad things are able to take place.

Currently, generative AI is wide open and at the start of its journey, producing some truly inspiring and beautiful creative – from images and audio to copy and moving image – but it has the ability to create negative things too. Take the recent Martin Lewis deepfake, in which generative AI recreated the money expert giving financial advice so accurately you'd be hard pushed to tell the AI's creation from the real guru.

Not only that, there's potential for political figures to be drawn into deepfakes, alongside many other concerns such as copyright and the reliability of data.

We’ve seen so many new generative AI apps being developed. The speed at which the technology is evolving is unprecedented, a result of our modern thirst for the new “shiny thing”.

So, how do we gain control and protect users? Where are the societal guidelines? How can we keep the benefits while simultaneously progressing the technology?

2LK has an internal AI council to discuss developments in and around AI – it helps us keep a finger on the pulse and stay informed about what's going on around us. The fact that AI is evolving so quickly makes it difficult to keep up, which makes the matter of regulation all the more pressing and, equally, fundamental.

With sizeable subjects such as ‘Ownership of creative’, ‘Data privacy’, ‘Transparency’, ‘Bias’, ‘Human oversight’, ‘Consent’, ‘Auditing and compliance’, ‘Research and innovation’ and ‘Enforcement’, there’s a lot to navigate.

So, we got our heads together to ponder how we feel some form of regulation could be initiated, manifested or carried out.

Possible avenues

  1. Could we see the first generation of crowdsourced regulation? Let the public decide the parameters of what is and isn’t acceptable, creating humanised boundaries and an unwritten set of dos and don’ts for generative AI.
    Pro: It’s created by users, for users.
    Con: Subjective and slow to develop.
  2. A recent article in The Drum explained that some creative agencies are taking matters into their own hands – drawing up in-house charters for the regulation of AI.
    Pro: Bespoke set of guidelines perfectly tailored to each business.
    Con: Lack of consistency across the industry.
  3. Industry bodies step up and fill the current gap in formal regulation with a set of industry-specific rules.
    Pro: Rules would be appropriate for the specific industry and governed by impartial topic experts who have the industry’s best interests at heart.
    Con: May only carry weight if endorsed by central government.
  4. Tech manufacturers, e.g. OpenAI, setting boundaries and parameters within the software to restrict certain negative uses. After all, the software is possibly more capable of detecting malicious content than humans.
    Pro: Easier to roll out and applicable to all users.
    Con: Could be manipulated for commercial gain and could restrict creative freedom. Lack of impartiality.
  5. Central government implements regulation.
    Pro: A single source of governance, enforceable by law, likely to receive funding and to communicate the rules more clearly.
    Con: Slow to implement (refer back to the internet example) and would have to lean on industry bodies.

What now?

There’s a fine line between control/protection and the stifling/restriction of creativity and we must tread this tightrope carefully. In an ideal world, all those with a vested interest in how the regulations play out would have a say in setting them. The exponential development of technology in recent years has had many amazing impacts on the world. This should be celebrated and encouraged, not hindered by the introduction of too many rules and regulations.

Although it addresses AI as a bigger topic (not just generative), in September 2021 the UK Government launched its National AI Strategy, an ambitious 10-year plan for the UK to remain a global AI superpower, which includes a section on ‘Governing AI effectively’. In July 2022 it published a follow-up paper, ‘Establishing a pro-innovation approach to regulating AI’, setting out its early thinking on how to regulate AI.

Following on from this, the Department for Science, Innovation and Technology produced a white paper in March 2023 proposing rules for general purpose AI.

Instead of giving responsibility for AI governance to a new single regulator, the UK government is urging existing regulators – such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority – to come up with independent approaches that suit the way AI is being used in their sectors. These regulators will be using existing laws rather than being given new powers.

Across the Channel, in June 2023, the European Parliament approved its position on the EU AI Act – a step towards the first formal regulation of AI in the West to become law.

At the end of August 2023, the UK’s House of Commons published a major report on the governance of AI (all AI, not just generative), following an examination of the opportunities and risks by MPs on the Science, Innovation and Technology Committee. The report went one step further than March’s white paper, arguing the UK should introduce an AI Bill to avoid falling behind regulators in the EU and US. The UK government has two months to respond to the report.


Let’s talk.

If you’d like to discuss supercharging your brand experiences, contact us to make the most of moments that matter.

More reading:

Seven ways AI is inspiring creatives and what tools to use.
Generative AI: Fad or future?
What did MWC 2023 tell us about the future impact of AI on brand experiences?