Tim Paul

Talking about design and AI

06 Oct 2024

Way back in March 2024 I ran a 90-minute online workshop as part of Services Week 2024.

I wanted to create a space for designers to discuss the potential impact of AI on our roles, and on the people who run and use the services we design.

Let’s Talk About Design and AI

It was going to go on a government blog, but as that never panned out I'm publishing it here instead.

Warning - 6 months is a long time!

AI, and our attitudes towards it, have no doubt changed since March, so this write-up is best treated as a snapshot of a moment in time.

The workshop #

We were joined by about 25 people from 10 different organisations, including GDS, Defra, MoJ, MoD, DfE, NHS, Home Office, Local Government and even the BFI.

Participants were a mixture of designers, user researchers, delivery managers, content designers and developers.

Here's what we talked about...

How are we already using AI? #

First we shared examples of how we’re using AI personally or in our organisations.

Some people described how their departments were starting to use AI chatbots to reduce the burden on their call centres. Others talked about how they had used Large Language Models (LLMs) like ChatGPT to help them plan, write user stories, critique their work, storyboard videos or write code.

A few use cases were more experimental. For example, Kuba Bartwicki has tried using an LLM to generate HTML prototypes of pages on GOV.UK, and I've been using one to try and convert PDF forms into web forms.
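As a rough illustration of the PDF-to-web-form experiment, here's a minimal sketch of how you might assemble a prompt for an LLM from a list of extracted PDF field labels. The field names, prompt wording and class names are illustrative assumptions, not the actual prompts used in the experiments described above.

```python
# Illustrative sketch only: builds an LLM prompt asking for GOV.UK-style
# form markup from a list of field labels extracted from a PDF.
# The wording and example fields are assumptions, not a real prompt
# from the experiments mentioned in this post.

def build_form_prompt(fields: list[str]) -> str:
    """Assemble a prompt asking an LLM for GOV.UK Design System form HTML."""
    field_list = "\n".join(f"- {label}" for label in fields)
    return (
        "Convert the following PDF form fields into an HTML form that uses "
        "GOV.UK Design System components (govuk-form-group, govuk-label, "
        "govuk-input), with one question per page where possible.\n\n"
        f"Fields:\n{field_list}"
    )

prompt = build_form_prompt(
    ["Full name", "Date of birth", "National Insurance number"]
)
print(prompt)
```

The resulting string would then be sent to whichever LLM you're experimenting with; in practice you'd also want a human to review every page the model produces.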

The positive impact of AI #

Next we considered the ways in which AI and automation might have a positive impact on our roles.

Many participants were hopeful about how AI could help automate aspects of their work, allowing them to be more productive, focus on more valuable tasks, or simply work less.

Participants identified numerous points in their workflow where AI and automation could be of use, and gave examples of the kinds of tasks they knew or imagined could be at least partially automated.

“I have used AI for planning my UX activities and writing my user stories”

The general idea was to use AI to accelerate the delivery of tasks or improve the quality of outputs, rather than to completely replace humans. This was usually proposed in one of two ways:

  1. an AI creates a draft, then a human checks and amends it
  2. a human creates a draft, then an AI checks and amends it

Finally, some participants reported using AI to provide more general support in their role:

“ChatGPT has been a really great source of advice, mentoring and guidance - both professionally and personally”

We also discussed the potential benefits of increased automation on public services themselves.

Services might become simpler, more efficient and better able to adapt to our individual circumstances, and operational staff might have more time to dedicate to complex cases.

However, we also recognised that our ability to realise these benefits will depend on what organisations decide to do with any AI-derived efficiency gains.

Finally, one participant pointed out that an ageing population, and repeated exposure to Covid, may mean that we will all come to rely more on automation technologies in the future.

The negative impact of AI #

We then moved on to consider the potential negative impacts of AI and automation.

Five thematic concerns emerged:

  1. the impact on the demand for our roles
  2. the impact on the quality of work that gets done
  3. the impact of a growing dependency on AI technology
  4. the impact on the environment, vulnerable people etc.
  5. the impact on users of public services

The impact on the demand for our roles

Participants were concerned that if more aspects of our roles were automated, fewer people would be required in those roles, making it harder for them to find work.

We wondered if even the perception of AI's capabilities could have an impact, regardless of what the technology can actually do, and whether skills like design could become devalued over time.

A related observation was that the people who hire UCD practitioners sometimes value the physical assets they produce more than the less tangible work of building trust, consensus and flow in a team.

“As a user researcher, there’s a lot of trust building and empathy that goes into the work that I do. I’d be worried if my job was entirely replaced by AI as I think the nuances and body language that I pick up on would be lost”

Finally, we asked whether hype around AI is drawing attention and funding away from equally important but more ‘boring’ work. What is the opportunity cost of all of this?

The impact on the quality of work that gets done

A general concern was that the outputs of generative AI can tend towards the generic and unimaginative, and that we may miss opportunities to truly innovate or to tailor services to their specific users' needs.

Another concern was that generative AI tools can miss nuance and are prone to errors and hallucinations, and that relying on them too heavily would start to affect the quality of the work we produce.

Finally, someone wondered if some practitioners would use AI to get ahead, without gaining the experience needed to spot where AI outputs are flawed, or work in a more nuanced, strategic way.

The impact of a growing dependency on AI technology

Current leading AI technologies are provided by a relatively small number of private companies, and people expressed concern that we may become too dependent on them.

“I feel emotional intelligence plays a big part in the delivery manager role — if this was removed, it could have a negative impact on individuals in a team”

The impact on the environment, vulnerable people etc.

Participants were generally aware of the environmental impact of AI technologies, which stems in part from the energy and water consumed by the data centres that house them.

Some also knew about the exploitation of cheap labour: people hired to tag items in AI training data sets, or to remove offensive, traumatic content from them.

Others observed that the work of creatives was added to AI training data sets without their consent or compensation, and that the datasets can contain biases against certain groups of people.

One person worried that the current hype around AI is creating pressure to use it inappropriately, in cases where a different technology or approach could be just as effective and less harmful or expensive.

The impact on users of public services

Finally, we discussed the potential risks of increased automation of public services themselves.

How can we influence things? #

In our final discussion we asked how we might positively influence the development and adoption of AI technologies, so that their benefits are realised and their harms avoided.

A number of strategies emerged:

Get involved: Don’t pretend this isn’t happening - get curious about AI tools. Try them out and share what you learn with others. Help develop new, inclusive design patterns for AI systems. Talk to AI experts and use the opportunity to share knowledge.

Help establish standards: Advocate for the development and adoption of standards for the ethical and transparent use of AI technologies. Propose incorporating guidance on AI into existing standards, like the Government Service Standard and the GOV.UK Design System.

Promote the ethical use of AI: Remind teams of the potential for biased and inaccurate outputs from AI. Encourage them to start with small, low-risk pilots and build trust with users. Make sure they factor in the environmental and social costs associated with these technologies.

Show the value of UCD: Demonstrate how iterative, user-centred design can turn a technology into a useful product or service. Highlight the more social aspects of design practice - the consensus building, engagement and influencing. Ensure service teams stay connected to their users via research.

Adapt your practice: Be ready to become more of a generalist, using AI to support skills gaps you may have. Establish constructive ways to work with your new data science and tech colleagues.

Conclusions #

The discussions were lively and interesting. The chance to talk to people from other organisations generated multiple viewpoints, and helped build a picture of what’s happening ‘out there’.

“I really enjoyed the session and I liked the fact that it wasn’t too rigid. It meant that there was some leeway for discussion and was very relaxed in terms of tempo.”

We learned that practitioners are already using AI at work, for advice and support and to accelerate their output. The technology is good enough to make this worthwhile in some cases.

A lot of the discussion was quite tentative, with participants acknowledging that it felt very early to be making definitive claims about the impact of these latest technologies.

Participants showed a good awareness of the potential benefits and harms of AI and automation technologies, and a pragmatic desire to find out more about them.

However, because AI is a nebulous concept, I think we were sometimes tempted to imagine it as a kind of universal salve that could fix all the issues we’ve experienced in our jobs and with public services.

Likewise, some of the benefits we attributed to AI are the same ones we attributed to ‘digitisation’ a decade ago - cheaper, more efficient, better-run services. Less bureaucracy, more meaningful work.

The final discussion left us with some practical strategies for positively influencing the application of these technologies, and has certainly inspired me to explore more AI tools and resources.

Thanks #

Huge thanks to everyone who attended and filled the workshop with fascinating conversations. Thanks to my GDS colleagues Kelly, Kuba, Monica and Lucy, who agreed to help me facilitate, and did such a good job.

Finally, thanks to the folk who kindly agreed to not come to the session, so we could make the group as diverse as possible whilst keeping the size manageable. I hope this post makes up for that a little.

