By Katherine Mathieson, CEO, British Science Association

Following yesterday's Brexit Science and Innovation summit, I've been thinking about the future of technology – and in particular, the role that public acceptance plays in it. The meeting, which explored the future of science and innovation in our uncertain, Brexit-focussed nation, took me back to the British Science Association's 2017 Huxley Summit, held at the Royal Institution in November. There, business leaders, politicians, and scientists debated how we can and should learn from the past when planning for the future.

Sometimes people react in ways that scientists don't expect or understand. Looking ahead, what will the public psyche towards science be like in the coming years? I believe that AI, above all other technologies, will dominate the headlines. But will the mood be positive, precautious, or downright petrified?

Technology has the power to change our world, but it’s not unstoppable. Genetically modified (GM) crops are a prime example of how public perception can pull the plug on science and its revolutionary potential. Despite a huge amount of research into GM crops and their impact on human health and the environment, the UK public largely remains resistant to their widespread introduction in agriculture and industry.

So, what knowledge can be gleaned from the GM story? I don't believe the public are stupid. Nor are they an unimportant part of innovation; quite the opposite. Echoing discussions at the Huxley Summit, I think that rather than jumping to the conclusion that the public are naive and will believe any scare story fed to them, it is more useful to ask, as Evan Davis did, “why are some lies more appealing than other lies, when many lies are available?”. This is true of Brexit, and it is also true of GM. We should work on the assumption that the public is intelligent and will gravitate towards good sense rather than bad. It is not enough to cite their apparent lack of expertise or “unnecessary” fear to explain away the barriers they may put up to new technologies.

A session from the 2017 Huxley Summit: The will of the people? Science and innovation in a post-truth world

Last month, the world's biggest technology show, the Consumer Electronics Show (CES), took place in Las Vegas, and AI was centre stage. The show is said to “set the tone for the year ahead”, so it seems impossible to overlook the impact that AI will have, and probably already is having, on our daily lives.

The scare stories surrounding AI are hard to ignore. Art, and film in particular, is defining AI in people's minds. The fourth season of Black Mirror launched on Netflix just before the New Year; it was thought-provoking and jarring, and scores of detailed analyses of our possible dystopian future followed. While the stories being told are probably over-cautious, I do think that Black Mirror carries an interesting message, one which relates strongly to this debate – the impact of the human side of technology.

I don't believe that technology is inherently good or bad; rather, it's the people who develop and use it that matter. I think the major lesson we can learn from the GM disaster, and from the current narrative in a lot of science fiction, is that we need full diversity and inclusion. As Sarah Drinkwater, Head of Google Campus London, argued: “the more we seek equality, the fewer problems we’ll see”.

If AI had been around decades ago, it might well have been racist, homophobic, and sexist. Yet this is something we're starting to see now, in technology that we all use every day. The new iPhone X, for example, is reportedly failing to distinguish between Chinese users. This points to a lack of diversity in the industry. As a society, we don't want technology that's “frozen in time”, exhibiting the worst of our prejudices and biases.

I think some parallels can be seen here with the GM debate and with why scientists failed to convince the public that GM would be good for us. It was a small, homogeneous group – mainly large, US-based multinational companies – who developed the science and then introduced it far too late, with too few voices involved in the process. This inevitably led to controversy, mistrust, and ultimately rejection. If we don't act soon with AI and include the public fully, then we risk rejection again. We need to communicate fully, openly, and with everybody. We also need to give people ownership and choices. Justin King, former Chief Executive of Sainsbury's, said that consumers are only ever asking three questions: “Do I get a choice? Do I have a chance to change my mind? What’s in it for me?”.

The Huxley Summit raised many fascinating points, and watching the Brexit Science and Innovation summit unfold yesterday gave me fresh reason to reflect on the topics we explored. I am passionate about the role science and innovation can play in our futures, but they must be developed in the right way. If people of all backgrounds and beliefs can challenge, debate and shape science, then this will have a positive impact on technological advances in the future.

I think that's what these summits are all about: bringing people together to explore how we humans can lead and navigate our technologically advanced world. Science is too important to be left to the scientists alone. The crucial moment for science is when it faces the wider public – when it's applied, when it becomes a technology, when people start to use it, interacting with humans in all our complexity. This is what we must not forget.

To hear more about the debate, visit the British Science Association’s YouTube channel for full videos of the discussions.