
The Impact of AI on the Future of Health & Safety

CATEGORY: Opinion

21st August 2023

At a recent webinar, somebody asked me whether OSH professionals could one day be replaced by artificial intelligence (AI). It’s not an unreasonable question. Since OpenAI’s ChatGPT chatbot was released for general use last November, it has super-charged debate about the potential impact of systems that can synthesise and present information on almost any subject in response to a query. That’s partly because ChatGPT is the first public-facing AI that isn’t highly restricted in its scope – it is light years away from the automated customer service assistants that pop up on retail websites asking if there’s anything they can help you with. But it’s also because so many people who tested it found it returned useful, knowledgeable answers.

This competence in dealing with complex requests for information suddenly made concrete what had been talked about in the abstract for more than a decade: AI could displace a lot of people in the workforce. A report by accountants Deloitte forecasts it will sweep away up to 46% of jobs in some professions.

But not for a while in the health and safety sphere, I think. Cost cutting at the expense of safety has long been one of the cardinal sins when courts judge a company’s culpability for workplace fatalities and injuries; it is listed as an “aggravating factor” in the sentencing guidelines for judges in England and Wales. If there were evidence that a serious incident had been made possible by the replacement of human oversight, in the form of OSH practitioners, with less costly automated support, prosecutors would have an open goal.

There is also the question of dependability. Large language models (LLMs) like the one ChatGPT rests on work well where there is a lot of relevant data for them to draw on. There is a reasonable volume of material with my name attached published online, but almost none of it is biographical. So, as an experiment to see how an LLM would fare with a small dataset – ChatGPT learned from a scrape of a majority of the world wide web’s pages last autumn – I asked it to write a Wikipedia entry on me. The resulting 300 words contained 35 separate pieces of information. Only four of them were correct (I am a British journalist, born in London, and I did – long ago – study English literature). Everything else was wrong, from my year of birth, through my entire CV, to where I live.

The AI didn’t express any caution about these findings or note the shallowness of the data pool it had fished in; all the errors were stated authoritatively, because it is programmed to return something convincing, whatever it is asked.

Convincing but unreliable is not going to inspire confidence among organisations looking for safety assurance. For the time being, AI is unlikely to threaten OSH professionals, whose work is increasingly taken up with helping foster strong safety cultures rather than just advising on technical issues. It may even support them; there are already AI safety systems that use site CCTV as their eyes to spot and log hazardous behaviour, such as straying outside pedestrian routes in warehouses.
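As a rough illustration of what one of those camera-watching systems does under the hood, here is a minimal sketch using OpenCV’s stock pedestrian detector. It is not any vendor’s actual product: the zone coordinates, the video file name and the print-based logging are invented for the example, and a real system would use a far more accurate detection model.

```python
# Minimal, illustrative sketch of a CCTV hazard-spotting loop; not based
# on any vendor's real system. Uses OpenCV's built-in HOG person detector.
import cv2
import numpy as np
from datetime import datetime

# Hypothetical pedestrian route, as a polygon in pixel coordinates.
PEDESTRIAN_ZONE = np.array([[100, 50], [500, 50], [500, 400], [100, 400]],
                           dtype=np.int32)

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("warehouse_cam.mp4")  # placeholder camera feed
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        feet = (float(x + w / 2), float(y + h))  # bottom-centre of the box
        # pointPolygonTest returns a negative value for points outside.
        if cv2.pointPolygonTest(PEDESTRIAN_ZONE, feet, False) < 0:
            print(f"{datetime.now().isoformat()} hazard: person outside "
                  f"pedestrian route at {feet}")
cap.release()
```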


Those professions where the Deloitte report predicted a near-majority of jobs could go were administration, accountancy and law; its forecasts for more physically demanding trades such as construction and maintenance were as low as 4%.

But these are early days; AI systems will be refined, and, crucially, will refine themselves because they are designed to keep learning.


OpenAI has already launched an API (application programming interface) for GPT-4 (the successor to the system used by ChatGPT) that allows it to talk to and manipulate other pieces of software. This capacity, often described as agentic AI, allows GPT-4 not just to offer solutions to the problems it is set, but to put those solutions into practice. At present that means that if you ask it to supervise promotion of a product, it will not just draw up a strategy (in seconds) but will design the materials and the mailshot.
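For readers curious what that looks like in code, here is a minimal sketch of the function-calling pattern OpenAI added to its API in mid-2023, using the Python library interface of that era. The draft_mailshot helper, its schema and the prompt are hypothetical stand-ins for real marketing software the model might be allowed to drive.

```python
# Minimal sketch of the "agentic" function-calling pattern (openai Python
# library, 0.x interface, as shipped in mid-2023). Assumes OPENAI_API_KEY
# is set in the environment; draft_mailshot is a hypothetical stand-in.
import json
import openai

def draft_mailshot(product: str, audience: str) -> str:
    # Placeholder: a real deployment would call actual marketing software.
    return f"Drafted mailshot promoting {product} to {audience}."

functions = [{
    "name": "draft_mailshot",
    "description": "Draft a promotional mailshot for a product.",
    "parameters": {
        "type": "object",
        "properties": {
            "product": {"type": "string"},
            "audience": {"type": "string"},
        },
        "required": ["product", "audience"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-4-0613",  # one of the 2023 function-calling models
    messages=[{"role": "user",
               "content": "Supervise promotion of our new safety product."}],
    functions=functions,
    function_call="auto",
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model chose to act rather than just advise: run the tool it named.
    args = json.loads(message["function_call"]["arguments"])
    print(draft_mailshot(**args))
else:
    print(message["content"])
```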


It’s not too far-fetched to think of a time when an AI programme – and there are many in development – could manipulate a robot-mounted camera to investigate a particular confined space, then write a risk assessment, permit criteria and pre-task training tailored to that space. In specialist applications like these, AI could replace consultants once it could be proved that its assessments were more thorough than a human’s. (Then you would have to justify to an insurer or an enforcing authority why you relied on a human rather than the superior AI.)


To get to that point of near omniscience about confined spaces, the AI would have to be fed on a very rich diet of all the potential hazard combinations in different configurations of tanks or bulkheads. We are a way off that.


So the short answer to that question of whether AI will replace health and safety professionals is: possibly, eventually, in some cases, but it won’t be in the next 20 years. And when it does, it might be because automation has already replaced many of the people the OSH professionals are employed to protect.


Guest blog written by writer, editor and speaker, Louis Wustemann.

If you’d like to know how Houston can help you, get in touch with our team.