Pros and cons of ChatGPT technology for the safety profession
I recently began experimenting with OpenAI’s chatbot, ChatGPT. If you haven’t experienced this powerful tool yet, you almost certainly will soon. ChatGPT is an experimental chatbot that uses technology referred to as a large language model to produce human-like answers to nuanced questions. I asked ChatGPT to explain what a large language model is in simple terms, and here’s what it told me: “A large language model is a deep learning model that uses text data to generate, recognize and classify text.” When asked how the chatbot itself works, it replied: “ChatGPT is a chatbot technology that uses natural language processing and machine learning to generate text conversations. It works by taking the user’s input and using algorithms to generate a response that is similar to how a human would respond. The chatbot learns from the conversation history and is able to generate more natural responses over time.”
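For readers who want to experiment beyond the chat window, the same kind of model can also be queried programmatically. Below is a minimal sketch using OpenAI’s Python client; the model name is an illustrative assumption, and you would need your own API key set in the OPENAI_API_KEY environment variable.

```python
# A minimal sketch of asking a chat model a question through
# OpenAI's Python client (pip install openai). The model name is
# an assumption; any available chat model could be substituted.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user",
         "content": "Explain what a large language model is in simple terms."}
    ],
)

# The generated reply is in the first choice's message content.
print(response.choices[0].message.content)
```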
After using ChatGPT for a few minutes, my first thought was that this technology would wreak havoc on educators. You don’t have to be an evil genius to see the potential goldmine this technology offers plagiarists. Memories of struggling to write essays for university courses entered my mind, and I couldn’t help but think what a wonderful tool this would have been… if only to get me started.
Using chatbots to write safety documents
I couldn't help but consider the implications this technology might have for the safety profession. After all, ours is a profession replete with documentation: procedures, policies, and reams of other documents filled with paragraph upon paragraph of written word. For better or worse, the safety profession is incentivized to find the most efficient ways to write safety documents so the company might win the bid, appease the client, or placate the regulator. In practice, this has meant that many safety documents are reformatted, revamped, and recycled from other documents. Entire safety programs are copied and pasted by consultants and others, with the find-and-replace function used to scrub the name of the previous company from the document and implant that of the next. Could technologies like ChatGPT change this?
I asked ChatGPT to write a health and safety policy statement for a construction company and here is what it produced:
“At [Company Name], the health and safety of our employees, contractors, and the general public is our top priority. We are committed to providing a safe and healthy work environment for all individuals involved in our projects.
To achieve this, we will:
- Comply with all applicable health and safety laws, regulations, and industry standards.
- Establish and maintain effective health and safety programs and procedures, including training, hazard identification and assessment, and incident reporting and investigation.
- Encourage open communication and cooperation between management, employees, and contractors to identify and address health and safety concerns.
- Continuously evaluate and improve our health and safety performance.
- Provide necessary resources and support to ensure that health and safety is integrated into all aspects of our work.
We expect all employees, contractors, and visitors to our worksites to take responsibility for their own health and safety and the health and safety of those around them. We will hold ourselves and others accountable for upholding these standards.”
Not bad, right? I asked it to produce a procedure explaining how to safely use a ladder, and it gave me seven simple instructions to follow. The instructions were clear and logically ordered, starting with inspecting the ladder and ending with guidance on lowering and closing it before storing it away.
It’s far from perfect
Much has already been written about problems with the information that ChatGPT produces. Other users have shown the chatbot can produce answers that are biased in racist and sexist ways. And it didn’t take long to see that someone relying on the technology for comprehensive safety expertise could be misled. For example, when I asked ChatGPT to produce a safe work procedure for drilling through concrete with a hammer drill, it provided this instruction: “Gather the necessary safety equipment, including safety glasses, ear protection, and a dust mask.” This instruction may appear sound, but safety professionals will notice that it lacks important specificity, particularly with respect to respiratory protection: drilling through concrete can generate respirable crystalline silica, and a generic dust mask may not provide adequate protection against it.
Should we use it to write safety documentation?
In many ways, this is the wrong question to ask. There is no doubt that this technology will be used in countless professions for innumerable purposes. Safety will be no different. As producers, consumers, and reviewers of safety documentation, we should ask ourselves: how might we use this technology responsibly?
In a profession that sees no shortage of demand for written documents, these chatbots may have the potential to free up much of a safety practitioner’s time, allowing them to spend more of it in the field getting to know the work and the people doing it. The procedures I asked ChatGPT to produce were flawed, but they were not entirely useless. They could provide a good starting point that someone with subject matter expertise could review and revise to make the document more accurate and relevant to their specific working conditions. At least for now, the subject matter expertise of a human who is educated, trained, and experienced in health and safety is still required to make sure we aren’t producing documentation that could put employees’ health and safety at risk.