
Intelligence-enabled workplace health and safety

Safe Work Month and Mental Health Week 2020

Speaker: Maureen Hassall, Associate Professor and Director and co-founder of UQ R!SK (at the University of Queensland).

For those at the forefront of work health and safety, Maureen explores some of the threats, opportunities and emergent risks associated with intelligence-enabled WHS, and how advancements in technology may be optimised to improve WHS.

Intelligence-enabled workplace health and safety

Hello everyone, and welcome to Workplace Health and Safety Queensland's second last presentation during Safe Work Month 2020. This one's all about intelligence-enabled workplace health and safety with Maureen Hassall. I'm Chris Bumblus, media manager for the Office of Industrial Relations, your host for today. Firstly, I'd like to acknowledge the traditional custodians of the land on which we meet today, and pay my respects to their elders past, present and emerging. I'd like to extend that respect to Aboriginal and Torres Strait Islander peoples watching today. We are in the final days of Safe Work Month 2020, putting the spotlight on work health and safety. Thanks to COVID-19, Safe Work Month this year was a little different. We are still delivering vital safety and return to work information, but instead of live events and functions, it's all coming to you digitally. Today's session will be delivered by Maureen Hassall, Associate Professor and Director of UQ R!SK at the University of Queensland. Maureen's research and consulting work focuses on working collaboratively with industry to develop and apply innovative, practical, and effective user-centered solutions that address the range of potentially catastrophic risks faced by industry. Maureen also provides risk management and human factors advice, education and training for undergraduates, masters and PhD students, and industry practitioners and leaders. She holds a bachelor's degree in engineering and psychology, an MBA, and a PhD in cognitive systems engineering, and is a chartered and registered professional engineer and a certified professional ergonomist. Very shortly, Maureen will look at how we are moving into the fourth industrial revolution, and what new opportunities and threats will emerge that can have a positive or negative impact on workers' health and safety, as well as workplace health and safety management. She'll delve into some of the research exploring how to use this tech to effectively leverage the benefits without introducing more threats. The good news is at the end of her presentation there's a Q&A session, and Maureen will respond to your questions. To ask a question or offer a comment, please use the chat function at any time during this session. We'll get to as many questions and comments as time permits. Now let's welcome in Maureen Hassall. Please join us.

Thank you, Chris. And thank you for this opportunity to talk about intelligence-enabled workplace health and safety. My talk's going to discuss how step change improvements in workplace health and safety will not come from buying the latest, greatest technology, but from adopting evidence-based, user-centered, systemic solutions that address well-defined problems and opportunities. I saw the quote that's at the bottom of the screen just recently, and I think there's some truth in the fact that we get drawn into buying the latest, greatest piece of tech. But in reality we need to ask ourselves, is it gonna solve our problem, or is it gonna cause us more issues? And I'll talk through some of the tech that's coming with industry 4.0, and the potential upside and downside of that tech. But before I do, I want to spend a couple of slides in the first part of this presentation just talking about what is workplace health and safety. So what is a workplace? As I mentioned, we need to take a systemic perspective. So when I think of workplaces, I think in systems terms, and on the slide is a diagram of what a system is. It has boundaries, it has inputs, and it has outputs. Internally, it has components, and those components comprise people, plant and equipment, and procedures or processes. And then external to the system are disturbances and changes that come from technology, from market and financial pressures, from international, national, and local regulations and policies, and from public awareness and the perceptions of stakeholders. So when we're looking at whether we should introduce technology into these systems, and whether these technologies will help or hinder workplace health and safety, we need to think, in my opinion, of the whole-system impacts, and not just a localized impact. Next, I want to talk about what is health and safety. For me, health and safety comprises four things. On the health side, it comprises occupational health and worker wellbeing. On the safety side, it comprises personal safety and process safety. And I distinguish them because I think occupational health and personal safety involve different measures and different competencies, if we're going to implement and assess them, than those on the worker wellbeing and process safety side. They're measured differently, and they should be assessed differently. Some of the definitions we traditionally use for safety and health have been freedom from harm, danger, injury, ill health, damage, or loss. But when we're looking at worker wellbeing and process safety, we need to extend those definitions beyond freedom from harm to: what are our vulnerabilities, and how can we optimize our resilience to those vulnerabilities? So when we think about measures, occupational health and personal safety are usually measured with events: have we had harm-related potential events, or actual events? These events usually happen over a very short period of time, they usually happen to individuals, and they're often not bespoke to the workplace. For example, falling from heights can happen, and does happen, in all sorts of workplaces. But when it comes to worker wellbeing and process safety, these events manifest over a longer period of time. They usually occur because there are either flaws in designs, or there's drift in what happens in the system. These events are often rarer.
So they're difficult to measure from history, because the past is no prediction of the future, and we need to think about them differently. And I think this is important for us to understand when we're looking at industry 4.0 technology: what is going to help us improve safety and health outcomes, and what is actually gonna cause more problems than it's gonna solve? So what is our challenge? What is our problem? On the slide we see some data that's available in the area of health and safety. The left-hand graphs show workers' compensation data, where we have lost time claims information expressed as number of claims, as lost time, and as cost per claim. On the right-hand side of the slide we have Australian worker fatalities over time, and we also have an analysis of major losses for onshore gas, as an example of a process or system safety event. When we look at these statistics, particularly over the last five years, we see there's been no discernible improvement. And we've put in a lot of effort, and there's been a lot of work done, to try and deliver step change improvements in safety. But we're not seeing it when we look at our statistics. On the fatality slide the orange is the Australian data, and the purple is the UK data. You can see there's a bit of a gap between how we're going and how they're going in the UK; these are total numbers of fatalities. Which suggests that we could do better. And I think we can do better, not only in personal safety, but also in process and system safety. But before we grab a solution we need to understand the problem in a little more detail. There have been a number of studies that have analyzed these incidents, or analyzed individual incidents, and a couple are shown on the slide. What we're finding when we analyze these events is that the majority of them are known events. There are very few novel or surprise events happening. When we look at what is causing those events, this is an example from a study done by Jarvis & Goddard of the oil and gas industry. They found that some of the losses were due to mechanical integrity failure, and a lot of those failures were caused by inspection issues. The ones that weren't due to mechanical integrity failures were due to weaknesses in hazard identification, or weaknesses in controls. So in this study there were three main causes: an issue with inspections, so we didn't really know what the integrity was; an issue with hazard identification; and an issue with weaknesses in our controls. Similarly, the Dreamworld coroner's report found that there were issues with the risk assessment, and a failure to implement what were basic controls that would have prevented the fatality. And more recently in Queensland, there was a report done in the mining industry, and it looked at the fatalities there. The findings from that report highlighted that controls meant to prevent harm were ineffective, unenforced, or absent, with no adequate supervision to identify that there were issues with the controls and to remedy those shortfalls. So when I look at a lot of the events that are occurring that underpin the statistics I showed you on the previous slide, and I look at the analysis performed by others, I think that to stop harming people we need to find more intelligent ways to identify hazards, or to help people identify hazards, and to select, monitor and maintain controls.
And in this space, I think industry 4.0 tech can help us. It can help the frontline people identify hazards and select, monitor and manage controls. It can also help management support those frontline people in these endeavors. So the analogy that I use is the road directory analogy. We used to use books of street maps; an example is shown there of the old Brisbane street directory. Now these were static in time and based on historical data, so they were never up to date. And we've gone from them to the now more compact, more dynamic, real-time navigation systems that we use. And some of these systems now not only give us the current status of traffic and where we should go, but are also predictive. So they can advise us of what to look out for, and they can divert us around congestion before we actually get to the congestion. So if we can do that for street directories, what can we do with industry 4.0 technology in the area of health and safety? We have voluminous, static, paper-based systems in our risk registers, our hazard identification processes, our incident reporting processes, and our emergency response and business continuity plans. And they're often separate, as shown on the slides. They might be in spreadsheets, but they're still predominantly paper-based. They're not very intelligent. They're not very integrated. They're not very predictive. And they're often out of date, and done at points in time, just like the old street directories. I think if we could have industry 4.0 tech help us intelligently integrate and digitize our systems, we could deliver step change improvements in workplace health and safety. Not only for occupational health and personal safety, but also for worker wellbeing, and process and system safety. Before I talk about how, I just want to outline what industry 4.0 is. The first industrial revolution came to us with the advent of steam energy. The second came with the advent of electricity. The third was with the digital computer. And the fourth industrial revolution, which is often summarized as industry 4.0, is the ecosystem of cyber-physical systems, advanced data analytics, automation and autonomy, and the industrial internet of things that underpins the first three. Cyber-physical systems are systems composed of a digital part and a physical part; examples include digital twins, and the virtual reality and augmented reality technologies. Automation and autonomy talk to technologies that don't necessarily rely on human input in real time. Autonomy usually means systems or subsystems that work on their own, by themselves; automation is where we've got remote control, or human supervision, of the technology. Advanced data analytics encompasses both big data and predictive analytics, as well as the family of digital systems known as artificial intelligence, and I'll talk a little bit more about that in a minute. And then there's the industrial internet of things. So I'm gonna go through each of these four main areas of industry 4.0, how I see they could help workplace health and safety, and what some of the potential upsides and downsides are. Starting with digital twins. A digital twin is a digital model of the real world that can be fed by hypothetical, historical, or real-time data, or a combination. Examples of how they're used include system design: you can build a digital twin before you build a physical system in the real world, and test and assess it. They can be used for control system design.
Alarm rationalization and prioritization is an example of where they're used: optimizing the control system devices, optimizing the number and locations of sensors, and optimizing the alarm systems that you put in, to prevent the issues around alarm flooding and alarm prioritization. They're often used for training; operator training is a classic example of the use of digital twins. They can be used for risk assessments, and to troubleshoot. They can be used to assess proposed changes, and to manage those changes. And they can be used in scenario analysis to forecast potential future outcomes of current performance or proposed changes. The upside of these twins is that they can provide objective consequence and risk analysis. A lot of risk analysis and hazard assessment is done subjectively now; with digital twins, we can make it more objective. We can use these twins to project and foresee the impacts of changes. With digital twins, we can speed them up so they go many times real-time speed, so you can run many days in an hour to see how changes are gonna propagate through a system, and whether they're gonna cause issues or not. You can also use them to optimize those systems. The downside is the maintenance of these twins. These twins are built on representations of the plant, and there's a saying in the modeling industry that no model's accurate, but some models are useful. To be useful they need to be maintained. Now in reality, in a lot of our industrial systems, we struggle to maintain our technical drawings, our P&IDs, our procedures, and our documentation, to keep them up to date. There are, in some ways, extra or specialized resources required to maintain these twins. If we don't maintain them, then they'll drift in their ability to accurately represent our systems. And once we've got drift, then we've got issues of accuracy and potential errors coming into how they're used and how they're interpreted. Along the same lines, you often need specialist competencies, and usually it's a group of competencies, to use and interpret these twins. You need domain experts and you need computer experts, and very rarely does that expertise sit within one person. There are issues with accuracy and completeness: even if you invest a lot of resources and your models are up to date, they're still not complete and accurate representations of your systems. There are still areas where they won't correctly and validly represent your system, and that needs to be understood when interpreting the results and when using these twins. And then there are the costs: the more complex and the more accurate your twin is, the more expensive it will be. We're doing research around digital twins, and research is happening all around the world, as well as use cases. The use of digital twins has actually been accelerated with COVID, because now people are using them more to do hazard assessments. All those people that had to ramp up processing plants making hand sanitizer, which are hazardous facilities, used tech like digital twins to help them do so safely. Examples of the research that we're doing: we're exploring how to leverage the full value from digital twins to improve system safety or process safety, which includes developing ways for people to assess the affordances and constraints of such models.
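
To make the faster-than-real-time scenario analysis idea concrete, here is a minimal Python sketch of a digital twin run many days ahead in moments; the simple tank model, the numbers and all names are hypothetical illustrations, not from the presentation.

# Minimal illustrative digital twin: a tank with known inflow and outflow.
# Stepping the model far faster than real time ("many days in an hour"),
# we can check whether a proposed change would breach a high-level alarm
# before making the change in the real plant. All values are hypothetical.

def simulate_tank(inflow_m3_per_h, outflow_m3_per_h, level_m3=50.0,
                  high_alarm_m3=95.0, horizon_h=24 * 7, step_h=0.25):
    """Step the twin through a simulated week; return the first simulated
    hour at which the high-level alarm would trip, or None if it never does."""
    t = 0.0
    while t < horizon_h:
        level_m3 = max(0.0, level_m3 + (inflow_m3_per_h - outflow_m3_per_h) * step_h)
        if level_m3 >= high_alarm_m3:
            return t
        t += step_h
    return None

# Assess proposed inflow changes against the twin rather than the real system.
for inflow in (8.0, 9.0, 10.0):
    trip = simulate_tank(inflow_m3_per_h=inflow, outflow_m3_per_h=8.5)
    print(f"inflow {inflow} m3/h:",
          "no alarm within 7 days" if trip is None else f"alarm at {trip:.1f} h")

A real twin would of course be a far richer model than this, which is exactly why the maintenance and competency issues above matter.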
So if you're gonna buy or adopt digital twin technology, we're developing guidelines for how you can assess what that technology should be used for and what its limitations are. And we're identifying requirements for exemplar implementations of enterprise-wide applications of digital twins, right across the lifecycle: from conception through to construction, commissioning, operations, maintenance and decommissioning. What are some of those exemplar implementations, versus the ones that haven't panned out so well? We're doing some research to identify the key factors for successful selection, implementation and use of digital twins. The next industry 4.0 technology I'd like to talk to is the virtual and augmented reality technologies. These are where we provide users with interfaces through which we can present them with information from the real world, or computer-generated information, or a combination. The technologies (VR is shown at the top, and AR at the bottom) involve a combination of real-world information and computer-generated information. They're often used for training, and there's a lot of VR applications around training. They're used for expert advisory systems, and the AR shown with the mechanic at the bottom is supposed to be conveying that the mechanic is talking to an expert as they're trying to fix the engine of that car. And that expert could be anywhere in the world. Again, this is a technology whose use has been accelerated through COVID, especially around risk and safety type audits. It can be used for hazard identification; the hazard identification can either be built into these systems through AI, or supplemented with advice from experts who are also seeing what the user is seeing. They can be used for control monitoring, inspections and audits, and they can also be used for incident analysis, where it's not only the person on the ground looking at the incident, but they're also connected to other people who can ask questions, suggest things to look for, or suggest evidence to collect, because they're connected via these systems. So the upside of this technology is the real-time access, and the ability to share and interact with real-time information from the infield person to others anywhere around the world, as well as from infield people to other infield people. It's a chance to bring the expert information to the person on the ground when they need it, and, if it's designed well, in the format that they need it. The downsides that have come out of this are around motion sickness and image stability issues. If you've got a person wearing a headset like the one on the bottom, then there's usually a lot of movement in the vision that's fed back to the expert, and that can create issues. And if you've been in VR, you'll know you sort of get a weird sense of what reality is, and that can lead to issues of motion sickness. It can also lead to slip, trip and fall issues, or distraction issues and impairment risk. There are wearability issues with both types of technologies, and there are acceptance issues with both types of technologies. And by acceptance I mean that when you're wearing these technologies you can also be tracked. So your movements can be tracked, and that might be your whole body, but your eye movements can also be tracked.
Some of them can track the physiological activity that's happening in your body, which poses privacy issues and other performance issues, which all need to, and should, be dealt with before introducing these technologies. We've seen it in the IVMS world, with the introduction of in-vehicle monitoring. The difference between the good implementations that have delivered successful safety outcomes, and the ones that have been rejected and caused problems with acceptance, is how the end user is incorporated in the implementation process, how their needs are incorporated in it, and how the technology helps them rather than just feeding data to management. Because the technology needs to help the people at the risk face. And again with this technology there are issues with cost and maintenance. So we're collecting some information and research about how the use of these technologies has been advanced, or accelerated, through COVID-19. One of the things users are finding in trying these technologies is that they help deal with the fact that a lot of us are now working remotely: our experts can't fly around the world anymore, but they still need to get access to infield observations and information. And some other work that we're doing is on how we can superimpose on, or accompany, these technologies with AI, artificial intelligence, to improve and help the decision maker, or to tailor the information that the decision maker or the wearer gets to what their needs are. The next technology is automation and autonomy. Semi-automated to fully autonomous plant and equipment is being used more and more in industry. And it's being used to perform work that might be hazardous, or work that might be repetitive, so it's better done by a machine than a person. Examples include autonomous vehicles, or automated plant, drones and robots. One use case is assembly lines, as shown on the left-hand side. And I think that case study's really interesting, because one of the issues with autonomy is who is accountable when things go wrong. We had a person killed in the VW factory in Germany, and some of the reports said that the robot killed the person, and some of the reports said the person died because of human error. So we've got this split issue at the moment, and the Uber crash is one of the areas where it's playing out. At the moment, in the States, the driver of the Uber vehicle that was in autonomous mode when it killed a pedestrian has just recently been charged. That hasn't been heard in the courts yet, but we're keeping an eye on what happens there, because there are some important learnings, I think, for anyone considering introducing autonomy: how best to introduce it without creating new opportunities to hurt people. So the upside is that it removes people from hazardous work areas and from doing harmful work. The downside is really that it takes time and effort to understand how we're changing that human-system interaction, and whether we're creating more hazardous interactions. And if we still require the human to be the supervisory controller, the person to step in and save us when things go wrong, how are we maintaining their functionality and capabilities so they can do that when it's required? And we're doing some work around that space. Currently we're exploring which are the best risk assessment approaches from the end-user perspective, so we're involving industry.
Which are the best risk assessment approaches for them to identify those human-autonomous system interaction hazards and risks? The autonomy vendors, and any of these tech builders, will sell you a system, but they won't necessarily tell you what the risks will be in your context, and it's up to the end user organization to do that work. And our risk assessment processes haven't been fully tested, particularly from an end user perspective, as to what works best for them in identifying the hazards that matter, the risks that matter, and the best way to control and manage them. The next category is data analytics and AI. These are terms that are used quite variably around the world, and even within a country or across different disciplines. I've put up an example of what it entails, from data science to artificial intelligence, to machine learning, reinforcement learning and deep learning. It's a whole field of processes by which data is transformed into information that provides knowledge, and then insights: insightful knowledge that will help decision makers make the correct decisions at the right times. The use cases for data analytics and AI are meta-analysis to identify hazards and component failures, as well as deviations. So these systems are usually set up either to do meta-analysis, or to look for patterns and similarities, or to look for deviations in plant, procedure, people, and overall system performance. The upside is that, if correctly used, they can develop real-time evidence about how a system's performing, what the vulnerabilities in that system are, and what's working well and what's not. And they can provide those insights to inform the decision makers, if the system is designed correctly. The downsides are around the data: garbage in, garbage out is a common term. If the data is not the correct data, and if the data is not cleaned into a good, concise data set that will inform, not confound, the decision maker, then garbage is what the output will be. So when we're doing this work we usually put a lot of effort into data cleaning, particularly when we're looking at historical data, so it will deliver insights. And even when you've got a good data set, there are still biases that can come in with the algorithms, because you train the algorithms on data; even if it's clean, they will still be biased by how you're training them, and by the data set that you're training them on. There are limitations to the algorithms, and to the data that comes out, and uncertainties. And there are potentially privacy issues, depending on what data you're using. Some of the research we're doing in this space: we're applying machine learning and deep learning applications for operator vigilance and lone worker safety support. So how can we use these types of systems to connect into the devices and the equipment that lone workers already have, to provide them with meaningful information about their own vigilance and health and safety, and the potential hazards that they might come across? We're also doing meta-analysis on process and incident information. And we're just starting a project using machine learning and camera vision to identify when people are coming near zones where they could be impacted by, say, lifted loads or moving vehicles, so we can make the operator of the load or the vehicle aware that they've got people encroaching on their line-of-fire zones.
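
As a rough illustration of that last use case, here is a minimal Python sketch of the check that might sit downstream of a camera-vision detector; the detector itself is not shown, and the positions, zone sizes and function names are hypothetical.

import math

# Hypothetical downstream check for the line-of-fire use case: a camera-vision
# model (not shown) yields ground positions of detected people in metres, and
# we warn the operator when anyone encroaches on the zone around a lifted load.

def encroaching(people_xy, load_xy, zone_radius_m=10.0, warn_margin_m=5.0):
    """Return (position, distance) pairs for people inside the line-of-fire
    zone or its surrounding warning margin."""
    hits = []
    for person in people_xy:
        d = math.dist(person, load_xy)
        if d <= zone_radius_m + warn_margin_m:
            hits.append((person, round(d, 1)))
    return hits

detections = [(3.0, 4.0), (30.0, 2.0), (9.0, 9.0)]   # from the vision model
for person, dist in encroaching(detections, load_xy=(0.0, 0.0)):
    print(f"ALERT: person at {person} is {dist} m from the lifted load")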
The last category of the industry 4.0 tech is the industrial internet of things. It's using the internet to connect sensors, instruments, devices, machines, equipment, and plant together via digital networks. It allows for the integration of data and data analysis, and it supports the use of remote control, supervision and communications. The downside is the integrity of the internet: the stability and reliability of the internet, and cybersecurity issues. There are also potential downsides with managing volumes of data, and with knowing where to put the sensors, because sensor placement can have a massive impact on the data that's collected, and on the accuracy and the ability to correctly assess the safety or the vulnerability of a system. There is also, as you get more interconnected with your data and your plant and your sensors, an increased consequence if something goes wrong. The more connected you are, the higher the consequences when things go wrong, because they're interconnected and the ripple effect is bigger. So what we're doing in the area of research is developing what we call industry 4.0 ecosystem models. We're trying to model what this ecosystem of elements should look like in exemplary implementations underpinned by the industrial internet of things, and what that ecosystem should look like if you're trying to improve risk assessment and workplace health and safety outcomes. One of the things that we've found so far is that the end user is super critical to successful use of technology, and industry 4.0 is exactly the same in this. The inherent opportunities and threats only come about by understanding not just the tech (we're usually pretty good at that), but more importantly how the people are gonna use the tech, and how the tech's going to work for the people. So will the tech be accepted and used? How, when, and why will it be used? Often there's intended use of tech, and there's also unintended use of tech that comes about, and we need to understand both. Will the users find it useful and usable? Plenty of tech has sat on the shelf unused after the initial interest has waned. And will the outputs be used correctly, and used to inform, or to mislead, the decision makers? There are some examples that I've put on the slide to help convey some of the things to consider here. The first one is that the actual use of a system is driven by people's behavior and their intentions, but it's also driven by their perceptions of how useful it is and how easy it is to use, and you can surround all of that with the organizational goals and culture. The next one is whether the tech is actually useful, and this goes to the false positives and false negatives that you see in the psychology literature: in the real world something could be true, and the tech could tell you it's either true or false; and something could be false, and the tech could tell you it's true or false. So you can have some correct answers about the real world coming from the tech, but you can also have some incorrect answers. And the two categories of incorrect answers, the false alarm and the missed vulnerability, have different consequences. So unless you're thinking about all four of those quadrants when you're doing risk assessments around the introduction of tech, you might miss something. And the last one is taking the end-user perspective. Firstly, we need to understand what their needs are.
What are their issues? What are their wants? And then we select a tech, or we refine, or we develop, and we do all those things to meet their needs. We don't select the tech and then try to change the user to fit the tech. We understand what their needs are, particularly with health and safety, and then we find the tech that will help them, not hinder them. We observe, and we spend a lot of our research observing and talking to end users. And we get a very different perspective from them than you would read in the literature. So asking them is critical in this space to get the outcomes that we want. It's got to work for them, because at the end of the day, they're the ones right at that risk face. They're the ones that need to identify the hazards and control them. You can only do that by using an empirical base: infield, user-centered design approaches. We need to be speaking to the users and understanding their needs. And that's not to say that the tech providers and the other stakeholders, like management, are not included, but we can't exclude the end users. That's where the priorities should be, first and foremost. So, my concluding remarks. We're moving into the fourth industrial revolution, and this has been accelerated, I think, by COVID. It's characterized by cyber-physical systems, automation, advanced data analytics, and the industrial internet of things. This revolution will bring both opportunities to significantly improve workers' health and safety, as well as threats, and it might bring new ones. The best outcomes will be achieved by those who use systems thinking, so they consider the whole-of-system impact, and combine that with user-centered, evidence-based risk approaches to understand how best to capture those upside advantages while eliminating or mitigating the downsides of that tech. And I put the two cartoons on the slide because I think it's important to recognize that we can't keep doing the same thing. We can't keep hurting people the way we're hurting them, we can't keep having the fatalities that we're having. We need to do something different. Industry 4.0 might offer some opportunities there, and we should look at it. And handing back to you now, Chris.
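
A small worked sketch of the four quadrants Maureen describes, in signal-detection terms; the inspection counts, hit rate and false alarm rate below are hypothetical numbers chosen purely for illustration.

# Four quadrants for a hypothetical hazard-detection tech: out of 1000
# inspections, 50 involve a real hazard; the tech flags 90% of real hazards
# but also (wrongly) flags 10% of safe cases.

real_hazards, safe_cases = 50, 950
hit_rate, false_alarm_rate = 0.90, 0.10

hits            = real_hazards * hit_rate              # hazard present, flagged
missed          = real_hazards * (1 - hit_rate)        # hazard present, missed
false_alarms    = safe_cases * false_alarm_rate        # safe, but flagged
correct_rejects = safe_cases * (1 - false_alarm_rate)  # safe, not flagged

print(f"hits {hits:.0f} | missed vulnerabilities {missed:.0f} | "
      f"false alarms {false_alarms:.0f} | correct rejections {correct_rejects:.0f}")

Note the asymmetry this produces (95 false alarms versus 5 missed vulnerabilities), and that the two kinds of error carry very different consequences: false alarms erode trust in the tech, while missed vulnerabilities leave people exposed.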

Thanks, Maureen. That brings Maureen's presentation to a conclusion. Of course, we will go into a question and answer session now. So if you haven't already done so, and you really want to get a question through to us, don't forget: use the chat box, your name, your question, and we'll get to as many as we can. And Maureen, you've got a wealth of information stored up there in that professor head of yours, and we need to delve into that. Let me just play devil's advocate first, because this will be a question you'll get asked a lot, and there's certainly been a lot of media and discussion about this. If we're entering an age of automation and autonomy, should we be fearful of this? I know younger people approach all the gizmos in a much more positive way than us older folk. But aren't they essentially replacing our jobs? Are we not at risk of, you know, being replaced by machines?

It's an interesting question. And some of our tasks will be replaced by machines for sure. But that frees us up to do other tasks. Humans are really good at thinking. Machines are not there yet. We are the only entities that can solve problems in real time, and adjust our functioning to it. Machines are not there yet. So a lot of what we see in industry, particularly around safety, requires us to do that. Requires us to just adjust what we observe, adjust how we assess it, so we get the correct answer for that context.

So we're moving into an integrated period, perhaps where there's a mix of both?

Yes, I think there will always be a mix of both. There'll always be a role for both.

Yeah.

Absolutely.

So the age of "The Terminator," I don't have to worry about Arnold Schwarzenegger walking into my office and saying, Chris you're out.

No, not today.

Let's move on to your questions. And first up, it's Simon, and thanks everyone for joining us this afternoon. Simon asks, "Can you explain the difference between personal safety and process safety?"

Okay, yes. So there is an overlap. But personal safety tends to be when an individual's safety is at risk because of what happens in a short period of time, and what happens around the tasks that they are doing, or the activities that they're performing. So it really is a loss of control of what that person's doing. It usually happens over a short period of time, and the consequences are usually very closely associated with what that person's doing. System safety tends to happen over a longer period of time. It happens because we lose control of a system, or there is a loss of containment from that system. You often get a drift into failure, or a flaw in design that's revealed over time. And when control of the system itself is lost, the consequences are usually catastrophic. So they're usually far reaching, they're usually long lasting, and they have ramifications not only for safety, but for the business entity. Examples of such events are the Dreamworld event, the Hazelwood mine fire in Victoria, the Deepwater Horizon event in America, and the Fukushima event. They're usually due to, as I said, a loss of control of the system, and they're bespoke to the system. Personal safety events can happen regardless, usually regardless, of what industry you're in.

Let's move on to another question. This one comes from Elizabeth, and thank you, Elizabeth. She asks, "Will AI have systems that can't be bypassed for safety, e.g. people removing safety guards on machines, which results in crushes, amputated limbs, and even death?" We know that this machine, or this guard, is for our own safety, yet we'll bypass it, or we'll modify it, or we'll do something. In the end, just because it hasn't happened doesn't mean the risk isn't there, and ultimately it will happen; Murphy's Law.

Yes. So you can do that without AI now. Particularly in normal operations, you can do it without AI. Where it gets tricky is when you've got those live work activities, typically around maintenance and commissioning. And we certainly can use AI to inform operators of when they're getting into hazardous situations. You know, there's amazing stuff now, with the gloves, and with wearable cameras that can be superimposed with AI, to help people understand when they're getting into those spots. Because people are not good, when they're troubleshooting and when they're doing maintenance, at stepping back. We know that we get focused on solving the problem, working the issue. So that's when AI can help us go, hang on a second, just stop for a second. In words; we know that words are better than beeps.

Just along those lines, looking at AI development, tech, all that sort of stuff. We've seen developments for doctors seeing patients over the internet, and things like that. We've seen them for assessments of various kinds. Can you see a time where you can have an expert panel, or another set of eyes, back in the office, and the guy out on the road who's trying to make an assessment of whatever process or job it is can go back to the office and say, "I'm faced with this, what do you guys recommend?"

Well, it's actually happening with the guy in the field.

[Chris] Yeah.

So wearing some of the tech, or with the cameras, with the mobile devices, they are filming and talking. They are looking at situations in the field, or they're looking at a piece of equipment, for example, in the field. And they're beaming the vision, and having the conversation, back to the manufacturer, which could be anywhere in the world, or back to, say, the expert engineers who are sitting in a corporate office somewhere, because they can't fly at the moment. That's happening now.

Okay. That's good. Along these lines, Pete then says, "Can small businesses develop this type of technology on a budget?" You know, budget is a big word these days. We're all scrimping and saving, and trying to keep an eye on budgets. But Pete wants to know, can we move into this territory without having to spend an arm and a leg?

So it depends on where the tech is up to, and the scope of the tech. Around the provision of vision back to corporate offices, that's definitely doable. Around digital twins, it probably takes a bit of a budget at the moment, though some of the companies are working hard to make that available to people. It's a bit of a mixed bag, yeah.

Just talking about data then; you spoke about clean data, and you spoke about data still being influenced. If we're going to be fair dinkum about this, do we really need to feed the data in, and feed as much of it in as we can? Is it about quantity, or is it about quality of data?

Yeah, you have to apply the critical thinking up front about what is the data that you need. Historically we haven't done that; more is better, you know. We know that, that's the history of safety, but it's not the case. More is flooding us, diluting us. I think we can tip into too much. If you put a hundred sensors out instead of the one right sensor in the right spot, you get a very different outcome safety-wise. So humans need to be very critical about what is the right data, what are the right sensors to have out there, what is the right data to collect, and what is the right frequency to collect it at?

So length of time to get really solid data, rather than fleeting data?

[Maureen] Yeah.

I don't know what the terminology is.

If you look at your sampling frequency, one-second data is a whole lot more data than one-minute data; which is best? Only humans who understand the context can make that decision. There's no one right rule that applies everywhere.
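
For a sense of the scale behind that point, a back-of-envelope calculation in Python (a single hypothetical sensor logged for a year):

seconds_per_year = 365 * 24 * 3600
print(seconds_per_year)        # 31,536,000 samples/year at one-second data
print(seconds_per_year // 60)  # 525,600 samples/year at one-minute data

Sixty times the volume, but only more safety insight if the context actually needs that resolution.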

Evan says, "I'm very interested in further information on the use of AI to warn a field operator of hazardous situations." So preemptive rather than reactive.

So we can, and again, it depends on the context, and we've done some around the lone worker. We can set the lone worker up so we ask them to visually film the job that they're doing, or the area that they're in, and have AI highlight areas that could be a hazard, where they need to make a decision. We can set the tech up so that, for example, the rate at which the human puts their information into their handheld device can be compared with their historical rate. And if it's a bit different that day, the tech can have a conversation with the user to say, "Is everything okay? Things seem a little bit strange today." So there's a range of things that we can do there.
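
A minimal Python sketch of that handheld-device idea, comparing today's data-entry rate with the worker's own history; the threshold, data and function names are hypothetical illustrations, not from the research described.

from statistics import mean, stdev

def unusual_rate(history_per_h, today_per_h, z_threshold=2.0):
    """Flag today's entry rate if it sits more than z_threshold standard
    deviations away from this worker's historical mean."""
    mu, sigma = mean(history_per_h), stdev(history_per_h)
    if sigma == 0:
        return False  # no variation in history, nothing to compare against
    return abs(today_per_h - mu) / sigma > z_threshold

history = [12, 14, 13, 15, 12, 13, 14]   # entries/hour on past shifts
if unusual_rate(history, today_per_h=5):
    print("Is everything okay? Things seem a little bit strange today.")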

Yep. All right, let's move on to Brian. Brian asks, "How will AI overcome the unique accidents or incidents that humans can adapt to? How will we teach AI to meet these unique events?"

So the unique events are special, because they're unique, right? AI, or at least the early levels of AI, the machine learning, is best suited to the recurring events where we can train the AI. For the unique events, we've gotta set the AI up to look for the weak indicators. Usually, around system safety, they're bespoke to the system. Around the personal safety type events, they're a little bit harder to do.

Angela would like to know, "Do you have any more details on the AI research you're doing at the moment?"

I have heaps.

Well, can you give me a précis then?

So we've got about five cases. One is around putting cameras on cranes to detect people in impact zones. One's on what sort of tech we can provide the lone worker: AI-enabled connections to their own tech and to their vehicles, to provide them, the user, with feedback on whether they're entering, or are vulnerable to, a hazardous situation. We're doing AI on historical incident events, so we're interpreting huge volumes of historical data to see what that is throwing up. There's a range of them.

That could have huge ramifications for various industries, really.

Yes. Oh, yes. We'll actually start getting an integrated picture of what matters.

Last question, so let's go to Aaron's question. He says thank you for the presentation, and that it was very interesting. The last question belongs to Maureen though. No?

A critical control-

Oh Maureen, it's you he's asking, you see. "I would like to know more about your work with critical control management. How can industry 4.0 tech help or not help critical control work?"

One of the important differences in critical control work is the emphasis on identifying that controls are implemented, and that they're verified. At the moment we're verifying controls through subjective observations: yes, it looks effective; no, it doesn't. Industry 4.0 tech can get us to: what is the true effectiveness of that control? Is it 60% effective? Is it 100% effective? We can do it in real time. And we can provide that information back to those who need to see it, rather than just a point-in-time, subjective "yes, I think the control's okay."
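
To illustrate the shift from a subjective tick to a measured figure, here is a minimal Python sketch that rolls individual verification checks up into an effectiveness percentage per control; the controls, check results and 80% threshold are all hypothetical.

# Roll up individual verification checks (e.g. from sensors or inspections,
# all hypothetical here) into a per-control effectiveness percentage.

checks = {
    "edge protection": [True, True, True, False, True],
    "gas monitoring":  [True, True, True, True, True],
}

for control, results in checks.items():
    effectiveness = 100 * sum(results) / len(results)
    status = "OK" if effectiveness >= 80 else "REVIEW"
    print(f"{control}: {effectiveness:.0f}% effective [{status}]")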

Thanks everyone for your questions. Really appreciate your participating today. Maureen, just to wrap up today's session, one final take home message for our viewers.

So don't ignore the tech. It can help us with safety, but take the user-centered, evidence-based, project-type analysis to work out what's gonna work best for you.

Maureen Hassall, thank you very much for a thought provoking presentation. And thanks to everyone who tuned in today, especially those who offered insightful questions, and of course your comments. Sadly, that's all we have time for today. Safe Work Month is almost finished for this year, but there's still one more session to go: a chat with ex-Olympian Hayley Lewis. That's happening tomorrow afternoon, not to be missed. And it's not too late to register for this session, so please register: Hayley Lewis, tomorrow afternoon. Don't forget, Workplace Health and Safety Queensland has a full catalog of industry- and topic-specific video case studies, podcasts, speaker recordings, webinars and films to help you take action to improve your WHS and return to work outcomes. These valuable resources are free to download and share from worksafe.qld.gov.au. And the good news is Maureen's presentation will be online very, very shortly, just in case you missed something, or you want to go back and review the professor's teachings. Thanks again for supporting Safe Work Month everyone, and remember: work safe, home safe.