Engaging with your workers
In this webinar presentation, David Whitefield, Director of People and Risk, will apply principles of social psychology to explore social arrangements in the workplace (including culture, language and leadership). You will learn how people make sense of risk, and how you can harness and shape this understanding to more effectively engage workers and increase their participation in workplace health and safety systems.
Watch the recording of the webinar, or download the presentation (PDF, 2759.42 KB).
Download a copy of this film (MP4, 10 MB)
Musculoskeletal Disorders Symposium – Invest in your people: build your business
MC: Allicia Bailey
Speaker: David Whitefield
Welcome to today's webinar on Engaging with Workers. The topic and expert speaker for this session are brought to you by the Office of Industrial Relations, which comprises the Electrical Safety Office, Workplace Health and Safety Queensland and the Workers' Compensation Regulator. The Office of Industrial Relations is committed to driving initiatives across the whole scheme that improve safety, wellbeing and return to work outcomes for both employers and workers.
Just to let you know that today's session is a specific initiative of Workplace Health and Safety Queensland and will explore how people make sense of risk. So this presentation is actually part of an encore series from a one and a half day face-to-face session that was offered by Workplace Health and Safety Queensland initially in Brisbane in March this year.
Just to let you know who I am, my name is Allicia Bailey and I'm the Manager of Engagement Services for the Workers' Compensation Regulator and I'll be your facilitator for today's webinar.
So just before I introduce you all to David Whitefield, our expert presenter for today, I have a couple of quick tips to help you make the most of today's session. The presentation will go for approximately 50 minutes. At the end of the presentation there'll be an opportunity for David to answer some of your questions, so we really encourage you to take this opportunity. If you do have a question at any point throughout the presentation, just type it into the Q&A box in the bottom right hand corner of your screen and we'll collate these. Then we'll try and get through as many as we can at the conclusion of the webinar.
I will let you know that a copy of this webinar will be emailed through to you and it will be placed on our website.
So please share this resource with any of your networks that you feel may benefit from today's session. If anyone's experiencing any audio issues throughout the presentation, please use the chat box, also located at the bottom right hand side of the screen, and communicate your problem to us so we can have our IT expert assist you and make sure you don't miss out.
So with that being said, let's get straight into it. So this is our first webinar for the series that will be presented by David Whitefield who's the Director of People and Risk. So David runs his own consultancy business working with clients who want to focus on the human side of safety primarily through the application of social psychological principles. So David's current work is built on more than 20 years of experience in the safety industry across a wide range of roles and organisations as well as tertiary quals in behavioural science, OH&S and social psychology. In today's webinar David will apply principles of social psychology to explore social arrangements in the workplace. You guys are going to learn how people make sense of risk, how you can harness and shape this understanding to more effectively engage workers and increase their participation in workplace health and safety systems.
So let me hand you over to David. He's going to present.
Thank you very much Allicia, and welcome everybody. So let's get stuck straight into it. Essentially what I'm going to talk about comes in two aspects really.
I'm going to give a bit of context which is really about the background and it's also reflecting my biases and my world views. So I kind of look at the world and say everybody's got their own biases and I think it's important that you know where mine come from so that you can then put the content into some perspective. This obviously then allows you to reflect I guess on your own biases and how you see the world as well. So I'm going to talk about the world views of safety, risk and people and then some content. Fundamentally what we're going to get into is sort of how people tick because from my point of view engaging with people is actually really more about understanding other people. That's how I think I can engage a little better.
So let's move into it. The first bit of context I want to talk about is that, as Allicia said, I look at the world through a social psychological lens. Now social psych is fundamentally asking the question of how our social arrangements influence our judgements. I love this photo because it's a great example of a social activity going on that involves some risk taking. It potentially involves some learning, maybe some harsh lessons depending on the outcome of that jump and where that kid lands, but the fact is there's three of them involved. So none of them are acting in isolation. They're all influencing each other in subtle ways. You can probably even see yourself in that photo, whether you think you'd be the kid on the bike or the kid lying there as target practice, or maybe the one we might call the supervisor in the background who's monitoring it all.
But nothing's in isolation and nothing's neutral. So for me, and again this is my context, risk and learning is a social process. It can't happen in isolation. Even somebody who's working alone is still influenced by all the previous conversations and interactions they've had. So therefore my view is that I look at things through a human focus and I'm intentionally distinguishing there away from behaviour. So while I've got to be careful I guess about how we talk about some of this stuff I tend to look at more of a human focus than a behavioural focus. But it's not to say that behaviour based safety doesn't have some benefits. I just tend to take a broader view with it, as I said, a human focus.
The second thing is I want to introduce this idea of wicked problems. This is potentially an unusual term within the safety field, but wicked problems have actually been around forever; the terminology was introduced in the '60s and '70s by a guy called Horst Rittel and later written about by a guy called Conklin. Wicked problems exist where we have social diversity and issue fragmentation. Essentially that means lots of different people with different opinions, and I think safety fits that category pretty well.
Wicked problems don't have a solution essentially. That's the main outcome. When we use words like "What's the best thing we can do here?", "What's the safest thing?", there's no one single solution, and I think in safety, if there was a solution, surely we would have found it by now given the amount of money and time we've spent on it. The question of course is "So what?" Well, with wicked problems what we try and do is change our language to the idea of tackling, not solving. What that means is we move away from the idea that within safety we're trying to fix or solve anything, including, by the way, people, because fundamentally you can't fix people and you can't solve people. What we can do instead is tackle the issue, and you'll hear this in the public sector by the way. You'll hear politicians talk about tackling health issues. So in safety it's just a bit of a different language. Say, "Well today we're tackling maybe a heights issue," or "tackling our systems," and it just changes the whole approach and moves away from the idea that there's one single solution.
Another bit of context I want to talk about is the way I think about risk. In these terms, I'm going to talk about risk as uncertainty. Traditionally in safety I think we're used to talking about risk as likelihood and consequence, and I want to be really clear and say that it's not like that doesn't have some benefit. There are really useful times when it's good to be able to define risk as a number, but I think on a more general basis it can be useful to think about risk as uncertainty. Certainly if I was to look at the corporate world, they don't see risk as bad, but in safety we sort of demonise risk. We're trying to get rid of it all the time. Whereas if we talk about risk as uncertainty, it's just a matter of, well, how confident do you feel or how certain do you feel about it?
Now I've got a definition of risk there – "the effect of uncertainty on objectives." That definition actually comes from ISO 31000, the International Standard for Risk Management. So when I talk to organisations and they say to me, "Yeah, yeah, we've got an ISO 31000 system," but they don't have the effect of uncertainty on objectives as their definition of risk, well then they're not really using the standard. To me it's really interesting that that standard, as I said, is not so much about the consequences but about how certain you are.
The reason I use this image by the way is that little kid up there on the back of Chris, that's Luke. That's my 9-year-old son and that's his chosen sport at the moment. His Saturday sport is flying trapeze. So when we're making a decision about whether to try a new trick or whether to fly out of lines, because he flies just with the safety of the net, not the lines themselves, getting out a risk matrix doesn't really serve a purpose. What's really more relevant for me is to say "Well how confident do I feel?", "How certain is the group of people involved that they're going to make this trick work?" and then if it doesn't work how certain am I that he won't be hurt too badly? So it's just a more general way of looking at risk and again, I'll come back to that a bit later on.
Finally, the other bit of context I want to give you is to put some other things in balance. My focus is on the people side of things, which I look at at the bottom here. But this is a really classic sort of safety model saying, "To achieve a safe outcome we need to have a safe place of work, safe systems and also safe people." I want to be really clear and say in no way am I suggesting we don't need to make sure we're managing risk. In no way am I suggesting we don't need systems and procedures. I think we might have too many, but that's an outcome of extending the process of systemising. My bias is to look at the people side of it. So I just want to be really clear with everybody in saying that's what I choose to focus on, but the other two areas are vital. It is no use going into an organisation that has significant risks hurting people on a weekly basis and starting to talk about culture and leadership, because it's useless. You've got to get that stuff sorted first. So that's sort of my balance I suppose: I'm focusing on the people side of it, but I'm not suggesting the others aren't important. Just finally, reflecting on the system side of it, there's the balance around how much we know and what energy and resources we put into it. Again, I'd suggest we know a lot about risk and systems, but we don't know heaps about people.
But when we talk about systems, this is the way I look at them. So this is a bit more context for you before we get into content: procedures are simply a mean about which people deviate. I don't view the world as one where procedures are something that people follow 100% of the time. If you look at the way you drive, I doubt any of you drive at 100% compliance with the rules or the procedures the whole time. Even within your lane you constantly deviate, just hopefully not too far over it. So the concept of looking at procedures as "well, that's the way people behave" to me doesn't really fit. I look at procedures as being a guideline.
The other interesting thing is the idea of violations, and I use that because it's a term that has been used in traditional behavioural safety. Violations are a characteristic of systems. That is, they don't exist until we write down a procedure or a rule. Traditionally we tend to think about people as being the problem, as people being the violators. But there'd be no violation unless there was a procedure in the first place. So the source of the violation is not the person. It's the system. Now I'm not saying it's ethically okay for people to not follow rules, but the fundamental philosophical argument is that the problem doesn't lie with the person. The challenge is often around the systems, and of course procedures that are poorly designed or too demanding automatically create more violations. Again, I'm not distinguishing whether it's okay. It's just where the source of the problem is. So that's a bit of context for you. Risk is social, it's a wicked problem, so we tackle, not solve, and it's about uncertainty. My focus is on the people side of it.
So let's get into this idea of human focused safety. Now, I've got a bunch of questions I want to ask you – I'm just going to get my laser pointer out of the way. Of course I'm not going to hear you answer them, so please just answer them in your own head, and I'm going to also answer them for you.
So, what's interesting for me is to ask this: should we as an industry be interested in how people make sense of risk and uncertainty? Obviously my initial answer is yes, I think we should, but this is such a fundamental concept: not just knowing how people might behave, but thinking about how they think and how they make sense of risk and uncertainty, because in the end almost everything has an element of greyness and uncertainty to it. So I'm really interested, and I think we should be fascinated, with how people make sense of this uncertainty.
Should we be fascinated by how people manage to operate safely nearly all the time, where we see people as the source of safety, not the problem to be fixed? This is another aspect of the way I think we're challenged in how we deal with safety at the moment: we don't often talk about the fact that a person operates safely 99.99% of the time. We just tend to talk about when they're operating unsafely. So what's interesting is to ask, well, how do people manage to operate safely? How many problems do they solve? How many errors do they detect on their own, almost unconsciously, as I'm going to put to you a bit later on? How do they do all that all the time, or nearly all the time? Instead, in safety we just tend to focus on that negative aspect of when something goes wrong. So I think we should be fascinated: how do people manage to operate safely nearly all the time? Assuming we're interested in that, what does influence the way people think about risk? Because if we do eventually want to look at changing some behaviour, then by understanding how people think about and deal with risk and uncertainty, we can influence that behaviour.
What role does culture and leadership play? I'll answer that question, and then again, assuming we're interested, how do I go about finding out what others are thinking about risk? I'll talk about that as being sense making. So, what I'm going to talk about with you guys is the idea of conscious and unconscious thinking. I'm going to introduce the terms biases and heuristics. Some of you might have heard these terms. Interestingly, they're also used in ISO 31000: the handbook that comes with the ISO 31000 standard talks about the fact that people have biases and heuristics, and any system should address that. So this isn't really weird, odd stuff. This is actually kind of mainstream risk management. It's just that it doesn't get a lot of guernsey in safety all the time. I'm going to talk to you about priming and how basically I can do something to change your behaviour in the future. I'm going to give you a bunch of examples of experiments there. Finally I'm going to talk about engagement, leadership and culture, and I'm going to show you a model. I know I'm trying to get through a lot of information, so it is just an example of a model that ties all this together and presents a way of thinking about safety and risk.
So, let's think about this idea of conscious and unconscious thinking and two systems. So this model I'm going to present to you is based sort of roughly on the work of Daniel Kahneman. There are other models out there. This is a pretty easy way of looking at it but I do want to clarify, this isn't that you've got two parts to your brain. You've got lots of parts to your brain, just to make you feel better. It's really about two systems and two kind of ways you think and it's also probably better to think about it as a spectrum, as in it's not like a switch where you go from one or the other. You tend to move from one to the other.
So what this model talks about is that we have a slow system and a fast system, basically. Now the slow system is the one we sort of know about really. That's our conscious. That's where we do our analysis. That's where we kind of think of ourselves as being rational and systematic. It's where we calculate. When you have to do a calculation, when you have to plan out a route, when you're reading a map or processing things, when you can hear yourself thinking, that's all done with your conscious. What's interesting is that for many of us, unless we think about it, that's all we think we do. But basically there's this whole other system operating in the background, and Kahneman talks about this as being our fast system. Traditionally this is simply our unconscious. Your unconscious is incredibly fast and very efficient. You can see at the bottom of the list I've got some approximate speeds. This is for the more geeky or IT people out there: approximately, our slow system only runs at about 16 to 40 bits per second, whereas our fast system can run at up to 10 billion bits per second. Probably as an example I'll use driving, because this fits pretty well.
When you first learn to drive, in fact when you first learn to do most things, that happens over in the slow system, because we have to process and deal with things as we go. Over time, of course, you gradually start to automate these activities, and that's where we bring in this word automaticity that you'll see over in the fast list here. Now automaticity is a bit of a technical, academic term. Another way of thinking about it is autopilot; that is, we tend to learn things and then run them on autopilot in the background, and that's how we drive. The balance of course is, is that good or bad? Imagine trying to drive now like you did when you first learnt to drive. I would argue it's not possible. You just can't do it. There's too much thinking to do, and the trade-off when you're first learning to drive is you miss heaps of things. You can't pay attention to everything. As you go on autopilot, it frees up some of those resources to start paying attention to other things. The trade-off with your autopilot is you don't know what it's doing. It's working in the background, and I'm sure all of us have had the experience of arriving at a destination, thinking back on the last half hour and going, "I have no idea how I got here." Well, the good news is your unconscious got you there, and it works pretty well nearly all the time. Otherwise we'd be in a bit of trouble.
Your unconscious is arational. Now all that is is just a way of using words, and language is really important. You're going to hear me talk about this more, but if I talked about rational thinking over in the slow system, normally the opposite of rational would be 'irrational', and I don't know about you, but I don't like being called 'irrational'. So, within the social psych world, where language is important, we just use words like 'arational' because it's more neutral, the same as symmetrical and asymmetrical. So our thinking is arational. It's different, and basically everybody's thinking is arational. It's what makes life interesting and also frustrating.
In the same way, if I give you an example with automaticity, the common safety term we might use for that is complacency, but complacency has a negative connotation. We don't go to a workgroup and say, "Do you reckon you guys are being a bit complacent?" and have any of them say, "Yeah, yep. That was me." They just don't like it. But if I said, "Do you think we're running on autopilot?", it's a much more neutral term and it allows me to have more of a conversation. It's reflective of the way we actually operate in terms of our brain. Your unconscious is very efficient. It works on a system of satisficing. Again, I know that's a bit of a funny term for you guys, but satisficing just means that your unconscious doesn't wait to take in all available data. It just makes decisions when there's enough information.
So, if we use the Melbourne Cup, that was yesterday, when you were deciding on what horse to bet on, even though I know you think that happened rationally, I know you think you made a decision over in your conscious, most of the work actually happened unconsciously. But you didn't process all the available data. You just couldn't. You just basically reached a point where you went "That'll do," and that's your unconscious working for you. Again, it gets you through most of the time. Driving is another example where you don't take in all the available information to make decisions. You constantly make decisions and you're constantly self checking and detecting errors as you go. It's an incredibly efficient system. It operates really well and really fast. There are some trade-offs with it and particularly if we again use driving, we can create some challenges where we operate unconsciously and on autopilot but then potentially we occupy our conscious brain. Maybe we are on the phone for example and what that then does is impact on our error detection systems.
So these two systems operate on a spectrum all the time, and of course if we think about the workforce from a safety point of view and start asking questions like "Well, where do we operate from mostly?", I would put forward an argument that we operate mostly in our unconscious.
So, that's right, I just wanted to share this one quote. It might get a bit weird, but when you do some reading on the unconscious, one writer asks, "Where do all the things I know go when I'm not thinking about them?" I'll just leave that one with you for a couple of seconds. It gives an indication of how vast our unconscious actually is.
So let's ask this question, right: do you believe me? Now I can't see what you're thinking there, obviously, but for some of you it's "Yeah, sure. I get it." For others it can be quite challenging to suggest that most of my thinking is actually not rational and objective, that most of it's actually unconscious and automatic. So you're going to have to make up your own mind about whether you believe me or not. What's interesting is to ask some more questions on it, in terms of "Which system made that decision?" What we know from a lot of research is that even when deciding whether you believe me, that's actually still mostly an unconscious process. In fact, Malcolm Gladwell wrote a great book called Blink, where there's a lot of research suggesting that most of our decisions, even big ones like what house to buy, are made unconsciously and that you then just spend a lot of time consciously trying to justify that decision. So again, just be aware you're making decisions all the time. You're evaluating whether you believe me or not.
Even the fact that I've referenced an author influences whether you believe me or not because the fact that I've referenced an author makes me sound more believable. Now I'm not lying. I just want you to be with me there but it's just interesting the way your brain works.
So which one does most of the work? Well your unconscious does a lot of the work obviously. You just can't exist in your conscious very much and I'm going to give you some examples of that. Which one makes sense of risk? So which one's doing all the actual analysis? Which one's doing all the decision making and the balancing? Again huge amounts of it happen in the unconscious. Where do your biases and heuristics sit? They again sit in the unconscious and I'm going to give you lots of examples of those. Which one can be primed? Again your unconscious is what can be primed and I'm going to give you examples of that as well. Which one is influenced by culture and leadership? Again, it's all happening over on that right hand side there.
So that's a little introduction to these systems that are operating in your brain. Let's now get into this discussion of biases, heuristics and priming, and see if I can maybe push a little further down that path.
So as I said – sorry, I forgot about this. We're going to do a little test by the way. We're going to have a quick look at your unconscious doing some work and, as I said, don't worry. It's all going to be okay, because it's your unconscious and it can't be wrong.
So, here's a picture that I want you to look at. In looking at that photo, it should look pretty normal to you essentially, but what I'm going to do now is flick that photo over.
Now that's what that photo looks like up the right way, and hopefully there's about 160 of you going, "My God, that's a terrible photo. Please turn it off." So I'll go back to there. Now, those of you that don't believe me, just turn your head upside down and have a look at the eyes, and you'll see that in fact they are the same eyes when we turn them back there. So, what's going on here is your brain is constantly trying to make sense of the world, and those eyes are up the right way, so they look fine to you even though the picture is upside down, and when we flick it here your brain is just going, "Whoa, whoa, whoa, whoa. That's just not right."
This is another great little example of the way the brain works. What you do here, in your head (although some of you are alone, so you can say it out loud), is say the colour of the word, not the word itself, the colour. So: red, green, blue; yellow, blue, red; red, blue, green. Pretty easy, right? And then what you do is repeat it now.
So you just say the colour that the word is printed in, and what you should all feel is your brain slowing down. If you think about which system you're operating in: back here, even though I've asked you the question "What colour is that word?", you've got rules in your head about what colour that word is, and that rule matches what the word says. So your unconscious can quite easily process that. These are little micro rules. Your heuristics are working already. What's happening here is that the rule doesn't work. Your expectation of the world is not being met, and so if you think about what system you're using, you've had to drag it into your conscious system, back over to the left hand side of that earlier slide, and you feel the slow down. So it's not wrong or right, but this is a great example of how our brain is always processing information and always doing work for us.
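If you want to try this colour-word (Stroop-style) exercise on a group yourself, the mismatch can be sketched in a few lines of Python. This is purely illustrative and not part of the webinar materials; the colour set, the pairing rule and the function name are my own assumptions:

```python
# Illustrative sketch only: a tiny Stroop-style demo. Each colour word is
# printed in a mismatched ink colour using ANSI escape codes, so naming the
# ink forces slow, conscious processing while reading the word stays automatic.
ANSI = {"red": "\033[31m", "green": "\033[32m",
        "blue": "\033[34m", "yellow": "\033[33m"}
RESET = "\033[0m"

def stroop_items(words):
    """Pair each colour word with a different ink colour (the mismatched,
    'hard' condition of the test)."""
    colours = list(ANSI)
    # Shift each word one place along the colour cycle so ink never matches word.
    return [(w, colours[(colours.index(w) + 1) % len(colours)]) for w in words]

if __name__ == "__main__":
    for word, ink in stroop_items(["red", "green", "blue", "yellow"]):
        # Task for the group: say the ink colour aloud, not the printed word.
        print(f"{ANSI[ink]}{word}{RESET}")
```

Run in any terminal that supports ANSI colours; the felt slow-down when naming the ink is the drag from the fast system into the slow one described above.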
Here's another example, with those calculations. It'd be a pretty rare person that could do both of those reasonably simple additions at the same time. You just couldn't. Your conscious brain just doesn't work that way. You'd have to do one at a time and you'd have to think about each one of them. You'd just have to do that work. It wouldn't take you long, but you'd have to think about them. Whereas Steve Smith there, catching a ball for Australia at probably first slip or maybe second slip – go the Aussies tomorrow by the way, with the cricket starting – how much calculation is he doing? What level of work is his brain doing to work out the trajectory and timing of that ball? It's a huge amount of data, and those of us who did calculus at school a while ago realise how hard it is. Yet his brain can do it automatically. If we asked him how he did it, he would go, "I don't know. I just do it," but he still did it.
Now what's interesting from a safety point of view is when things go wrong we fall into this trap and we say to people "What were you thinking?" and they go "I don't know." We say "Well you must have been thinking something," and they're like "Well, I'm just not sure." Now because people don't like thinking nothing they eventually fall back to "I guess I just wasn't paying attention," and we go "Oh good. Thanks. I've got a box for that one, so I can tick it."
So this is just a simple example of how our unconscious is incredibly powerful. Now we had to learn that. So don't get me wrong. Steve Smith had to learn that but just to be clear, you have to learn how to do 37 plus 55 as well.
Okay. I've got a quick magic trick. I need you to pick a card very quickly and concentrate on that card. Now because it's something I did earlier I already know which card you're going to pick. So you guys have all got that card in your head. I want you to remember that card and now I'm going to make that card disappear.
So for most of you that card should have disappeared if the priming worked a little bit earlier on. Of course I have no way of knowing but traditionally this does work and it works really well in big groups. Now I'm going to let you stew on that. Those of you who know how it works, it's all good. But those of you who don't, I'm going to tell you a little bit later on about how that magic trick worked.
All right. So essentially here's a summary. We're always trying to make sense of our world. You can't turn off your brain. Our sense making is done mostly in our fast system. Our fast system is efficient and as the name suggests, fast. However it's also fallible. So this is one of the fundamental principles that again I look at in the world is say people are fallible. We're not perfect. So when we design our systems and procedures we kind of design them for perfect people and we expect people to be perfect, but they're just not. We all make mistakes. People are biased. They use heuristics and can be primed. So as I said, this is what I want to concentrate on now is those three areas.
So, this is again another thought exercise for yourself. Picture that line there where I've got "Below Average", "Average" and "Above Average", and think: if I asked you how good a driver you were, where would you put yourself on that line? Now, what we know is that about 83% of people put themselves above average, and basic mathematics says that doesn't really work, right? We should get about 50% of people putting themselves at average or below, and the other 50% above. This is a great example – and you can try this on a group of people by the way, you just need honest people – of what we call "optimism bias", where fundamentally we are slightly biased towards thinking that we're actually better than we are, and it exists across most areas by the way. Most of us think we're – sorry about this – smarter, taller and better looking, and probably smell better, than we actually do.
Now, is this a bad thing? I want to be really clear here. From a general psychological point of view it's actually not that bad. There were some theories earlier on that you had to have a completely realistic view of yourself, and there were some early treatments back in the '60s that tried to beat this out of people. But what we know now is that having a slight optimism bias about yourself actually gives you some psychological resilience. However, if it's very extensive, if you have a very high optimism bias, say about driving, well that may unconsciously influence the way you drive. You might think you've got better reaction times than other people, and so you might drive closer to the car in front. Most people think that they're better. Most people think they won't have an incident, and so simply telling people they will doesn't actually defeat this, because it's a completely unconscious bias.
Now, the other side of the street by the way – we don't want to be negatively biased that way either, because that's almost heading down into fear and phobia, but nor do we want a really high optimism bias. So it's not too bad to have a moderate one; the point is more that we've got it at all. I've just put up some examples of other cognitive biases. If you go to Google or Wikipedia and search "cognitive bias" you'll see there are some 200-odd biases there. They all unconsciously influence the way we see the world. We unconsciously think we're better than we are. We unconsciously, with hindsight bias, think that things wouldn't have happened to us. We unconsciously conform. There are some great experiments showing that we want to fit in, and what we do from a safety point of view of course is, you know, we go "Well if you see something that's unsafe, speak up." I get that we should, but I also understand from lots and lots of research that people are actually biased not to speak up. So when they don't, I kind of go "Well I understand that," and there might be ways of defeating it. Don't ask people to vote in a group, for example.
So for example, when a supervisor says "Okay guys, this is all common sense. Any questions?" what they're actually doing is priming, which I'm going to talk about in a minute, and they're asking people to speak up, put their hand up and say "Actually I have no common sense." So we're triggering a whole lot of unconscious stuff there. Whereas if the supervisor said "All right guys. We do this job pretty often but we're all fallible. Let's just make sure we're on the same page. So let's just talk through how we're going to do this," that's a very different way of engaging – and the big picture of course is that this webinar is about engaging. We're also biased to not see change.
So if I go back to the magic trick guys, I'm going to flick backwards here and hopefully I'm not going to break the system. So with this magic trick pick a different card, right. So pick a different card, it doesn't matter which one you pick and I'm going to make that one disappear as well. So hopefully that's disappeared. Now what you hopefully will realise is that in fact all the cards change. So none of the cards that are there appear here. I just got rid of – there's just one less card. So what that's an example of is change blindness.
We don't see a lot of change around us. Now part of that is because I've primed you because I said "Pick one card." Now of course we'd never do that in safety, would we? "Be careful of that," "Pay attention to this," "Watch out for that." So we actually prime change blindness by making people concentrate on one thing and we know that people don't see other things.
As I said, there's lots of other biases there. They're all really interesting to go and research and they all influence the way we think about things. Probably the other good example is availability bias. If I ask people "What's more dangerous between horses and sharks?" our immediate response tends to be "sharks" because it's very available to us that sharks are dangerous. Even playing some music can trigger that memory, but of course if we look at the stats horses would injure and kill many more people than sharks would, but it's very available to us through the media and so on that sharks are dangerous. So biases – we've all got them to different degrees of course and they all influence the way we think at an unconscious level.
When I talk about heuristics, now heuristics are little micro rules. You've built up all these rules about the way the world works. I've just got some examples here and most of them are quite innocent by the way. So six times six is an example of something you learnt and then it became automatic. You moved it to automaticity and you worked out a rule. So you don't have to calculate six times six. You have a rule ready to go for it. It's 36, right? Just thought I'd check with the room. A light switch is an example – I'm just checking – a light switch is an example of use of a heuristic. There's no instructions on it. You know how it works and what's interesting to give you an example of how automatic heuristics are, is the only time you notice them is when they don't meet your expectations. So you walk in a room and flick a light switch on you notice nothing. You walk in and flick it and nothing happens, that's when suddenly you're dragged back into your conscious brain and go "Hang on, what's going on?" and you've got to work it out. So these all exist there. When you drive, a brake light is an example of you know what that means. You don't have to think about it.
If I asked you to label the quadrants of that circle A to D you would all have a way of doing it but there'd be differences. If I said "Label it correctly" that tends to make people think a little bit because suddenly now you think you can get it wrong and that's an example of priming. Again you think about it but you've got a way. Most people by the way do reading order or do clockwise from either top left or top right.
Now, just to give examples of heuristics in safety, because these exist everywhere as well, I've got an example of either Bird's Pyramid or Heinrich's Pyramid, and this is a classic bit of folklore that's existed in safety where we know that incidents occur in a ratio. We know that generally you'll have more minor incidents than serious incidents, and that's what that pyramid represents. However, what then developed as a bit of a rule and a heuristic was that there's a causal link between them, and therefore if we have fewer minor incidents we'll have fewer serious incidents. This suddenly grew into a whole rule. It became the way the world works. I used to teach it years ago when I used to teach, and it's just not true. The energy sources that cause first aid injuries – the cuts and minor burns and maybe even rolled ankles – are just not the same things that cause fatalities, which are electricity and falls and vehicles. The suggestion that there's a causal link between them is the fallacy, but it's become a rule. The problem is that the heuristic drives behaviour. It justifies concentrating on glasses and gloves, because the rule is, well, you know, "Look after the pennies and the pounds will look after themselves." If we fix the little stuff it'll get rid of the big stuff – but it's just not true. It's an example of a belief that creates behaviour.
LTIFR – lost time injury frequency rate – is a great example too. That has meaning. If you ask a board or senior executive what LTIFR means, I think they think it means safety, in the way they behave. When it goes up they think we're less safe and when it goes down they think we're more safe. Most of us within health and safety know that that's complete crud, but it's an example of how we have a heuristic – an unconscious rule – about what that means, and it drives our behaviour.
AS/NZS 4801 – again, another example there: when people have got certification, does that mean you're safer? I'll leave that one out there. So lots of examples of that. That's a quick trip around biases and heuristics. Let's talk about some priming.
So priming is where we use language or a sign or an artefact that influences later behaviour, and I've just got examples of different experiments that have been done in this area. The first one up there is a picture of a towel. So what they did in a hotel in Europe – they wanted people to reuse towels, and the traditional approach of course is to put a sign up telling people to do it. That's what we do in safety, right? We tell people to do things because we think they don't know how to do it unless we tell them. So they put the traditional sign up: "Please reuse your towel to care for the environment." Then they tried two other variations. One said "We care for the environment. 72% of people who stay in this hotel reuse their towel," and another said "We care for the environment. 72% of people who stayed in this room reused their towel." What they found is that the second and third signs – and in fact the third sign even more – got the most towel reuse. So this is kind of counter-intuitive. We have a tradition where we need to tell people, but what we actually know is that unconsciously priming them with just information often gets a better result. Even this week I was paying something online, a registration for my wife, for the Psych Association. So yes, I'm married to a psychologist, which gives you a little insight into life at home, but anyway. On the renewal form it said "98% of people renew online," and then it said "It's easy. This is how to do it." So it didn't ask me or tell me to renew online, it just gave me some information. But I know that that wording is more likely to lead people to renew online. So priming is really subtle in the way it works.
They showed people a picture of a car crash down the bottom left hand corner there. They showed a group of people the same crash and then afterwards they asked them "can you please estimate the speed the vehicles were going at the time they…" and for one group they said "smashed" and the other group they said "hit" and what you can see there is that the group that were primed with the word "smash" estimated, you know, over 10 kilometres an hour faster on average than the group that were primed with the word "hit". So just that one word changes our perception of reality.
Four weeks later they came back and said "Can you remember seeing glass at the scene of that incident?" and the group that were primed with the word "smash" nearly half of them thought they saw glass and less than a quarter of the other group. Your memory is very unreliable. That's the other thing that tells you. It just isn't reliable. But one word.
So when we investigate incidents, "How fast do you think they were going?" – well, you've just primed them that they were going fast. So our language is incredibly important. Some other things – I'll be quick here because I am running out of time. In the top left box here there are eight different ways of expressing exactly the same risk. I'm just going to let everybody think about that for a little bit. So I've got some positives and negatives, some gain framing and some loss framing, and then some double negatives and double positives.
What we know is that even though it's eight different ways of expressing the same risk the way you express it gets a different outcome. So, for example gain framing a risk, so "80% chance of success" will lead to more people choosing that than if I said "There's a 20% chance of failure." More people are influenced by gain than they are by loss. Again that's interesting from a safety point of view because half the time we're trying to convince people not to do things because of the loss. It's how lotto works by the way because lotto would never work if you actually knew the risks.
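To make that equivalence concrete, here's a minimal sketch (mine, not the presenter's) showing that a gain frame and a loss frame describe exactly the same underlying probability:

```python
p_success = 0.80
p_failure = 1 - p_success  # the same risk, stated as a loss

gain_frame = f"{p_success:.0%} chance of success"
loss_frame = f"{p_failure:.0%} chance of failure"

# The wording differs, but the numbers are identical by construction.
print(gain_frame)
print(loss_frame)
```

The two strings describe one and the same risk; the framing effect lives entirely in the reader.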
In America they've done studies on hurricanes and found that female hurricanes kill more people than male hurricanes and usually that gets an interesting response in the room. Even in this room I'm getting an interesting response. I'm sitting here with three lovely ladies and they're not too happy. So what they think's happening is when a hurricane is coming and it's got a name and a rating people make an unconscious risk assessment about whether to hide in their house, whether to evacuate, whether to leave, whether to do all these things and they think that the name actually subtly affects the threat perceived. So basically what they think is more masculine names create a slightly greater threat response. Now that's very culturally specific but 'Cyclone Brutus' gives us an unconscious greater threat than 'Cyclone Allicia' for example. Yeah, good choice there. I got a positive for that one.
If you're going to meet a stranger, get them to hold a hot cup of coffee first and they'll think more of you. So they did a great experiment where they gave people coming in to meet a stranger either hot or cold drinks and the people who held – they didn't know they were being tested at that stage by the way – but people who held hot drinks gave more favourable responses to strangers.
Finally – this is a great one – they had an honesty box at an office. So I'm sure you've all got these where people have to put money in. You put a dollar in to use the pod and it was at a university. They experimented one week where they put a set of eyes, so literally just eyes, nothing else and then the next week they replaced it with a flower, a picture of a flower. They alternated each week and they found that the week where they had the eyes up they had more money put in. So they think that the eyes unconsciously prime being watched. Now I'm not suggesting putting a big set of eyes up above your workplace because putting money in an honesty box is actually more of a conscious thought. So they think that when you're weighing up the odds of "Will I get caught?" the eyes unconsciously prime. So it doesn't work all the time.
Now I probably – I don't have time really to go through this one but there's some other great stuff around cognitive dissonance and some fantastic experiments where basically people just don't behave the way we expect them to behave. So I'm going to skip that one but if you want some information on it I can send you – there's a great YouTube clip basically about people managing conflicting thoughts in their head and how it changes their behaviour.
So let's put all this together. We're getting pretty close. This is where I now want to talk about engagement, leadership and culture because this is fundamentally about engagement. This is a model which kind of ties together the different concepts where we put you at the middle here, the leader, the influencer and it's about being an influencer, not a hero. So the leadership model that I would talk about is not "You need to be the hero leader" but it's "You are always influencing no matter what you do." So it's this idea that everyone's a leader or leading without a title which there's some popular books on that.
In this model the leader understands that the people they're leading, the followers, have biases, use heuristics, operate on autopilot and they do constant sense making. So they fundamentally understand how people think and that it's arational, that they use automatic thinking. They understand that and so they're curious about it. They understand that their role is to provide clarity under uncertainty. So this idea of vision and clarity with judgement under uncertainty.
So again, I'll give you an example. A supervisor saying "Righto guys, this is just common sense. Any questions?" is providing no clarity at all when people in that group are going "I don't think I know what I'm doing." None at all. Somebody saying "We've all done this before but let's just step through the risks again" is helping to provide clarity, and the reason they're willing to do that is because they understand everyone thinks differently.
As we build up this model, the actual activity then all happens in this interaction between the leader and the follower. This is where the language comes into it. The language is what we say, but from a social psych point of view we also look at the idea of discourse, which is what's not said. So once again – hang on, I'll be a bit more controversial now. When a CEO stands up in a room and says "Anybody who doesn't think that we can prevent all incidents, I want them to stand up now," what they're potentially trying to do is be, I think, inspiring and potentially motivational. But if we think about where all the power is going, what that CEO's doing in my opinion is sucking all the power in towards them, because essentially if somebody stands up I don't think they're going to feel very empowered, and in some organisations they won't work there very long.
So there's a huge amount of unconscious priming and framing that happens in all these conversations. When you use language like "I" it's singular. When you use language like "we" and "us" it's a more joined language. So there's always a trajectory to the language. So again, one of the issues I personally have with zero harm is that there's a trajectory of perfection. To achieve zero harm we have to be perfect, and we're not. It's just heading in the wrong direction.
You also look at the ethics. So when we're asking questions around "Why does the leader want them to stand up?" is this because they're trying to identify people who don't agree with them? Or is it because their ethics are "We want to help people in the end"? So these are some bigger questions to ask.
So when we use language, when we "tell" versus "tell me about", so "I need to tell you what to do," versus "Tell me about the job you're doing today?", "Tell me about what happens?" leaders also understand that the first question they ask primes. So when they get to site and always say "How's the schedule guys?", "How's the schedule guys?" they're priming that at a later stage the schedule is more important. I know they might not mean it, but they just understand that that happens. So they might say "Hey guys, how are you doing?" and then change their question every time. When you say "Whatever it takes" people might do whatever it takes.
So just being aware of the language there. When you use "I" and "you" versus "we". Even life saving rules I've got there as an example, where a lot of organisations I work with love having life saving rules, but when I ask the workforce "What are the consequences of breaching a life saving rule?" most of them tend to say "discipline." Now I say that's because it's in the words. So the words prime it. Although it says "life saving" the last word is "rules". So we're actually telling them this is about a rule and not about life saving. It's in the language we use. If you're going to have them, just call them "life savers". Don't use the word "rule" – and I think don't have them anyway, but that's my opinion.
We want to try and avoid absolutes, because binary language drives some unusual power. If we say "What's the answer?" or "That's wrong" it implies they have to be right or wrong. It implies there's one or the other. In classrooms, for example, they now try and encourage teachers to say "What's an answer?" not "What's the answer?" It's a subtle bit of language but it changes the whole power.
As I said, complacency and common sense – these have a negative effect in safety. So I tend not to use them. I might talk about being on autopilot instead and change the whole concept of common sense. I just wouldn't use it normally. Even "What went wrong?" versus "What's the story?" – our investigations are primed on blame. Even if you think they're not, when you're looking for fault it's primed on what went wrong. If your investigation is based just on "What's the story?" then it's a more open investigation process.
Now of course, just to put all that together, that all happens in a culture. That all happens within multiple subcultures within the business, and culture is complex. Culture's not simply "the way we do things around here." It is far more complex than that, it's multilayered and it's a wicked problem in itself. So that's a whole other topic – we could easily spend a day or two talking just about culture – but again I want to say this model works differently in different settings. If you look at kids, kids behave differently at home than for other people, yet they're the same kid. So the environment they're in changes their behaviour, and the same thing happens for you at work. You behave differently in sporting environments and at home and so on. I'm just pointing out how complex this actually is and how careful we've got to be about simplifying behavioural approaches and coming up with "Well, people are wrong or right." People are actually really complex and 99.9% of the time actually operate safely.
So, in summary: risk is social. So is learning. It's human focused. Safety is a wicked problem, so we can only tackle it. For me risk is about uncertainty, not a number. I'm not saying there isn't sometimes some value in having a number, but more often than not I think uncertainty is an easier way to think about it. We still need to manage risks and have systems. We have little to no awareness of most of our thinking. So I'm sorry about that, those of you who like to think you're conscious. It's easy to accept for other people by the way – "What were you thinking?" is the code for that – but it applies to you as well. We're biased and use heuristics and that's totally okay, because to suggest that someone else's bias is wrong makes them wrong. So biases and heuristics cannot be wrong.
We can be primed by language, artefacts and our environment. So nothing is neutral and everything has meaning. If you want people to have conversations at work that's great. If you decide to measure it and set KPIs that will influence it and it will change it, there's always a trade-off and leadership and culture influence all the time no matter what you do.
So this is really my last sort of slide here. What would it be like living in a world filled with uncertainty and greyness? What I hope is you've got to the point where you go "That seems to be our world." We live in a world filled with uncertainty and greyness. There are really no absolutes as much as we'd like them. What would it be like if we saw people – there's a typo there – what would it be like if we saw people as the source of safety rather than a problem to be fixed or controlled? Imagine that. Imagine if we saw our workforce as the centre of our safe operations? It's just a bit of a radical change isn't it, for some safety.
How would I engage with people if I knew that nothing is neutral and everything matters? So what would I think about if I realised that every word I use and every approach I take actually does influence the outcome I get? So how would I approach them and what sort of questions would I ask? Now those are some broader questions, but I guess what I'd say is start with being curious, not judgemental. So I try and do this – and it's just as hard for me as for everyone else – when there's an incident I'm always like "Wow, I wonder what they were thinking?", "Isn't that interesting?", "What was going on there?" rather than "I can't believe they did that. That was wrong."
I try and start with observing and listening and questioning and talking and connecting and wherever possible I'm like "Wow, tell me what's going on?" and I'm absolutely fascinated in what's happening there. I'm showing genuine curiosity and above all else, I'm fundamentally trying to seek to understand how people make sense of risk by understanding that people have their own way of making sense of risk.
So, I've probably gone a little bit over but at least I'm still under my hour and there's a little bit of time for questions. I think – do you want me to flick over or am I about to lose control here? Yes, I'll flick over there.
So thank you very much for sticking with us on that, and I'm going to hand back over to, I think, Allicia, who may have some questions.
Thanks so much David. I am mindful of the time guys. So I think we might just take the couple of questions that we had and shoot them through to you if that's okay. Everyone as you can see on the screen David's email is there. So if you wanted to contact him directly he's kindly provided those details to you.
So, I just wanted to wrap up and conclude this webinar. We greatly appreciate your participation throughout.
You also have the opportunity to email us directly at email@example.com If you do have any after-thought questions that you wanted us to address we're happy to do that and just to let you know that – to reiterate I guess more so that this webinar will be placed on the WorkSafe Queensland website if you do want to access it at a later time.
So, David, thank you so much again. I think you provided incredible insight into our stakeholders and I just wanted to let you guys know that there are a couple more webinars in this encore series that will be available I guess in the next couple of weeks.
So the details of those are on the screen and registration is open. So feel free to jump online and secure your spot in those over the next two weeks or so. So, on behalf of the Workers' Compensation Regulator and Workplace Health and Safety Queensland we'd like to thank you guys so much for participating. We do really appreciate your feedback. So you will be shot through to a quick survey. We really want you guys to tell us what you want for future webinars. So if you do have two seconds please tell us what you think.
Thanks so much and bye for now.
[End of Transcript]
- Last updated: 12 January 2018