Emergency Stop! – Failure to Connect! – Automatic Failure Notification! – Transaction Failed! – Critical Failure Imminent! – Your Car Will Self-Destruct in 5 Seconds! … No matter how the message is delivered, these are the kinds of obstacles we may face on any given day, well, most of us anyway. But what is it about these messages that makes us get back up and keep going? This is somewhat of a follow-up to my last two pieces: What went wrong and why I was responsible.
You see, yes, something went wrong on the project I was working on, and I was responsible. Yes, I did learn from it, and no, the project didn’t self-destruct. In fact, quite the opposite happened: we kept going. That little incident was only the beginning of a long string of events that brings us up to date on the project. After we recovered from my little incident, we went to start the project again, and this time we literally blew up a piece of equipment. No fire and brimstone, but it ruptured in a fashion that made an incredible mess, was extremely expensive, and added further delays; thankfully, no one was hurt. This incident could not have been foreseen; it was the result of an unexpected characteristic of the project site we were working at.
We performed an emergency shutdown, we cleaned up the mess, and of course we had a big meeting on what to do next. Then we went out to determine why the project responded in this manner and, under these new circumstances, what we could still hope to achieve. After a very short while we established new test parameters and brought out new equipment.
So, we went for startup a third time and Vroom! We were going! Off to the races! After startup we were taking readings across the site to establish a baseline. When we got to the receiving end of the project, we noticed that it was rather quiet, and it quickly sunk in that the new equipment had shut itself down. We scrambled (again) to restart the equipment while at the same time providing some bypass relief on the system so as not to overload it. Well, when that relief was added, Blammo! The line connecting the producing side to the receiving side overloaded and burst open. I think I literally almost cried. But there’s no crying in baseball! So instead, with a quick phone call to the producing side of the project and some serious get-up-and-go, we were able to keep the production going by diverting it to a containment area. Another quick phone call to get the appropriate personnel to the project site, some quick hands by some right-proper technicians, and within an hour and a half we were back in business.
The clouds appeared to part at this point, metaphorically; it was actually ridiculously sunny. We brought the system slowly back online and did a full inspection of the whole system; all appeared to be running very well. I went home that night feeling better but still apprehensive. It was a good evening till: Rring! Rring! It was the evening site supervisor: we were having issues with the new piece of equipment recently brought out, the same one that had shut down of its own volition earlier that day. Now a little more used to this issue the second time through, we troubleshot the problem into the night. The night supervisor troubleshot it much more than I did and was doing his best pretty much all night long to keep the project quasi-going. When this happens, production is reduced by half, which could affect the outcome of the test, the all-important data. In the morning we sent the technicians out again and, after some investigation, thought we had gotten to the root of the problem. It was a simple fix, and so again we were back up and running.
At this point we were good for about two hours. Yep, two hours before that same piece of equipment shut down. In fact, as I’m writing this post, I’m on and off the phone with the night supervisor and the equipment provider, who is in turn talking to a technician. Apparently the problem(s) we thought we had addressed were not in fact the problem after all. The result of this project/test relies heavily on us not shutting down once we start, and varying degrees of production also contaminate the data gathered for final analysis. This is why there is such a strong push to keep the project going even under duress, under extenuating circumstances, even when stuff goes Ka Ka Ka Boom!
The point of all this is: what is it that keeps those involved going after all of this? I have the pleasure of working with some seriously resilient people. In my seven years here I don’t think I’ve ever heard the words “Nope, we can’t do that,” or “That’s going to be too difficult.” Every time something is presented to these people, no matter how daunting, they find a way to get it done. But what’s more impressive is how they perform when things go wrong, which has been a lot lately. They don’t quit; they find a way. It doesn’t matter the hours or the amount of work required; they are head down, plowing through the difficulties: solution seekers.
So why? Many of these people aren’t even in my department, or they are subcontracted to provide help where needed. And still the level of dedication and commitment is truly remarkable. I believe a couple of things contribute to this; the first is ownership. As I discussed in the last post, commitment and engagement come from understanding, from knowing your part and how it fits into the larger puzzle. That understanding comes from communicating: pausing long enough to answer questions and going out of your way to make sure each player understands the project from beginning to end, and that their little piece is critical to the completion of the whole.
The second thing, I believe, is trust. When an individual is tasked with their part, after you have explained, communicated, and answered questions, leave them alone. You must trust that they will work in a timely fashion until the job is complete, no matter the circumstances the work has to be completed under. You must trust that if they do run into a question, or an obstacle they can’t quite get around, they will communicate that to you and in return trust you as the leader to assist in a solution. It’s critical that you involve them in any and every solution. Solicit their advice and give them room to operate.
Expert’s Input: In a piece by Jessie Sholl, she gives an example of incredible resiliency along with The 5 Best Ways to Build Resiliency. It’s a very interesting read and I’ll leave most of it to you, but what caught me by surprise is what she says about point of view. Specifically: “Resilient people are adept at seeing things from another person’s point of view.” It makes complete sense, but I had never considered it. Why? According to a study cited in the piece: “When we empathize with others, we feel less alone and less entrenched in pain. As a result, we recover faster.” So it’s perspective, something that is so much of a leader’s responsibility; but to have a team built with those who can really consider the flip side of the coin is invaluable.
In the end, you may not even have people with a resilient temperament, but when they are dropped into a circumstance that, first, they understand, in which, second, they see they are relied upon, and in which, third, they are trusted, I’ve yet to see the individual that doesn’t rise to the challenge. To wrap this up, please remember that every day your team has a new opportunity to prove themselves. If they are trusted, they can lose that trust based on what happens that day. But until that moment comes, they get to keep that trust, so give it freely. On the other side, you may have those on your team whom you don’t trust, and each new day they have an opportunity to gain your trust. So give those opportunities just as freely as you give trust.