Trust, Technology and Being Human
An engineering workflow needs to take human nature and the pros and cons of software recommendations into account.
April 1, 2019
My wife called to say the car’s check engine light was on. No strange sounds, no strange smells, no odd vibrations, no smoke, no fire—so being a fallible human, I told her it would probably be OK for her to finish shopping and drive the short distance back home. This was during one of the many polar-vortex-induced cold snaps, so, also being a lazy human who doesn’t like the cold, I didn’t even open the hood when she returned. I dropped it off at the mechanic the next day.
“You’re not going to believe this,” my mechanic said. “When we popped open the hood, a rabbit hopped out.” Apparently the guy lifting the hood was startled enough by the stowaway that he screamed when it appeared.
We shared a laugh until he said, “It looks like the rabbit chewed about $400 worth of wiring.” Suddenly the bunny wasn’t so funny, at least not to me.
Ignorance Isn’t Bliss
Had I asked my wife to drive home right away and braved the cold night to investigate the check engine light, perhaps Bugs would have chomped through fewer wires as he was chauffeured about town. I can’t help but wonder how a future autonomous vehicle would react to a rabbit in the engine compartment. Would a rabbit-shaped idiot light blink? Would a soothing computer-generated voice warn that an unauthorized life form had been detected? No, but hopefully the car will be smart enough not to delay rectifying the situation the way I did.
That’s the great thing about technology that makes suggestions based on available data rather than a human’s gut reaction. However, input data and algorithms aren’t perfect. A human is still required to evaluate and act on prompts—whether it’s a generative design suggestion, an additive manufacturing part orientation suggestion or a red light suggesting you should check your engine. Ignored Leporidae notwithstanding, that’s a good thing when it comes to engineering workflows.
Looking back on my decision-making process, I can pinpoint a number of factors that led me to react the way I did. It had been a long day, it was cold, it was dark, and the chances of me figuring out what was wrong with the engine beyond its basic needs for cooling and filtration were slim. Perhaps most importantly, though, I now realize that I subconsciously distrust automotive warning lights thanks to past experiences with overly sensitive tire pressure monitoring systems. I’m not alone. A friend of mine has been driving with his check engine light on for years. He might have a whole family of rabbits under the hood.
A Balancing Act
During DE’s recent computer-aided optimization Hot Seat webcast (available on-demand), the audience wanted us to ask our panelists about automated technologies taking away jobs from design engineers. The panelists agreed that their software was intended to help human experts make decisions, not make decisions for them.
With our phone apps pestering us to update, our smart watches telling us to stand up after we’ve gone to bed, and typing auto-corrections still highly questionable, the more immediate concern may be people ignoring data-based assistance. Engineering software vendors have the unenviable task of providing the right suggestions at the right time based on complex inputs. A few bad calls or too many nuisance notifications, and automated advice may be ignored and alerts left unheeded.
An engineering workflow needs to take human nature and the pros and cons of software recommendations into account. We’re all trying to do more in less time, but we can’t be lulled into over-reliance on automation, nor can we afford to ignore data-based suggestions.
About the Author
Jamie Gooch is the former editorial director of Digital Engineering.