July 2, 2019
The Ford Pinto case, often cited in business ethics classes, is a piece of automotive history the carmakers would like to forget—but is important for consumers to remember. The car, as it turned out, had a fatal design flaw. Though technically it adhered to the industry standards at the time, the fuel tank’s position made it prone to ruptures and leakage in rear-end collisions.
The defect became public in two landmark lawsuits: Grimshaw v. Ford Motor Company and State of Indiana v. Ford Motor Company. Subsequently 177 more cases were filed.
Tragic in themselves for the loss of lives and injuries involved, the cases also exposed something else. “Ford knows the Pinto is a firetrap … Ford waited eight years [to address the defect] because its internal ‘cost-benefit analysis,’ which places a dollar value on human life, said it wasn’t profitable to make the changes sooner,” reports Mark Dowie in his Pulitzer-winning exposé “Pinto Madness” (Mother Jones, September 1977).
It’s one thing to miss a mode of failure, but quite another to find it and then miss the chance to fix it. For its economics-driven decision, the carmaker ultimately paid a much steeper cost in the erosion of consumer trust, negative brand image and punitive damages in litigations.
The technologies to identify and address different failure modes have become much more robust, especially in the simulation-driven automotive sector. But the tug-of-war between profit margins and sound design decisions continues, and the process of eliminating blind spots in systems engineering remains incomplete. Beyond the automotive sector, examples include the Samsung Galaxy Fold’s cracked screen and the grounded Boeing 737 Max. Potentially brand-breaking product failures continue to make headlines, suggesting that, somewhere in the design processes and decision-making practices, certain essential safeguards are still missing, leaving consumers vulnerable to a repeat of the Pinto madness.
Production versus Recalls
According to Allianz Global Corporate & Specialty (AGCS), a corporate insurance carrier operating in 34 countries, “More cars were recalled than ever before in the U.S. during 2016—the third year in a row this phenomenon has occurred. According to the National Highway Traffic Safety Administration (NHTSA), 53.2 million vehicles had to be returned—over three times as many as during 2012 (16.5 million). This trend is mirrored across Europe.”
Data from the International Organization of Motor Vehicle Manufacturers (OICA) shows that, over the last decade, U.S. car production for passenger and commercial models increased 31%. In the same period, worldwide car production increased 35%. A similar rise shows up in the number of compliance- and defect-associated recalls issued by the NHTSA: the U.S. agency’s published numbers indicate a 33% increase in recalls between 2008 and 2018.
Carmakers sometimes voluntarily recall their products when they discover risky flaws and defects; NHTSA describes these as “uninfluenced recalls.” Carmakers may also be ordered to issue a recall by the NHTSA, or prompted to do so when the agency launches an investigation; NHTSA calls these “influenced recalls.” Data from NHTSA’s 2018 annual report, listing recalls from 1999 to 2018, shows a rise in voluntary recalls and a decrease in influenced recalls.
“Tougher regulation and harsher penalties, the rise of large multinational corporations and increasingly complex and consolidated supply chains, the socio-economic landscape, increasing threat of litigation, technological advances in product testing, as well as heightened consumer awareness—and growing use of social media,” says AGCS, “are just some of the contributing factors, which means product recall exposures have increased significantly over the past decade.”
The Promise of Systems Engineering
Keith Meintjes, a CIMdata fellow and executive consultant, is a veteran of the auto industry. Before becoming a consultant and industry analyst, he spent three decades at GM as a simulation manager, and then managed the automaker’s global CAE IT infrastructure. For him, many of the headline-making product disasters can be summed up as the failure to identify a failure mode.
“We also have a failure to deliver on the promises of systems engineering,” says Meintjes. “I think proper systems engineering would have allowed us to identify and avoid many of these failure modes.”
With systems engineering, products are simulated and tested with all the disparate components included at the systems level. That means testing is done with mechanical, electrical and software components all in the loop. The last two pieces—electronics and software—take on more critical roles as Internet of Things (IoT) devices increasingly rely on sensors and software to trigger and execute functions powered by chips and processors. Some failure modes may not be uncovered during an individual component’s testing, because they are triggered by the interplay between the electromechanical parts and the control software. Systems-level simulation and testing could expose such failure modes.
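As a toy illustration of why this matters, the sketch below (all names and numbers are hypothetical, not from any vendor tool) couples a trivial thermal plant with a trivial bang-bang software controller. Each piece behaves correctly in isolation; only the coupled simulation reveals the overshoot caused by a delayed sensor reading:

```python
# Toy closed-loop sketch (all names and numbers hypothetical): a failure mode
# that appears only when the software controller and the physical plant are
# simulated together.

def plant_step(temp, heater_on, dt=1.0):
    """Trivial thermal plant: temperature rises while the heater is on, decays otherwise."""
    return temp + (2.0 * dt if heater_on else -0.5 * dt)

def controller(temp, setpoint=70.0):
    """Trivial bang-bang software controller: heater on below the setpoint."""
    return temp < setpoint

def closed_loop_max_temp(sensor_delay_steps, steps=200):
    """Couple the two and return the peak temperature. A delayed sensor reading,
    a purely systems-level interaction, lets the plant overshoot far beyond
    anything either component's isolated test would show."""
    temp, history = 20.0, [20.0]
    for t in range(steps):
        stale = history[max(0, t - sensor_delay_steps)]  # controller sees old data
        temp = plant_step(temp, controller(stale))
        history.append(temp)
    return max(history)

# No sensor delay: the loop regulates close to the 70-degree setpoint.
# With a 20-step delay, the heater stays on long after the setpoint is crossed.
print(closed_loop_max_temp(0), closed_loop_max_temp(20))
```

Unit tests of `plant_step` and `controller` alone would pass; only the coupled run exposes the overshoot, which is exactly the gap systems-level simulation is meant to close.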
This June The New York Times published an article examining the root causes of the two fatal Boeing 737 Max crashes (“Boeing Built Deadly Assumptions Into 737 Max, Blind to a Late Design Change,” Jack Nicas, Natalie Kitroeff, David Gelles, James Glanz, June 1, 2019). “The current and former employees point to the single, fateful decision to change the system, which led to a series of design mistakes and regulatory oversights,” the reporters write. “As Boeing rushed to get the plane done, many of the employees say, they didn’t recognize the importance of the decision. They described a compartmentalized approach, each of them focusing on a small part of the plane. The process left them without a complete view of a critical and ultimately dangerous system.”
Systems engineering as a concept has been around for quite some time, but most of the software supporting the process began to appear only about two decades ago. Though the engineering and manufacturing communities have shown growing interest in such tools, they have yet to embrace them widely.
The reason? “It’s the complexity of the tools,” says Meintjes. “Tools like SysML [a general-purpose systems modeling language] are not executable, very difficult to use and require a large number of people at the end-user companies to understand it.”
Software for MCAD, ECAD and simulation addresses subassembly and electronic component testing and simulation, but not at the systems level. Tools like SysML give users a way to map out the interconnections between various components, but the diagram works more as a visual representation, less as an executable digital replica of the system. Vendors such as PTC, Dassault Systèmes, Siemens PLM Software and others have begun to introduce digital twin solutions as a way to fill the gap in systems-level design.
Do You Know What to Look For?
In 2007, NHTSA investigated two separate crashes involving a Lexus and a Camry. They both seemed to stem from stuck pedals that robbed the drivers of vehicle control. At the conclusion of its investigation, the U.S. safety agency put the blame on an all-weather floor mat, which caused the pedal to stick.
In its public records of the incidents, the U.S. Department of Transportation states, “The two mechanical safety defects identified by NHTSA more than a year ago—‘sticking’ accelerator pedals and a design flaw that enabled accelerator pedals to become trapped by floor mats—remain the only known causes for these kinds of unsafe unintended acceleration incidents.” Following the findings, carmaker Toyota recalled 8 million vehicles in the U.S. to address the floor mat issue.
Under normal circumstances, CAE engineers might not have considered such a mode of failure as a possibility to verify and test. Even in systems engineering, it is doubtful those in charge would have thought of adding the dimension, texture and orientation of the floor mat into the overall simulation scheme to see if a problem could occur.
“Unless you are looking for this mode of failure and you specifically model it for testing, there’s no way you would have captured it,” says Marc Halpern, VP analyst, Gartner. “It would be a good safety exercise for carmakers to look at the various modes of failures listed at the NHTSA’s site, then simulate them with their own products. And if you’re a medical device maker, you should do the same by looking at public recall data from the FDA.”
Take, for example, the well-publicized case of faulty Takata airbag inflators. At least 24 deaths and 300 injuries worldwide have been linked to the defect, which triggered the largest automotive recall campaign ever. NHTSA says the root cause was the use of ammonium nitrate-based propellant in the inflators without a chemical drying agent. Related settlements have cost automakers more than $1 billion so far.
People are generally more accepting of accidents that are caused by human error than those caused by errors in engineered systems, says John Browne, the author of “Make, Think, Imagine: Engineering the Future of Civilization” (August 2019, Pegasus Books). “Experts I spoke to while researching my book estimate that the public is only likely to welcome autonomous cars onto our roads when they are around 1,000 times safer than human drivers,” he says. “This raises the bar considerably for safety and calls into question the validity of today’s dominant testing regimes.”
With higher expectations comes the need for new methods to detect and prevent failures in the era of autonomous cars. “New proposals are being made for better ways forward, including Intel Mobileye’s intention to build a logical mathematical framework that defines what situations on the road are dangerous, and ensures that autonomous automobiles will never make decisions that cause those situations to arise. It remains to be seen whether this promising idea will work in practice,” Browne adds.
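Mobileye has published this idea as Responsibility-Sensitive Safety (RSS). One of its rules, the minimum safe longitudinal following distance, can be sketched as follows; the formula matches the published RSS definition, but the parameter values here are illustrative assumptions, not calibrated figures:

```python
# A sketch of the kind of rule such a framework formalizes. The formula follows
# the published RSS safe longitudinal distance; the default parameter values
# below are illustrative assumptions, not Mobileye's calibrated figures.

def rss_safe_longitudinal_distance(v_rear, v_front,
                                   response_time=0.5,   # rear car's reaction time (s)
                                   a_max_accel=3.0,     # worst-case rear acceleration (m/s^2)
                                   a_min_brake=4.0,     # guaranteed rear braking (m/s^2)
                                   a_max_brake=8.0):    # worst-case front braking (m/s^2)
    """Minimum gap (m) so the rear car can always stop without hitting the
    front car, even if the front car brakes as hard as physically possible."""
    v_after_response = v_rear + response_time * a_max_accel
    d = (v_rear * response_time
         + 0.5 * a_max_accel * response_time ** 2
         + v_after_response ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)

# Both cars at highway speed (30 m/s): the rear car must keep roughly an
# 80 m gap under these assumed parameters.
print(round(rss_safe_longitudinal_distance(30.0, 30.0), 1))
```

The point of such a formula is that it is provable rather than statistical: if every vehicle maintains the computed gap, the rule guarantees the rear car is never the cause of a rear-end collision.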
Calling for Robust Design
Once considered a good cautionary measure and safeguard against unanticipated failure modes, overengineering is now a dirty word, a design sin. In the era of lightweighting and fuel economy, the preference is to design products, parts and components to be as light and thin as possible. But could this trend be making products vulnerable to unforeseen failures?
Lightweighting by itself is not an issue, assuming it’s looked at as part of the whole system when optimizing, says Halpern.
“The optimum should not be set at the cliff’s edge of a failure,” says Meintjes. “The solution is robust design, which ensures the product won’t fail due to variable usage or manufacturing quality. In addition, it should also ensure the product’s responses don’t change drastically due to variations in usage or operating conditions.”
In other words, the benchmark for optimization should be much more than the product’s ability to merely survive normal wear and tear and routine use. The so-called “optimal design” should retain sufficient structural muscles to survive occasional misuses, accidents and failure modes yet to be uncovered.
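The difference between an optimum set at the cliff’s edge and a robust one can be seen in a toy Monte Carlo sketch, where every number is hypothetical:

```python
import random

# Toy robust-design illustration (all numbers hypothetical): choosing a panel
# thickness. Thinner is lighter, but stress scales inversely with thickness,
# and real-world loads scatter around the nominal value.

STRESS_LIMIT = 100.0

def fails(thickness, load):
    """Toy model: stress ~ load / thickness; failure when stress exceeds the limit."""
    return load / thickness > STRESS_LIMIT

def failure_rate(thickness, trials=10_000, seed=0):
    """Monte Carlo check: vary the load +/-20% around a nominal 100 and count failures."""
    rng = random.Random(seed)
    nominal_load = 100.0
    hits = sum(fails(thickness, rng.uniform(0.8, 1.2) * nominal_load)
               for _ in range(trials))
    return hits / trials

# "Optimal" at the nominal load: thickness 1.0 puts stress exactly at the limit,
# so roughly half of the scattered loads push it over the cliff.
cliff_edge = failure_rate(1.0)
# A 25% thickness margin absorbs the entire +/-20% load scatter.
robust = failure_rate(1.25)
print(cliff_edge, robust)
```

The nominal design passes the deterministic check perfectly yet fails about half the time once real variation enters; the slightly heavier design never fails. That gap is what robust design methods are meant to quantify.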
Digital Technologies, Non-Digital Culture
Joseph Anderson, president, Institute for Process Excellence (IpX), believes CM2 makes a huge difference in reducing product failures. In the business management lexicon, CM2 means configuration management at the enterprise level rather than the subgroup or workgroup level. Whereas the engineering-centric vantage point focuses on the product, the CM2-empowered enterprise vantage point encompasses product, system and services.
Reflecting on the recent high-profile recall cases, he says: “A common theme is that they stem from a lack of enterprise change management and configuration management processes. The majority of these companies still work in silos; they tend to view things from a silo and legacy vantage point.”
Part of CM2 is knowledge management, a critical mission for the manufacturing sector where the retiring veterans have intuitions and knowledge not formally recorded in any enterprise resource planning (ERP), customer relationship management (CRM), or product lifecycle management (PLM) systems.
“The day-to-day practical concerns of a business are learned over time,” says Anderson. “You have to capture that knowledge and transfer it to the up-and-coming workforce. New employees may have the technical know-how, but they lack the experience. That could lead to recognizing an issue only when the product reaches the field or catching it only in the nick of time.”
Catching a fatal design flaw in the nick of time usually presents a dilemma. With tooling and molds already fixed and orders waiting to be fulfilled, implementing a remedy comes at considerable cost and penalties. This is where the ethics of a manufacturer will be put to the test: Release a flawed product and hope that it won’t fail in the field? Or fix it at a high cost?
“Releasing something dangerous is always unacceptable,” says Browne. “It is vital that engineers consider both the intended and unintended consequences of the products they create. To create a world without risk would be impossible and counterproductive, but there is a great responsibility to manage that risk.”
Crash and Burn in the IoT Era
In 2016, reported cases of battery fire and explosion in the Samsung Galaxy Note 7 prompted Samsung to suspend sales of the model and recall it. The cellphone maker’s remedy was to replace the units the consumers had turned in with new units that supposedly addressed the battery hazard. But the replacement units themselves continued to exhibit a tendency to catch fire, prompting a second wave of recall. In the same year, Sony recalled its VAIO laptops and Hoverboard LLC recalled its self-balancing scooter/hoverboard. In both cases, the culprit was the fire hazard of the lithium-ion battery pack.
Collectively, the cases were a wakeup call for the Consumer Product Safety Commission (CPSC), which regulates and monitors consumer products, ranging from gym equipment and home furniture to electronic toys and communication devices.
In his public statement summing up the Samsung Galaxy Note 7 recall, CPSC Chairman Elliot F. Kaye noted: “In the aftermath of massive hoverboard and smartphone battery recalls, we added to the CPSC’s 2017 operating plan a project for our technical staff to assess the state of high-density battery technology, innovations in the marketplace, gaps in safety standards and the research and regulatory activities in other countries.”
“The lithium-ion battery has delivered many benefits to us, but it’s also tricky to manage,” says Stephen Bailey, director of strategic marketing, Validation Systems Division, at software provider Mentor, a Siemens business division. “You have to make sure the cell doesn’t get damaged. You also need to prevent the cell from overheating. You have to figure out how it gets charged and how the heat dissipates.”
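Bailey’s checklist (cell damage, overheating, charging, heat dissipation) can be caricatured with a simple lumped thermal model. The sketch below is purely illustrative; none of the constants come from a real battery datasheet:

```python
# Hypothetical lumped thermal model of a cell during fast charging; every
# constant below is an illustrative assumption, not a real datasheet value.

def charge_temperature_profile(charge_current_a,
                               internal_resistance_ohm=0.06,
                               thermal_mass_j_per_c=50.0,
                               dissipation_w_per_c=0.05,
                               ambient_c=25.0,
                               minutes=120):
    """Cell temperature over time: I^2*R self-heating vs. convective cooling,
    integrated with simple 1-second Euler steps."""
    temp = ambient_c
    profile = []
    for _ in range(minutes * 60):
        heat_in = charge_current_a ** 2 * internal_resistance_ohm   # Joule heating (W)
        heat_out = dissipation_w_per_c * (temp - ambient_c)         # cooling (W)
        temp += (heat_in - heat_out) / thermal_mass_j_per_c
        profile.append(temp)
    return profile

# Doubling or tripling the charge current grows I^2*R heating quadratically:
# the same cell that stays near ambient at 2 A can run past a 60 C safety
# threshold at 6 A, unless the thermal design dissipates the extra heat.
peak_2a = max(charge_temperature_profile(2.0))
peak_6a = max(charge_temperature_profile(6.0))
print(round(peak_2a, 1), round(peak_6a, 1))
```

The quadratic dependence of heating on current is why charge rate, cell placement and heat paths have to be designed together, the balancing act Bailey describes.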
In an IoT device’s small form factor, the cell sits close—perhaps too close, in some cases—to nearby electronic components, creating a fire hazard during charging and use. These issues will likely intensify with the arrival of 5G, which demands more power for connected activities running in the background even when the device seems idle.
Just as the Galaxy Note 7 mishap was fading from tech consumers’ memory, Samsung once again ended up under the glaring spotlight of negative press. The Samsung Galaxy Fold, unveiled this February, quite literally cracked when folded.
In the review aptly titled “Broken Dream,” The Verge writer Dieter Bohn quipped, “The future is very fragile.”
“Smartphone makers need to test their products under all the operating conditions, [and] do destructive tests to find out where the limits are,” suggests Bailey. “But the pressure to be the first to go to market with a new kind of product is huge, so some make the mistake of rushing a product to the market. Besides, testing a game-changing product is difficult.”
To uncover all the possible failure modes in an innovative product, a smartphone maker should let beta testers use early units in their daily routines for a good amount of time. But in the era of Instagram and Facebook live feeds, such a test comes with the risk of the prototype’s form factor, functions and even design details ending up on social media.
Connect at Your Own Risk
But hazards in the IoT era are not restricted to poor design and overheating batteries. Due to their connected nature, the devices invariably invite cyberattacks. In its June 2018 comments submitted to the CPSC, the Center for Democracy and Technology (CDT) writes: “While there is no doubt that the IoT presents enormous value, poorly designed and inadequately secured devices can present risks to consumers’ safety and can be exploited for costly cyberattacks.”
For example, in 2017, the radio frequency (RF)-enabled St. Jude Medical implantable pacemaker was found to be vulnerable to hacking, prompting a voluntary recall of 465,000 units of the product. The manufacturer later issued a software patch to close the security loophole, according to FDA records of the case (“Firmware Update to Address Cybersecurity Vulnerabilities Identified in Abbott’s Implantable Cardiac Pacemakers: FDA Safety Communication,” August 2017).
Err on the Side of Safety and Humanity
As the Ford Pinto case reveals, sometimes design decisions are overruled by economic concerns. Modern technologies and processes can help manufacturers identify and spot many more failure modes than before, but remedies come at a cost. The later the flaw is discovered, the more expensive the remedy will likely be.
“Many of these cases stem from the pressure to get a high-quality product to the market on time to make a profit,” says Bailey. “If you have a good product but miss the market window, or if it’s too expensive, then you won’t succeed as a company. Humans are not infallible, so sometimes they make the wrong choice. With IoT devices, if you make the wrong choice, you may be looking at lawsuits; your reputation may suffer; but the consequences are far worse in aerospace or automotive.”
Meintjes warns that manufacturers shouldn’t gamble with consumer safety. “It’s dangerous and unethical to compare the cost of a human life with the cost of design decisions,” says Meintjes. “If your product has a failure mode that can kill or even harm people, you should design that failure mode out.”
About the Author
Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.