Generative AI: Proceed with Caution, Pursue Progress

As generative AI comes into full swing, it’s important to carefully consider all of its facets during use.

Generative artificial intelligence (AI) is gaining popularity as platforms and capabilities grow stronger. For engineering design, it’s important to understand the risks and precautions that come with fully embracing the technology.

Without those precautions, organizations risk stumbling into copyright disputes, relying on fabricated information, or giving a bad actor or two an opening to exploit AI in ways that harm the organization and its design teams.

It’s important to be optimistic yet cautious about the security woes that may accompany generative AI, both within design teams and across the Tier 1, 2 and 3 suppliers involved in product lifecycle planning and management.

Strategic Risk Management and Mitigation

Professionals across the digital engineering design community broadly agree that there are risks; however, those risks shouldn’t override AI’s merits and what it can do for efficiency, product builds and engineering design needs.

“There are risks associated with any technology, and for us, it’s a matter of ensuring that we maintain the highest levels of precision and accuracy in our products,” says Tonya Custis, director of AI at Autodesk. “We want to be able to protect our customers’ IP, while also automating tedious tasks for them. Generative AI is one way we can do this, but it’s not the only way. 

“We look to balance our customers’ needs with the technologies that power our products,” says Custis. “Autodesk is on a mission to explore new possibilities and the technologies that enable them, while also de-risking opportunities for our customers as much as possible.”

Customers expect that astute professional firms will find ways to capitalize on AI, with careful practices that help mitigate risks. “We’re focused on ensuring that our customers’ data is leveraged in the right way—a complex undertaking,” adds Custis. “Positive outcomes from generative AI require access to large quantities of quality data for accuracy and precision, and we feel it’s vital to pursue possibilities in a way that preserves the trust our customers have in us.”

Experts recommend that those considering use of generative AI review the terms of use of any third-party AI applications under consideration. Image courtesy of NCC Group.

A Watchful Eye on Data 

Kunal Purohit is the chief digital services officer at Tech Mahindra, a multinational information technology services and consulting company based in India. He emphasizes that a primary concern is the possibility of unintended biases and unfairness in generated content.

“Generative AI models learn from massive amounts of data, and if the training data is biased or partial, it can result in biased or discriminatory outputs,” says Purohit. “There is also the risk of malicious actors using generative AI for perverse purposes, such as creating fake news or deep fakes. Besides, there is a possibility of data risk too, as currently there are no strong IT and data security mechanisms and guidelines. Additionally, there may be legal and ethical implications around ownership of generated content. Generative AI models can unwittingly generate content that infringes upon intellectual property rights, such as copyrighted material or trademarks. All these can have serious social, political and economic consequences.” 
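One way to reduce the IP exposure Purohit describes is to screen generated content for verbatim overlap with material an organization knows to be copyrighted or proprietary. The Python sketch below is a hypothetical, minimal check; the n-gram length, threshold and reference corpus are illustrative assumptions and no substitute for legal review.

# Minimal sketch: flag generated text that shares long verbatim word runs
# with known copyrighted or proprietary reference text. The n-gram length
# and threshold are illustrative assumptions only.
def ngrams(text, n=8):
    """Return the set of n-word sequences in lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated, reference, n=8):
    """Fraction of the generated text's n-grams found verbatim in the reference."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(reference, n)) / len(gen)

def flag_for_ip_review(generated, reference_corpus, threshold=0.05):
    """True if any reference document shares more than `threshold` of n-grams."""
    return any(overlap_ratio(generated, doc) > threshold for doc in reference_corpus)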

It's helpful to have an external, vendor-neutral perspective on issues like security and vendor trustworthiness, says Chris Anley of NCC Group. Image courtesy of NCC Group.

Purohit adds that it is important to proactively address these threats and put appropriate safeguards in place through the responsible development and implementation of generative AI systems.

“To combat them, enterprises must have stringent IT and data security protocols and review processes to provide constant monitoring and oversight,” Purohit says. “For ultra-sensitive information, enterprises must have extremely stringent policies related to the use of generative AI.”

Jason Juliano is a director with EisnerAmper Digital of the Eisner Advisory Group LLC, a large consulting firm. He agrees that the latest AI trends carry obvious risks, from deep fakes to copyright concerns. However, he believes it’s still possible to be careful while capitalizing on the technology’s enormous potential.

“Today, you now see use cases using foundation models built with generative AI,” says Juliano. “A foundation model requires a larger initial investment, [but] the initial work of AI model development is amortized with each use because the data required for fine-tuning additional models is significantly reduced. The flexibility and scalability of these foundation models will significantly accelerate AI implementation and enable improved AI, cybersecurity and data governance. 

“AI will soon be used at the strategic heart of business operations,” he adds. “Our technology partners have seen a time to value that is more than half that of traditional AI. Indeed, we expect that within the next two years, foundation models will power nearly one-third of AI in enterprise environments.”
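Juliano’s point about amortizing the foundation model investment shows up in practice as fine-tuning: a pretrained model is adapted with a comparatively small, task-specific dataset. The sketch below is a minimal, hypothetical example using the Hugging Face transformers and datasets libraries; the model name, labels and example records are placeholders, not a recommendation.

# Minimal sketch of the fine-tuning pattern: start from a pretrained
# foundation model and adapt it with a small labeled dataset. Requires the
# transformers and datasets packages; model, labels and data are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

# A few hundred labeled examples can be enough for fine-tuning, versus the
# billions of tokens used to pretrain the foundation model itself.
examples = {"text": ["bracket failed under load", "assembly passed inspection"],
            "label": [1, 0]}
dataset = Dataset.from_dict(examples).map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=64),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset)
trainer.train()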

Security measures are not something to be cavalier about. Asad Siddiqui is the chief information officer of integration-platform-as-a-service (iPaaS) provider Celigo. He says it’s critical not to overlook data security.

“Due to the perception that cybersecurity initiatives often do not generate business value until a security event occurs, enterprises often do not take proactive measures to protect their data until it’s too late,” says Siddiqui. “With generative AI relying on massive amounts of data, data security should be the most important priority for enterprises looking to incorporate generative AI into their practices.”
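One concrete control along the lines Siddiqui describes is to screen or redact sensitive data before a prompt ever leaves the organization. The Python sketch below is a hypothetical pre-submission filter; the patterns are illustrative assumptions and would need to reflect an enterprise’s own data classification policy.

import re

# Hypothetical patterns for data that should not reach a third-party model;
# a real deployment would tailor these to its own classification policy.
REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_part_no": re.compile(r"\bPN-\d{6}\b"),  # illustrative format
}

def redact(prompt):
    """Replace sensitive matches with placeholders and report what was found."""
    findings = []
    for name, pattern in REDACTION_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub("[REDACTED-" + name.upper() + "]", prompt)
    return prompt, findings

clean_prompt, findings = redact("Contact jane.doe@example.com about PN-123456.")
print(findings)      # ['email', 'internal_part_no']
print(clean_prompt)  # Contact [REDACTED-EMAIL] about [REDACTED-INTERNAL_PART_NO].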

Know Your Stakeholders 

“Transparency is very important in the tech industry and in order to gain and keep the trust of customers, companies should be open about their approach and vendors used,” says Balaji Ganesan, CEO and co-founder of California-based Privacera, a company that helps organizations enhance their data security posture without impeding authorized access to data.

“In order for companies to make informed decisions with their data, scalable and consistent security and governance controls that enable transparent data-sharing must be a priority,” he says.

Chris Anley is the chief scientist at the NCC Group, a large security consultancy. He says that before an organization starts using generative AI, it’s critical to review the terms of use of any third-party AI software. 

“In an engineering context, your intellectual property rights need to be absolutely clear, and you need to be sure that your proprietary designs aren’t being leaked or used by third parties,” says Anley. “Depending on your industry and regulatory position, it may be worth reviewing the ways your staff use generative AI, to make sure that no sensitive, proprietary or regulated data is being passed to third parties or is crossing borders in a way that breaches data protection laws. It’s [also] important to understand that generative AI systems have a habit of ‘hallucinating’—they often generate plausible, but misleading or incorrect content. Correctness is crucial in engineering applications, so it’s important to make sure that staff are aware they have to carefully review the output of generative AI systems.”
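Anley’s caution about hallucination can be backed by a lightweight gate: before generated content enters an engineering workflow, flag anything that looks like a factual claim so a person has to verify it. The heuristics in the sketch below, such as quantities with units and standards references, are illustrative assumptions rather than a complete review process.

import re

# Illustrative triggers for claims that warrant human review before use:
# quantities with units, standards references and patent citations.
REVIEW_TRIGGERS = [
    re.compile(r"\b\d+(\.\d+)?\s?(mm|cm|m|kg|mpa|psi|nm)\b", re.IGNORECASE),
    re.compile(r"\b(ISO|ASTM|ASME|IEC)\s?-?\s?\d+\b"),
    re.compile(r"\bpatent\s+(no\.?|number)\s*\d+", re.IGNORECASE),
]

def needs_engineer_review(generated_text):
    """True if the output contains claims that should be verified before use."""
    return any(p.search(generated_text) for p in REVIEW_TRIGGERS)

draft = "Use a wall thickness of 2.5 mm per ISO 10993 for this housing."
if needs_engineer_review(draft):
    print("Route to a qualified engineer for verification before release.")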

Eric Schorn is the technical director for cryptography services at the NCC Group. He says that AI research has a long history of openness, collaboration and transparent reproducibility, which has largely driven its rapid progress and significant achievements. 

“While this approach has become somewhat contentious recently, the new risks inherent in AI deployment require broad partnership,” says Schorn. “A security-oriented third-party partner can help objectively identify emerging risks and mitigate their impacts. The complex entanglements stemming from model development demand vendors and their customers to collaboratively partner in commercialization efforts. Perhaps most importantly, regulators themselves must participate and be included as partners. Secretive and siloed efforts will not be successful.”

Rob LoCascio is the founder and CEO of LivePerson, a global technology company that develops conversational commerce and AI software. LoCascio emphasizes careful preparation and due diligence everywhere along the AI food chain. He says it’s important to ensure data integrity and accuracy, and to maintain scrutiny wherever AI technology and applications are used.

“Analyze the training data underpinning your AI solutions carefully,” says LoCascio. “Can you identify where it may be biased or produce errors? Does it represent all users and stakeholders? Is the code underlying the data auditable? Test your data to ensure it is free of bias, accurate and accounts for regional and cultural differences.”
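The kind of testing LoCascio describes can start with something as simple as counting representation and outcome rates by group in the training data. The sketch below assumes a hypothetical set of records with a region attribute and an approval label; real audits go much further, but the basic check looks like this.

from collections import Counter, defaultdict

# Hypothetical training records: each carries a regional attribute and the
# outcome label a model trained on this data would learn to reproduce.
records = [
    {"region": "EMEA", "approved": 1},
    {"region": "EMEA", "approved": 0},
    {"region": "APAC", "approved": 1},
    {"region": "AMER", "approved": 1},
    {"region": "AMER", "approved": 1},
]

counts = Counter(r["region"] for r in records)
positives = defaultdict(int)
for r in records:
    positives[r["region"]] += r["approved"]

# Large gaps in representation or outcome rate are a signal to rebalance or
# re-examine the data before training or fine-tuning on it.
for region in counts:
    rate = positives[region] / counts[region]
    print(f"{region}: {counts[region]} records, positive rate {rate:.0%}")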

LoCascio also advises that you make sure legal checks are in place. “Develop and use the AI system in a way that diminishes exposure and liability,” he adds. “Legal and HR teams should be consulted early along these lines, as they are an important part of the discussion and have context on applicable laws and regulations. Put safeguards in place. Deploy testing teams to ensure the interests of diverse groups have been adequately considered and addressed. This will also increase the number of people who can benefit from the technology. Prioritize building out policies and systems that support an ethical approach to leveraging AI.”




About the Author

Jim Romeo

Jim Romeo is a freelance writer based in Chesapeake, VA. Send e-mail about this article to [email protected].
