Conceptual Design with Natural Language Prompts

Final Aim’s Yasuhide Yokoi reflects on using AI tools for product development.

Early 2D concepts produced with generative AI programs using natural language prompts. Image courtesy of Autodesk, Final Aim.


Yasuhide Yokoi, chief design officer of Delaware-based Final Aim, is Japanese but was raised in Australia. Today, he splits his time between Japan and the United States, and his designs reflect that East–West blend. His client list includes Toyota, Honda, Sony, Panasonic, Olympus, SoftBank, Microsoft and Autodesk; the most recent addition is Yamaha Motor.

Yamaha asked Yokoi to prototype a compact electric vehicle (EV) suitable for light work in Japan’s farmland and mountainous areas. For Yokoi, it was also a chance to explore ChatGPT-style natural language input in the conceptual design phase, and Yamaha was open to the idea. What he discovered highlights where the process works, where it gets bumpy and where roadblocks remain.

Getting a Design Brief from AI

Yokoi first turned to text-to-text artificial intelligence (AI) tools to understand the features of “future tractors.” Since he was a novice in agriculture, he also queried the AI systems for data about the environment where the vehicle would operate. He asked for “a comprehensive explanation of the issues facing Japanese agriculture, including the aging of the farming population and business succession.” It was similar to working on a design brief with a client or domain expert, except the expert was an AI.
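The mechanics of that research step map onto any text-to-text AI’s API. Below is a minimal, hypothetical sketch using OpenAI’s Python SDK; the model choice and prompt wording are illustrative assumptions, not a record of Yokoi’s actual sessions.

```python
# Minimal sketch of design-brief research via a text-to-text AI.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a domain expert briefing an industrial designer."},
        {"role": "user",
         "content": "Give a comprehensive explanation of the issues facing "
                    "Japanese agriculture, including the aging of the farming "
                    "population and business succession."},
    ],
)
print(response.choices[0].message.content)
```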

The next step was to generate two-dimensional concept drawings from text input. “I used whatever AI tools were available: ChatGPT, Adobe Firefly, Midjourney, Dall-E, even the relatively new PromeAI,” says Yokoi. “If the app had a commercial version, I would use that instead of the free version, because I wanted to avoid commercial usage and copyright issues.”

In addition to typical engineering parameters and specifications, Yokoi also used text prompts to describe unquantifiable preferences, such as design ideas suitable for Japanese farmlands.

“In the 2D concepts generated, the backgrounds were definitely Japanese. For example, some showed rice fields,” Yokoi says.

But for the vehicle design itself, the outputs were much better with prompts that referred to well-defined existing objects. “For example, the prompt ‘Japanese tractors’ really made a difference,” he adds.

Most outputs adhered to the basic rules of vehicle design, but some ignored them completely, showing more wheels than necessary or roofs without support pillars. “These wild, crazy ideas are not necessarily bad. They opened my mind to what I might not have thought of. And I can always optimize them later with my design and engineering knowledge,” says Yokoi.

Many of the tools he mentioned can produce highly detailed, polished images in a range of styles: anime, illustration and photorealistic, to name a few. “The communication between the client and the designer is especially crucial. I found generative AI to be a very useful tool for this purpose, as that allows us to share the design image at a very high level,” says Yokoi.
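The text-to-image step can be sketched the same way. The hypothetical example below uses OpenAI’s DALL-E 3 endpoint and mixes quantifiable specifications with the kind of unquantifiable preferences Yokoi describes; the prompt wording is an assumption for illustration.

```python
# Hedged sketch of the 2D concept-generation step with DALL-E 3.
# The prompt blends engineering parameters with esthetic preferences.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Concept sketch of a compact electric utility vehicle for light work in "
    "Japanese farmland: four wheels, flatbed cargo area, roughly 2.5 m long, "
    "styled after Japanese tractors, rice fields in the background."
)

result = client.images.generate(
    model="dall-e-3",
    prompt=prompt,
    size="1024x1024",
    n=1,  # DALL-E 3 generates one image per request
)
print(result.data[0].url)  # URL of the generated concept image
```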

Accelerating Concept Development

Most mainstream 3D CAD vendors also offer 2D sketching programs targeting design engineers. Autodesk, for example, offers AutoCAD in addition to Autodesk Fusion; Siemens offers the free Solid Edge 2D Drafting alongside Solid Edge for 3D mechanical design. However, these programs are built for parametrically precise, detailed 2D sketches; generating concept designs from text prompts is not a standard feature in any of them.

Final Aim’s Chief Design Officer Yokoi uses a mix of AI-based generative design programs to develop the design for Concept 451, an electric vehicle for light farm work in Japan. Image courtesy of Autodesk, Final Aim.

Yokoi is a long-time Autodesk Fusion user. The type of concept art he needs can be produced in applications such as Sketchbook (for digital artists) or Autodesk Alias (for industrial design), but the process is manual, not driven by natural language prompts as in consumer-targeted tools like Midjourney or Dall-E. “Having a natural language-based concept-generating tool would really speed up the process,” says Yokoi.

As an experimental proof of concept, Autodesk Project Bernini, announced in May, explores the same workflow described by Yokoi. According to the announcement, “the first experimental Bernini model quickly generates functional 3D shapes from a variety of inputs including 2D images, text, voxels and point clouds.”

The technology is developed by Autodesk Research, the branch responsible for exploring new possibilities and usage paradigms. As part of Autodesk Research, the Autodesk AI Lab “trained the Bernini model on 10 million diverse 3D shapes—a composite dataset made up of publicly available data, a mixture of CAD objects and organic shapes,” the company explains. The program exports objects in OBJ format.
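Bernini itself is a research project with no public API, but because it exports OBJ, the handoff to downstream tools can be sketched with the open-source trimesh library. In the hypothetical snippet below, the file name is a placeholder.

```python
# Inspecting a Bernini-style OBJ export before pulling it into CAD.
# "bernini_output.obj" is a placeholder file name.
import trimesh

mesh = trimesh.load("bernini_output.obj", force="mesh")

print("vertices:", len(mesh.vertices))
print("faces:", len(mesh.faces))
print("watertight:", mesh.is_watertight)        # is the surface closed?
print("extents (xyz):", mesh.bounding_box.extents)
```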


“Autodesk AI projects like BlankAI and Bernini are using designers’ language, descriptions of their ideas, illustrations and other 2D and 3D data, to streamline creative workflows and democratize creativity, often bringing together diverse disciplines under a shared creative vision. This will become a much more rewarding way of creating in design software, increasing creative expression and productivity,” says Thomas Heermann, head of Automotive Design Studio, Autodesk.

Yokoi’s workflow is also similar to what NVIDIA demonstrated at SIGGRAPH 2023 to a select group of automakers. The application, built on Stable Diffusion, could accept text prompts and images as input to generate professional-looking 2D automotive sketches, complete with backgrounds.

“AI-produced artwork enhances the ideation development stage by creating anticipated evolutionary solutions as well as new and unexpected directions critical to design exploration,” says Peter Pang, senior product manager, virtual and augmented reality at NVIDIA.
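NVIDIA’s demo application is not publicly available, but the underlying idea (text plus a reference image in, a styled 2D sketch out) can be approximated with the open-source diffusers library and a public Stable Diffusion checkpoint, as in this hypothetical sketch.

```python
# Rough approximation of a text-plus-image-to-sketch pipeline using
# diffusers; this is not NVIDIA's application, just the same idea.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("rough_sketch.png").convert("RGB")  # placeholder reference

result = pipe(
    prompt="professional automotive concept sketch, side view, studio lighting",
    image=init,
    strength=0.6,        # how far to depart from the reference image
    guidance_scale=7.5,  # how strongly to follow the text prompt
).images[0]
result.save("concept_sketch.png")
```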

At SIGGRAPH 2024 in July, NVIDIA CEO Jensen Huang revealed a feature that lets users generate 3D characters, objects and scenes from typed or spoken prompts. It was part of a collaborative project with stock-art merchant Shutterstock and marketing firm WPP.

“We taught AI how to speak OpenUSD [3D file format for NVIDIA Omniverse]. So the girl [the user] is speaking to Omniverse; Omniverse can generate USD; then Omniverse uses the USD prompt to find the object from its catalog; and then Generative AI uses these conditions to generate the scene,” said Huang.
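At the file level, “speaking OpenUSD” boils down to authoring USD scene descriptions. A minimal, hypothetical scene written with Pixar’s pxr Python bindings (shipped with OpenUSD) looks like the following; it illustrates the format, not NVIDIA’s Omniverse pipeline.

```python
# Minimal USD scene authored with the pxr bindings; illustrative only.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("scene.usda")
UsdGeom.Xform.Define(stage, "/World")

# Place a primitive the way a generated scene might place a catalog asset.
cube = UsdGeom.Cube.Define(stage, "/World/Crate")
cube.GetSizeAttr().Set(1.0)
UsdGeom.XformCommonAPI(cube.GetPrim()).SetTranslate(Gf.Vec3d(2.0, 0.0, 0.0))

stage.GetRootLayer().Save()
```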

Ideally, Yokoi would like to upload an AI-generated 2D concept into a 3D CAD program and extract a 3D model that can serve as a starting point. “I’ve found some open-source tools that can convert 2D into 3D, but the quality is not that good,” he says. For now, converting AI-generated concepts into 3D remains largely manual: the designer uses 2D images of the front, side and top views to recreate the 3D shape one surface at a time, one solid at a time.
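One reason those open-source 2D-to-3D conversions disappoint is visible in even a simple version of the technique. The hypothetical sketch below extrudes a side-view silhouette with OpenCV, shapely and trimesh; a single view yields only a prism, not the vehicle’s real surfaces, and “side_view.png” is a placeholder.

```python
# Crude 2D-to-3D conversion: extrude a side-view silhouette into a solid.
# This illustrates why single-view conversion quality is limited.
import cv2
import trimesh
from shapely.geometry import Polygon

img = cv2.imread("side_view.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Treat the largest contour as the vehicle silhouette.
outline = max(contours, key=cv2.contourArea).squeeze()
polygon = Polygon(outline).buffer(0)  # buffer(0) repairs self-intersections

# Extrude along the missing axis; units are pixels, so scaling comes later.
mesh = trimesh.creation.extrude_polygon(polygon, height=400.0)
mesh.export("rough_body.stl")
```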

Cultural Intelligence

Yokoi also felt that the current AI-based tools are limited in cultural intelligence. “For people outside Japan, most of these AI-generated concepts may look Japanese, but for me, some of them are totally off,” he says. “For example, Japanese designers would often use the word rin (凛), which is difficult to translate. It means elegance or dignity, but it also suggests subtlety and humility.”

The Japanese also embrace wabi-sabi (侘び寂び / わびさび), an esthetic principle that recognizes beauty in unlikely places, such as broken objects and rugged rocks. While many AI tools accept Japanese-language prompts, they don’t seem to grasp the implications of these words, Yokoi says. This hints at the role human designers will likely play in AI-powered product development: the final arbiter, using domain expertise and cultural knowledge to refine AI-generated designs so the product can succeed in its target industry or geography.

“As technology has made it easier to create something, it is now very important to have the discernment to evaluate it,” Yokoi says. Final Aim and Yamaha’s co-developed prototype, called Concept 451, made its debut at the Yamaha Motor booth at this year’s Tokyo Auto Salon.

Text to CAD

Ryan McClelland, a research engineer from the NASA Goddard Space Flight Center, is exploring the use of generative design and digital manufacturing to create lighter spacecraft structures. He is the proverbial rocket scientist. But even he feels CAD user interfaces need to be simpler and easier to use.

“I hate confusing menus!” he says. “This is what keeps work stove-piped and users specialized, slowing down the product development process. Hunting through menus sucks. The names of commands are often ambiguous and confusing. AI can fix this today. Any program where you can’t at least search for commands feels like a dinosaur.”
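The searchable-commands idea McClelland describes is trivial to prototype. The sketch below fuzzy-matches a plain-language query against command names and descriptions using Python’s standard difflib; the command catalog is invented for illustration.

```python
# Toy command search: match a plain-language query against CAD commands.
# The command catalog here is invented for illustration.
import difflib

COMMANDS = {
    "Extrude": "pull a sketch profile into a solid",
    "Fillet": "round off an edge",
    "Shell": "hollow out a solid, leaving a wall thickness",
    "Loft": "blend between two or more profiles",
}

def search_commands(query, n=3, cutoff=0.3):
    corpus = {f"{name}: {desc}": name for name, desc in COMMANDS.items()}
    hits = difflib.get_close_matches(query, corpus.keys(), n=n, cutoff=cutoff)
    return [corpus[h] for h in hits]

print(search_commands("round an edge"))  # should surface 'Fillet'
```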

McClelland used Autodesk Fusion at NASA to explore what he calls “evolved structures,” or what others might call “alien structures.” These are produced with Fusion’s generative design tools and optimized for specific manufacturing methods, such as 3D printing or computer numerically controlled (CNC) machining. McClelland says, “They look somewhat alien and weird, but once you see them in function, it really makes sense.”

Lately, McClelland has been exploring the browser-based Text-to-CAD application by Zoo with ChatGPT-style input. The company writes, “Text-to-CAD is an open-source prompt interface for generating CAD files through text prompts. Generate models that you can import into the CAD program of your choice. The infrastructure behind Text-to-CAD utilizes our Design API and Machine Learning API to programmatically analyze training data and generate CAD files.”
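Driving Text-to-CAD programmatically looks roughly like the following. Endpoint paths, the asynchronous job flow and field names reflect Zoo’s public documentation at the time of writing, but treat them as assumptions and check the current docs; ZOO_API_TOKEN is a placeholder environment variable.

```python
# Hedged sketch of calling Zoo's Text-to-CAD REST API with requests.
# Endpoints and field names are assumptions based on Zoo's public docs.
import os
import time
import requests

API = "https://api.zoo.dev"
HEADERS = {"Authorization": f"Bearer {os.environ['ZOO_API_TOKEN']}"}

# Kick off an asynchronous generation job, requesting STEP output.
job = requests.post(
    f"{API}/ai/text-to-cad/step",
    headers=HEADERS,
    json={"prompt": "A mounting bracket with four m8 screw holes"},
).json()

# Poll until the job finishes, then list the generated files.
while job.get("status") not in ("completed", "failed"):
    time.sleep(2)
    job = requests.get(f"{API}/user/text-to-cad/{job['id']}",
                       headers=HEADERS).json()

if job["status"] == "completed":
    for name in job.get("outputs", {}):
        print("generated:", name)  # outputs map file names to file data
```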

Presently, Text-to-CAD is a single-prompt geometry-generation tool, but Jessie Frazelle, co-founder and CEO of Zoo, revealed her plans. “We are working towards the V1.0 release of our Modeling App. It’s a traditional CAD application, but it has a hybrid interface: click and point for traditional mechanical engineers and code,” she says. “Machine Learning (ML) features like Text-to-CAD will be fully integrated into the app. You can imagine it’ll be a bit like J.A.R.V.I.S. from “Iron Man.” For example, you could click the face of a CAD model and type ‘Put an m8 screw hole in each corner,’ and the ML will return the change to the CAD model and the code.”

Zoo’s modeling app is also expected to let users import existing CAD models and modify them with text prompts.

About the Author

Kenneth Wong

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at [email protected] or share your thoughts on this article at digitaleng.news/facebook.
