Style2Fab uses AI to make 3D-printable models easy to personalize
MIT researchers developed the tool to rapidly customize models of 3D-printable objects without hampering their functionality

With the growing affordability and accessibility of 3D printers, a rising number of amateur makers are creating objects using free, open-source 3D models. Personalizing those models is a challenge, however, because it typically requires expensive, complex CAD software, and it becomes even harder when the original design of the model isn’t available. A further hurdle for beginners is ensuring that their customizations do not impair the object’s function. To address this, MIT researchers have introduced Style2Fab, a generative-AI tool that lets users personalize 3D models with simple language prompts while ensuring the printed object’s functionality isn’t compromised.
“For someone with less experience, the essential problem they faced has been: Now that they have downloaded a model, as soon as they want to make any changes to it, they are at a loss and don’t know what to do. Style2Fab would make it very easy to stylize and print a 3D model, but also experiment and learn while doing it,” said Faraz Faruqi, a computer science graduate student and lead author of a paper introducing Style2Fab.
Style2Fab uses deep-learning algorithms to segment 3D models into aesthetic and functional parts, simplifying the design process. Beyond aiding amateur designers and making 3D printing more accessible, Style2Fab holds potential for medical applications, especially personalized assistive devices. Research indicates that patients are more likely to use assistive devices they find aesthetically pleasing. Style2Fab facilitates such customizations, letting users design medical devices like thumb splints that match their personal style while maintaining functionality.
The development of Style2Fab aims to support the burgeoning DIY assistive technology field, as noted by Faruqi. He collaborated with his advisor, co-senior author Stefanie Mueller, an associate professor in the MIT departments of Electrical Engineering and Computer Science and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the HCI Engineering Group; co-senior author Megan Hofmann, assistant professor at the Khoury College of Computer Sciences at Northeastern University; as well as other members and former members of the group. The research will be presented at the ACM Symposium on User Interface Software and Technology.
Functionality
Online repositories, such as Thingiverse, allow individuals to upload user-created, open-source digital design files of objects that others can download and fabricate with a 3D printer. Faruqi and his collaborators began this project by studying the objects available in these repositories to better understand the functionalities that exist within various 3D models – giving them a better idea of how to use AI to segment models into functional and aesthetic components.
“We quickly saw that the purpose of a 3D model is very context-dependent, like a vase that could be sitting flat on a table or hung from the ceiling with string. So it can’t just be an AI that decides which part of the object is functional. We need a human in the loop,” said Faruqi.
The researchers identified two key functionalities in 3D models: external functionality (parts interacting with the external environment) and internal functionality (parts that must fit together post-fabrication). For effective stylization, it’s essential to maintain the geometry of these functional segments while allowing customization of the aesthetic sections.
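To make that distinction concrete, here is a minimal illustrative sketch of a per-segment label; the names below are hypothetical, not Style2Fab’s actual data model, but they capture the rule that only aesthetic segments are open to stylization:

```python
from enum import Enum, auto

class SegmentLabel(Enum):
    AESTHETIC = auto()            # free to restyle (texture, color, shape)
    EXTERNAL_FUNCTIONAL = auto()  # interacts with the environment, e.g., a vase's base or opening
    INTERNAL_FUNCTIONAL = auto()  # must mate with another part after printing, e.g., a screw thread

def is_editable(label: SegmentLabel) -> bool:
    """Only aesthetic segments may have their geometry or appearance changed."""
    return label is SegmentLabel.AESTHETIC
```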
Style2Fab uses machine learning to analyze a 3D model’s topology, identifying geometric changes such as curves or angles. This analysis divides the model into distinct segments, which are then compared to a dataset of 294 annotated 3D models to determine if they are functional or aesthetic based on similarity. If a segment closely aligns with a functional piece from the dataset, it’s labeled as functional.
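A minimal sketch of that similarity-based labeling step appears below, assuming a hand-rolled geometric descriptor and a nearest-neighbor lookup; the feature choice, threshold, and function names are illustrative assumptions, not the actual pipeline described in the paper:

```python
import numpy as np

def segment_descriptor(vertices: np.ndarray) -> np.ndarray:
    """Crude geometric signature of a segment: bounding-box extents plus
    vertex spread around the centroid, normalized to unit length.
    vertices has shape (N, 3)."""
    extents = vertices.max(axis=0) - vertices.min(axis=0)
    spread = vertices.std(axis=0)
    desc = np.concatenate([extents, spread])
    return desc / (np.linalg.norm(desc) + 1e-8)

def classify_segment(vertices: np.ndarray, annotated_db, threshold: float = 0.9) -> str:
    """Label a segment 'functional' if it closely matches a functional segment
    from the annotated dataset; otherwise fall back to 'aesthetic'.
    annotated_db is a list of (descriptor, label) pairs built from the
    annotated reference models."""
    desc = segment_descriptor(vertices)
    best_sim, best_label = -1.0, "aesthetic"
    for ref_desc, ref_label in annotated_db:
        sim = float(desc @ ref_desc)  # cosine similarity; descriptors are unit-length
        if sim > best_sim:
            best_sim, best_label = sim, ref_label
    return best_label if best_sim >= threshold else "aesthetic"
```

These labels are only an initial recommendation; as Faruqi notes below, the user can flip any segment’s classification in the interface.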
“But it is a really hard problem to classify segments just based on geometry, due to the huge variations in models that have been shared. So these segments are an initial set of recommendations that are shown to the user, who can very easily change the classification of any segment to aesthetic or functional,” said Faruqi.
Human involvement
Once the user accepts the segmentation, they enter a natural language prompt describing their desired design elements, such as “a rough, multicolor Chinoiserie planter” or a phone case “in the style of Moroccan art.” A generative-AI system known as Text2Mesh then tries to figure out what a 3D model that meets the user’s criteria would look like. It manipulates the aesthetic segments of the model in Style2Fab, adding texture and color or adjusting shape to match the prompt as closely as possible, while keeping the functional segments off-limits.
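The important constraint is that these prompt-driven edits never touch functional geometry. The toy sketch below illustrates that masking idea, with a random per-vertex displacement standing in for the actual CLIP-guided Text2Mesh optimization (which trains a neural style field with a differentiable renderer and is far more involved); the function and parameter names are hypothetical, and only the mask-and-update pattern is the point:

```python
import numpy as np

def stylize_aesthetic_only(vertices: np.ndarray,
                           segment_ids: np.ndarray,
                           functional_segments: set,
                           strength: float = 0.02,
                           seed: int = 0) -> np.ndarray:
    """Apply a per-vertex edit everywhere except on functional segments.

    vertices: (N, 3) mesh vertex positions
    segment_ids: (N,) segment index assigned to each vertex
    functional_segments: indices of segments whose geometry must stay fixed
    """
    rng = np.random.default_rng(seed)
    # Stand-in for the learned displacement field a text-driven stylizer would predict.
    displacement = rng.normal(scale=strength, size=vertices.shape)
    # Mask: True for editable (aesthetic) vertices, False for functional ones.
    editable = ~np.isin(segment_ids, list(functional_segments))
    return vertices + displacement * editable[:, None]
```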
The researchers integrated their findings into a user interface that automatically segments and stylizes 3D models based on user input. A study involving makers with diverse 3D modeling experience levels showed that Style2Fab was versatile; it was easy for beginners to use and experiment with, while advanced users found it expedited their workflows and appreciated its advanced customization options.
In future developments, Faruqi and his team aim to refine Style2Fab for greater control over an object’s physical properties alongside its geometry, addressing potential fabrication issues related to structural integrity. They also hope to allow users to create custom 3D models from scratch within the platform. A collaborative project with Google is underway.
This research was supported by the MIT-Google Program for Computing Innovation and used facilities provided by the MIT Center for Bits and Atoms.