What do a municipal employee who surveys land parcels and buildings, a manufacturing manager who wants to determine the current state of plant construction, and an engineer who analyzes component geometries in parts and tools have in common? All of them engage in reverse engineering.
To do so, they usually use techniques such as laser scanning or photogrammetry to record data on the objects they wish to capture. So far, so good. But a huge quantity of raw data is not particularly informative by itself. It first needs to be transformed into a meaningful, parameterized 3D model. The creation of such editable CAD models is the goal of reverse engineering.
Acquiring optical measurement data in the form of point clouds is now technically straightforward using modern methods such as laser scanning. The subsequent processing of the data into parameterized 3D models, on the other hand, is still performed manually. This requires trained specialists and a lot of time, which is why the task is often outsourced to service providers in low-wage countries. In an era of high-level automation, the question arises: »Why can't this be done at the push of a button?«
To extract useful CAD models from the large amounts of virtual point cloud data, certain requirements need to be met. The models must represent not only geometric and structural information but also metadata such as materials, identification numbers, or access rights. All of this must be accessible in parameterized form and easy to process with other programs.
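To make this requirement concrete, a feature reconstructed from scan data might be stored roughly like this (a minimal Python sketch; the class and all field names are illustrative assumptions, not Fraunhofer's actual data model):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CadFeature:
    """One reconstructed design element together with the non-geometric
    metadata described in the text. All names here are illustrative."""
    feature_type: str                    # e.g. "planar_face", "cylinder"
    parameters: dict                     # parameterized geometry, editable downstream
    material: Optional[str] = None       # metadata: material
    part_id: Optional[str] = None        # metadata: identification number
    access_rights: List[str] = field(default_factory=list)  # metadata: access rights

# A wall face reconstructed from a building scan might then look like this:
wall = CadFeature(
    feature_type="planar_face",
    parameters={"normal": (0.0, 0.0, 1.0), "offset": 0.5},
    material="concrete",
    part_id="B-042",
    access_rights=["facility_management"],
)
```

Because every field is an explicit, named parameter, such a structure is straightforward to serialize and exchange with other programs.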
Classically, data points – sometimes up to several million of them – are pre-processed before model reconstruction can begin at all. Incorrectly detected points are discarded, the entire point cloud is divided up into subsections, and the point density is reduced.
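Two of these pre-processing steps, outlier removal and density reduction, can be sketched in a few lines of NumPy (a simplified illustration; the function names and thresholds are our own assumptions, not Scangineering's implementation):

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    """Discard points whose mean distance to their k nearest neighbors
    exceeds the global mean by more than std_ratio standard deviations."""
    # Brute-force kNN: fine for an illustrative cloud, not for millions of points.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(dists, axis=1)[:, 1:k + 1]   # column 0 is the self-distance 0
    mean_d = knn.mean(axis=1)
    keep = mean_d <= mean_d.mean() + std_ratio * mean_d.std()
    return points[keep]

def voxel_downsample(points, voxel_size):
    """Reduce point density: replace all points in a voxel by their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()                  # flatten for older/newer NumPy alike
    counts = np.bincount(inverse).astype(float)
    return np.column_stack([
        np.bincount(inverse, weights=points[:, dim]) / counts
        for dim in range(3)
    ])
```

Production tools typically use spatial index structures (k-d trees, octrees) for the same operations so that they scale to the millions of points mentioned above.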
Subsequently, the actual model reconstruction begins with segmentation, in which geometric properties of the point cloud are determined and combined into clusters. This is followed by the classification of these clusters into features – design elements of CAD authoring systems. Finally, these are then reassembled into a parameterized 3D model, as if following a blueprint.
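A common way to carry out such a segmentation step is RANSAC shape fitting. The sketch below detects the dominant plane in a cluster and then classifies the cluster accordingly (a toy NumPy illustration under our own assumptions, not the actual Scangineering algorithm):

```python
import numpy as np

def ransac_plane(points, n_iters=200, threshold=0.01, seed=0):
    """Segment the dominant plane n·x = d in a point cluster via RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Hypothesize a plane from three randomly sampled points ...
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                     # degenerate (collinear) sample
        normal /= norm
        d = normal @ p0
        # ... and keep it if it explains more points than the best so far.
        inliers = np.abs(points @ normal - d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers

def classify_cluster(inlier_ratio):
    """Toy feature classification: a mostly planar cluster becomes a
    parameterized 'planar_face' feature, everything else 'freeform'."""
    return "planar_face" if inlier_ratio > 0.8 else "freeform"
```

The plane parameters (normal and offset) returned here are exactly the kind of values that can then be written into a feature of a CAD authoring system.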
To be able to carry out this complex process in a fully automated fashion, researchers at Fraunhofer IPK have developed what they call »Scangineering«. In this procedure, the parameterized 3D models are generated algorithmically using artificial intelligence.
Scangineering is based on the reverse engineering process chain and can roughly be divided into two software components: a main module and a framework.
Compared to classical reverse engineering methods, Scangineering relies on a high degree of automation. Humans remain involved at the beginning and end of the process, providing the input and analyzing the results. The repetitive work steps in the middle, however, no longer need to be performed manually.
Scangineering therefore helps to make objects, buildings, machines and components usable easily and quickly as virtual models. In this manner, the process also contributes to long-term sustainable value creation. After all, virtualizing physical objects using 3D scanning also makes it easier to reuse, refurbish, and recycle products.
In multiple research and industry projects, experts from Fraunhofer IPK have successfully demonstrated that their technology is suitable for fully automated reverse engineering of 3D models. To meet the individual requirements of each use case, only the software parameters need to be adapted.
Scangineering is about to take the next big step: Via the internal Fraunhofer funding program AHEAD, two scientists are collaborating with the company pointreef – Digital Reality on a joint spin-off. By the end of 2021, an initial version of the software is expected to revolutionize the market for modeling of physical objects in the building and construction sector.