It's not so easy
by Guillaume Levenq
[This article is excerpted from datakit.com/en/news/textures-one-of-datakit-s-know-how-189.html]
The correct visualization of 3D objects, both in imaging and CAD, requires, on the one hand, an exact and flawless geometry, and on the other, colors and/or textures applied precisely to the faces of these objects.
To put it simply, a texture is a 2D image placed as precisely as possible on a 2D or 3D surface of an object, dressing it up to make the visualization more realistic. These images can be either simple flat areas of color, or real images representing, for example, a material.
These images most often come from texture libraries, either standard or specific to the vendor of the authoring 3D software.
Typical texture library (image source Benchmarq Project Services)
These authoring programs use mapping algorithms, again standard or proprietary, to apply the images to the surfaces: the algorithm fills each surface by assigning every pixel of the image to a point on that surface. These are the so-called procedural textures.
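The article does not describe any particular projection, so here is a minimal sketch in Python of one of the simplest standard mappings, a planar projection that assigns (u, v) coordinates to mesh vertices. The function name and parameters are invented for illustration only; real authoring tools offer many variants (planar, cylindrical, spherical, unwrapped), which is where the small differences mentioned below come from.

import numpy as np

def planar_uv(vertices, axis=2):
    """Project mesh vertices onto a plane and normalize to [0, 1] u,v coordinates.

    A deliberately simple planar projection: drop one axis, then rescale the
    remaining two so the texture image covers the surface's bounding box.
    """
    vertices = np.asarray(vertices, dtype=float)
    keep = [i for i in range(3) if i != axis]          # the two axes kept for u and v
    uv = vertices[:, keep]
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    return (uv - lo) / np.where(hi - lo == 0, 1.0, hi - lo)

# Example: the four corners of a unit square in the XY plane map to the image corners
print(planar_uv([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]))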
From Textures to Renderings
To be usable, this mapping information must be supplemented by lighting information, describing the lighting characteristics of the object -- ambient, diffuse, specular light, and so on -- and its intensity, as well as the behavior of surfaces with respect to light -- absorption, refraction, reflection, transmission, and so on.
Reflective properties of materials determine their look (image source Cadnav)
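As a rough illustration of how these lighting terms combine with a texture, here is a minimal Phong-style shading sketch in Python. It is a generic textbook formulation, not the model used by any particular CAD system or converter, and all function and parameter names are invented for the example.

import numpy as np

def shade(texture_rgb, normal, light_dir, view_dir,
          ambient=0.2, diffuse=0.7, specular=0.4, shininess=32):
    """Classic Phong-style shading: ambient, diffuse, and specular terms
    modulate the color sampled from the texture."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diff = max(np.dot(n, l), 0.0)                      # diffuse term depends on light angle
    r = 2 * np.dot(n, l) * n - l                       # reflected light direction
    spec = max(np.dot(r, v), 0.0) ** shininess if diff > 0 else 0.0
    color = np.asarray(texture_rgb, dtype=float)
    return np.clip(color * (ambient + diffuse * diff) + 255 * specular * spec, 0, 255)

# A red texel lit from slightly behind the viewer, viewed head-on
print(shade([200, 30, 30], normal=[0, 0, 1], light_dir=[0, 0.5, 1], view_dir=[0, 0, 1]))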
It is the joint use of the base image with the mapping and lighting information that constitutes the rendering, giving a realistic view of a 3D object in a given light environment. All of this information must be retrieved, understood, and translated when we want to transfer a realistic 3D object from one system to another.
An OBJ file converted to a 3D PDF file with CrossManager, in which the textures in the PDF file correspond to those in the OBJ file (image source NASA)
How Textures are Converted
When converting from one system to another, the 3D object is recovered, most often in the form of files in B-Rep or mesh formats, with the rendering information that will be applied to it (images, mapping, and lighting).
When converting the 3D object from one format to another, in addition to translating the geometry, it is necessary to calculate, from the mapping information, the coordinates of the texture on the mesh generated by the tessellation, so that the image is applied correctly to the 3D model, and then to apply the lighting information to obtain a correct rendering of the object.
Mapping a 2D texture (using u,v coordinates) to a 3D model (image source CJump)
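To make that step concrete, the sketch below assumes the tessellation has already produced (u, v) coordinates for each vertex and simply looks up the corresponding texel in the image. The function name and the nearest-neighbour lookup are illustrative choices, not a description of any converter's internals.

import numpy as np

def sample_texture(image, u, v):
    """Nearest-neighbour lookup of a texel for one (u, v) coordinate.

    `image` is an H x W x 3 array of pixels; u and v are in [0, 1].
    v is flipped because image rows usually run top to bottom while
    texture coordinates run bottom to top.
    """
    h, w = image.shape[:2]
    col = min(int(u * (w - 1) + 0.5), w - 1)
    row = min(int((1.0 - v) * (h - 1) + 0.5), h - 1)
    return image[row, col]

# Per-vertex colors for a tessellated triangle, given its u,v coordinates
texture = np.zeros((64, 64, 3), dtype=np.uint8)
texture[:, 32:] = [255, 0, 0]                          # right half of the image is red
triangle_uv = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.9)]
print([sample_texture(texture, u, v).tolist() for u, v in triangle_uv])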
The theory is simple, but reality is often more complicated. The mapping algorithms used by the various authoring programs are nearly standard, yet they differ slightly from one to another. We have therefore developed our own mapping algorithm, which avoids texture shifts during conversion as much as possible, regardless of the authoring software, and which has proved perfectly suitable for users of our converters.
But we can also encounter textures whose format we don't know how to handle. Apart from the standard formats of external images, we can then only report the presence of an image and the path to it. In this case, if you want to generate a 3D PDF file, for example (a format that needs to read the image and embed the associated pixel array), the textured rendering will not be available. On the other hand, for the supported image formats (jpeg, bmp, gif, tiff, png, tga), the 3D PDF benefits from the desired realistic rendering, which is very telling, even for occasional users of 3D software.
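As a hedged illustration of what "read the image and embed the associated pixel array" involves, the sketch below uses the Pillow library to stand in for whatever image reader a converter actually uses; the function name is hypothetical, and the set of formats simply mirrors the list above.

from PIL import Image      # Pillow; a stand-in for whatever reader a converter uses
import numpy as np

SUPPORTED = {"JPEG", "BMP", "GIF", "TIFF", "PNG", "TGA"}

def load_texture_pixels(path):
    """Open a referenced texture and return its pixel array, or None if the
    format cannot be decoded -- in which case only the file reference can be
    carried over, and the textured rendering is lost."""
    try:
        with Image.open(path) as img:
            if img.format not in SUPPORTED:
                return None
            return np.asarray(img.convert("RGB"))      # pixel array ready to embed
    except OSError:                                    # unreadable or unknown format
        return None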
In short, converting 3D models from one system to another requires a thorough understanding not only of their geometry, but also of the textures that dress them, so that the result is as realistic as possible and easier to interpret.
[Guillaume Levenq is a development engineer at Datakit]