Marketing Loves It
Guest editorial by David Levin, isicad
Entities should not be multiplied without necessity.
- William of Occam
Everything should be made as simple as possible, but not simpler.
- Albert Einstein
- - -
It is difficult to get past the article entitled “8 Myths About Digital Twins Exposed — Here’s the Reality” by authoritative author Joe Walsh. Digital twins (DT) are a fashionable topic today, actively marketed by many vendors, so it is a professional duty of industry experts to help the market understand what the term really means.
What Digital Twin is Not
The first section of Joe's article lists and exposes eight myths about the technology. Some of the myths listed are
- There is a Single DT
- A DT Needs to Cover the Entire Lifecycle of the Physical Twin
- DTs Are Used Only by Engineering.
These I do not dispute. Others, however, need clarification or, within the whole set of anti-myths, seem superfluous.
I agree with Joe that the myths and uncertainties surrounding DT benefit certain vendors, who exploit the DT hype to sell traditional products simply by adding innovative-sounding labels. In my opinion, the hype around DT also provides additional fodder for experts and the media.
Do “Virtual Representation” and “Surrogate” Clarify Digital Twins?
Exposing the myths is helpful, because it explains what digital twins are not. The market, however, still wants to know what a DT is, and so the second half of Joe’s article explains it by giving a “usable standard definition” that came as a surprise to me:
A digital twin is a virtual representation of a physical asset or collection of physical assets (physical twin) that exploits information flow to/from the associated physical asset(s).
The digital twin is a digital surrogate that is a description of a physical asset, such as products, processes, systems, people and devices, that can be used for various purposes. The digital twin makes use of data and information from the real-world asset and provides feedback to this real-world asset.
These definitions seemed to surprise Oleg Shilovitsky even more than me. Oleg clearly explained this in his must-read blog, Digital Twin Hype. Here are two paragraphs from him:
Both definitions use vague words explaining that information can be used to get feedback about real objects’ behavior. The most interesting part is that the approach to use the information to analyze physical object behavior used in engineering disciplines probably as long as these disciplines existed.
Digital Twin is a cool name. I hope you agree. And I like this name.
Unfortunately, marketing did their job well and created a glorified definition and campaigns about Digital Twin that very confusing. In most of these presentations, the words Digital Twin can be easily replaced by something that is more thoughtful and pragmatic such as model, information, simulation, etc. Unfortunately, it didn’t happen and marketing keeps creating even more variations of digital twin marketing. I found these processes really damaging since as a result, we have Digital Hype and not Digital Twins.
Why Not Define Digital Twin as 'Advanced Modeling'?
I almost agree with Oleg completely. Almost – because I suggest replacing his words “Digital Twin can be easily replaced by something that is more thoughtful and pragmatic such as model, information, simulation, etc.” with “Digital Twin can be replaced by a model which can effectively communicate with the associated physical object.”
For brevity, I use “object” to denote systems, assets, processes - everything that can be modeled.
Those who consider “model” as insufficiently concrete as compared with, say, “surrogate” may be thinking of an everyday concept in which a model is something that only superficially resembles an object in the real world. Yes, cloth dolls and plastic airplanes can in everyday life be called models of people and aircraft, but our industry understands models as defined by science and technology. In these domains, a model is always intended and built to accurately reflect the aspects of physical objects needed to study the object, as well as to control and optimize its behavior. The counterpart is constructed with as much precision as needed for particular applications and studies.
A constructive understanding of what a digital model is assumes that it involves a system that consists of the following aspects:
- Virtual objects that are associated with components of the physical object,
- Functions defined over the virtual objects that adequately reflect the properties of the physical object,
- Relations defined between subsets of virtual objects that adequately reflect relations between components in the physical object.
In other words, a model is not just a virtual (digital) representation of a real object; by definition, it is a virtual representation that is sufficiently isomorphic to the associated real object. Perhaps this interpretation of the model and its isomorphism may seem to some to restrict the idea of a digital twin. In reality, however, the definition is needed for meaningful communication, because communicating entities must share a compatibly interpreted subject domain (knowledge base).
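As an illustration only, the three aspects of this constructive view might be sketched as a minimal data structure. All names here (the `DigitalModel` class, the pump/pipe example, the rated-power figure) are hypothetical, invented for this sketch and not taken from any of the cited definitions:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# A digital model as a triple: virtual objects, functions over them,
# and relations between them -- mirroring the three aspects above.
@dataclass
class DigitalModel:
    # Virtual objects, keyed by the physical component they represent.
    objects: Dict[str, dict] = field(default_factory=dict)
    # Functions over virtual objects that reflect properties of the physical object.
    functions: Dict[str, Callable[[dict], float]] = field(default_factory=dict)
    # Relations between virtual objects that reflect physical relations.
    relations: List[Tuple[str, str, str]] = field(default_factory=list)

# Toy example: a pump feeding a pipe.
model = DigitalModel()
model.objects["pump"] = {"rpm": 1500.0, "power_kw": 4.0}
model.objects["pipe"] = {"flow_lps": 2.5}
# A function reflecting a physical property: load as a fraction of rated power.
model.functions["pump_load"] = lambda obj: obj["power_kw"] / 5.5
# A relation mirroring the physical connection between the two components.
model.relations.append(("pump", "feeds", "pipe"))
```

The point of the sketch is that "sufficiently isomorphic" is checkable: each physical component, property, and connection has an explicit counterpart in the model.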
I do not find it useful to imagine a digital twin as a black box about which it is only known that it miraculously supports effective communication with the object in reality. With such an understanding of a model in mind, we can conclude that if a digital twin is not a model, it can hardly be useful anywhere.
A model-based approach to DT is of course not something that was just invented, even if the word “model” is not used explicitly. For example, see my favorite passage from Michael Park's article “Digital Twins Offer Unmatched Insights For Design Engineers”:
The basic notion is that, for every physical product, there is a virtual counterpart that can perfectly mimic the physical attributes and dynamic performance of its physical twin. The virtual twin exists in a simulated environment that can be controlled in very exact ways that cannot be easily duplicated in the real world, such as speeding up time so that years of use can be simulated in a fraction of the time. These hyper-accurate models and simulations offer engineers and product designers unmatched insights across the entire product development cycle. Still, digital twins are more than just an evolution of digital models, although their goal is similar: Higher quality products and better product support at less cost and less effort.
I like this clear and pragmatic explanation, and I do not consider that “digital twins are more than just an evolution of digital models” contradicts my emphasis on a model-based interpretation of a DT: it is not just an evolution of modeling, it is the next, higher level of modeling.
I believe that a digital twin need not necessarily be a “hyper-accurate” model, as its usefulness does not directly depend on the number of parameters involved in modeling. I agree, though, that a hyper-accurate DT of an aircraft is hyper-valuable.
Is DT actually something new? A characteristic feature of a DT is its capability to support regular bi-directional communications with the associated physical object. Note that such communication could be implemented by a human who shuttles between a model and the object being researched or controlled with the help of the model. A more pragmatic case is the communication between two people: the operator of a model and the operator of the associated physical object. Perhaps such a process can be called a digital twin when, for example, it is the simulation of an (emergency) situation on a spacecraft carried out by the flight control center.
In fact, the fundamental novelty of the digital twin lies in the new level of efficient communications with physical objects (typically through the Internet of Things, as some believe). The key word is “efficiency,” as it reflects the capabilities of today’s technologies that enable online, real-time communications between many real-world objects and their complex models – communications that can be implemented with practically any required level of performance, density, and reliability.
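The bi-directional flow described above can be sketched in a few lines. This is a toy illustration, not any vendor's protocol: telemetry from the physical object updates the model state, and the model's analysis is returned as a control suggestion. All names and thresholds are invented for the sketch:

```python
def update_model(state: dict, telemetry: dict) -> dict:
    """Fold a telemetry reading from the physical object into the model state."""
    state = dict(state)       # keep the update side-effect free
    state.update(telemetry)
    return state

def feedback(state: dict) -> dict:
    """Derive a control suggestion from the model, sent back to the object."""
    if state.get("temperature_c", 0.0) > 80.0:
        # Overheating: suggest throttling the machine to 80% of current speed.
        return {"command": "throttle", "target_rpm": state["rpm"] * 0.8}
    return {"command": "hold", "target_rpm": state["rpm"]}

# One round trip of the loop: a reading arrives (e.g. over an IoT link),
# the model is updated, and a command flows back to the physical object.
state = {"rpm": 2000.0, "temperature_c": 70.0}
state = update_model(state, {"temperature_c": 85.0})
command = feedback(state)
print(command)  # -> {'command': 'throttle', 'target_rpm': 1600.0}
```

In a real deployment each round trip would run continuously and online; the novelty the article points to is precisely that such loops can now run at practically any required rate and scale.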
At the end of his article, Joe notes that, based on the clear definitions from CIMdata and the NAFEMS SMSWG, various forms of digital twins and their definitions can now be considered. He concludes:
There are multiple forms of digital twins for a wide variety of purposes. Each form of a digital twin has specific characteristics to meet its intended goals and should have a more detailed definition outlining the specific capability, functionality or implementation associated with that form of digital twin. The ASSESS Initiative has established a key theme and working group around Engineering Simulation Digital Twins and has provided the following definition of an Engineering Simulation Digital Twin:
"An engineering simulation digital twin is a physics-based virtual representation of a physical twin (physical asset or a meaningful aggregation of physical assets) that exploits information flow to/from the associated physical twin."
I do not consider the two basic definitions of DT to be intelligible. I completely agree that each form of DT should have a more detailed definition, and I think that if in the definition of Engineering Simulation Digital Twin we replace “virtual representation” with “model,” everything will be OK.
“Digital Twin” As a New, Useful Notion
So, from my point of view, a digital twin is a new, advanced level of model.
Why would those involved in marketing DT avoid characterizing it as a model? The reason is clear: effective marketing requires new exciting words, and “model” is something too old, too familiar, and too boring.
Is it useful to denote a new form of modeling by a new notion? New terminology is harmful if one follows the orthodoxy of Occam's principle quoted in this article's opening epigraph: "Entities should not be multiplied without necessity." If, however, marketing honestly explains to the market the real and potential capabilities of new modeling tools, then the new notion can be considered useful.
The concept of "digital twin" is useful, and it will be successful when it stimulates the market. If we do not admit that something essentially new is behind a largely speculative concept, then we probably oversimplify the state of affairs and so ignore Einstein's "Everything should be made as simple as possible, but not simpler." In effect, when we want to follow Occam's principle, Einstein warns us not to throw out the baby with the bathwater - in other words, not to go too far in our simplification.
[Image: left, Occam; right, Einstein]
A similar hype can be observed around "artificial intelligence." It is impossible to define AI constructively; therefore, we observe monstrous journalistic and marketing speculations abusing the term. At the same time, however, useful AI solutions are gradually emerging in a fragmented way. Some of them even demonstrate useful approximations of what humans can recognize as intelligence not produced by humans.
No amount of misleading hype can prevent the market from creating really new solutions and truly useful products that in time gradually will justify names as brassy as artificial intelligence and digital twin.
- - -
[This article is reprinted from isicad.net/articles.php?article_num=21408 with permission.]