by Owen Wengerd
The CAD pundits and CAD companies talk all the time about the next big thing. Much of the chatter is based on marketing hype and wishful thinking, with an occasional bit of reality thrown in. The hot topics du jour are "cloud" anything and massively parallel processing, aka "infinite computing". Now, cloud computing may someday change the world as we know it, but that's not the topic of this post and I don't want to open that can of worms. Oops, my bad, I guess I already did.
Big Changes vs Small
I think it's likely that there will eventually be another seismic shift in how we design and build stuff -- something on the scale of moving from drafting boards to computer screens -- but mostly we will see a slow evolution with many small changes that amalgamate and standardize over time by survival of the fittest. That's how it should be.
I find it fascinating to observe the interplay between marketing forces and consumers. The social dynamics of this evolution are such that marketers generally try to sell big changes (or at least big ideas, whether practical or not) at the expense of small but important and practical ones.
Sometimes I get to thinking about what kinds of small changes could really have an impact, but that might not appeal to marketing wonks and may therefore never happen without divine intervention.
One such change that I have contemplated is the notion of viewing a model (of anything) like a video rather than a photo. In other words, with a time axis, and with discrete frames that can be easily rolled backward and forward, just as with a video. The technical ability to do this is very well established in forms as diverse as video compression and source code version control systems. Imagine that every change to a model, no matter how it is made, results in a small discrete packet of information that describes the change (ideally in a way that can be easily understood via a kind of viewer that compares the changes between any two frames in the history). This approach, however, impinges on a lot of the fundamental ways in which we work, from file formats to distributed multi-user environment management.
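To make the idea concrete, here is a minimal sketch of a model stored as a sequence of change packets along a time axis. Everything here is hypothetical (the `ChangePacket` fields, the dictionary-of-attributes model); the point is only that, once every edit is a small packet recording old and new values, "scrubbing" to any frame is just applying or reverting packets, like stepping through video frames.

```python
from dataclasses import dataclass

@dataclass
class ChangePacket:
    key: str      # which model attribute changed (hypothetical naming)
    old: object   # value before the change (None if newly created)
    new: object   # value after the change

class TimelineModel:
    def __init__(self):
        self.state = {}     # the current "frame" of the model
        self.history = []   # every packet ever recorded, in order
        self.cursor = 0     # current position on the time axis

    def change(self, key, new):
        # Editing while rewound discards the abandoned future, as in an undo stack.
        self.history = self.history[:self.cursor]
        self.history.append(ChangePacket(key, self.state.get(key), new))
        self.cursor += 1
        self.state[key] = new

    def seek(self, frame):
        """Roll the model backward or forward to any frame in its history."""
        while self.cursor > frame:            # roll backward: revert packets
            self.cursor -= 1
            p = self.history[self.cursor]
            if p.old is None:
                self.state.pop(p.key, None)   # attribute did not exist yet
            else:
                self.state[p.key] = p.old
        while self.cursor < frame:            # roll forward: re-apply packets
            p = self.history[self.cursor]
            self.state[p.key] = p.new
            self.cursor += 1
```

Scrubbing `seek()` back and forth is the video analogy in code; the same packet stream doubles as an undo/redo stack for free.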
I hear some of you saying, "But this is just history-based modeling, which has already been tried." Yes, there are some parallels, but it's not the same thing. Take, for example, source control systems in which it is easy for multiple programmers to make changes to the same file simultaneously. This is possible because of how "diff packets" work, and because the software contains algorithms to merge simultaneous changes -- or, at worst, force a person to resolve collisions when they cannot be resolved automatically.
You can't do this sort of thing with history-based modeling.
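The merge behavior I'm describing can be sketched in a few lines. This is not how any particular source control system implements it, just an illustration of the principle: two users start from the same base model and each produce their own change packets; changes to different parts of the model merge automatically, while changes to the same part are flagged as collisions for a person to resolve.

```python
def merge(base, changes_a, changes_b):
    """Merge two independent streams of (key, value) change packets.

    Returns the merged model plus the set of keys that collided
    because both users changed them simultaneously.
    """
    touched_a = {key for key, _ in changes_a}
    touched_b = {key for key, _ in changes_b}
    conflicts = touched_a & touched_b   # both users edited the same thing

    merged = dict(base)
    for key, value in changes_a + changes_b:
        if key not in conflicts:
            merged[key] = value         # non-colliding changes apply cleanly
    return merged, conflicts
```

If user A changes the door width while user B changes the roof pitch, both edits land; if both change the door width, the key shows up in `conflicts` and the base value is left alone pending human resolution.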
I can think of other benefits as well. Anyone currently maintaining a BIM database has probably developed some way of managing snapshots-in-time of the database; this could be completely automatic if the time axis were built right into the data. Incremental backups would be simple and, as with video compression, the time granularity of the data would improve model data compression.
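The snapshot and backup benefits fall out almost for free. A sketch, assuming the simplest possible packet form (a list of `(key, value)` tuples): a snapshot at frame N is just the first N packets replayed, and an incremental backup is just the packets appended since the last backup, with no diffing of whole files required.

```python
def snapshot(history, frame):
    """Reconstruct the model as it existed at a given frame."""
    state = {}
    for key, value in history[:frame]:
        state[key] = value
    return state

def incremental_backup(history, last_backed_up):
    """Only the packets recorded since the last backup need copying."""
    return history[last_backed_up:]
```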
Hmm, all this talk about packets and frames gives me an idea. I think I'll call my small change, "quantum modeling". Wait, I should trademark that: Quantum Modeling (tm); licensing fees negotiable.
I've heard some indications that the recent startup sunglass.io is storing models in a fashion similar to my description, and while writing this post I visited their Web site, but found that it doesn't support my Web browser or my Android phone, so I didn't bother to test or research it further. This, however, illustrates my earlier point that corporate priorities sometimes trump the needs of consumers.
[Owen Wengerd writes about AutoCAD programming at his blog, Outside the Box. More about Mr Wengerd at http://otb.manusoft.com/about ]