The legacy fails
Newspapers remind me of stores that used to carry music CDs. As customers bought fewer CDs, stores carried fewer CDs, and so customers had fewer reasons to come to the store. The only stores today that still sell CDs in my hometown are Walmart and the independent Christian bookstore. The CDs that remain are not to my taste, so iTunes is my A&B Sound (western Canadian version of Tower Records).
Our local newspapers went the same route. We had two, each with two issues a week. One publishing chain bought the other, shut it down, and cut the pay to newspaper carriers to 9 cents per house. Last year, it used the coronavirus pandemic as the reason to cut back to one issue a week, fat with advertising flyers.
Prior to the pandemic, it had eliminated the Letters to the Editor section; with the pandemic, it begged for donations to keep going, calling itself "your community paper," which it no longer was. With woke youngsters at the helm, the front page splashed one too many hit pieces on prominent local men and women, whose primary crime appeared to be their Christian faith. So, even though it is delivered free to my front door, I no longer read it, although I do use it to line the compost bucket.
- - -
In an important article on the future of news, Martin Gurri talks about Andrey Mir's new book, “Postjournalism and the Death of Newspapers” at discoursemagazine.com/ideas/2021/04/13/post-journalism-and-the-death-of-news.
In short, the news used to be distributed by the elite, whether newspapers, magazines, TV, or even state-run radio. Then in the 1990s the big shift occurred, as the Internet allowed those of us with small voices to speak up. Now four billion people compete with the mainstream media over what is news, through forums like Facebook, blogs (like this one), email newsletters, Twitter, YouTube, and so on.
What Gurri and Mir say will help you understand the news business today and why it is so awful. For instance, they note that all news is a form of fiction, because it is presented from one person's point of view.
- - -
What these sorts of discussions seem unaware of are specialty publications, like the ones I produce. I think specialty publishers aren't suffering the problems of mainstream newspapers because we got certain things right:
We understand the new publishing technologies, and so we made use of the Internet early. I abandoned my print newsletter for an email one in 1996. I published my first ebook in 2000 and abandoned print books by 2013. This blog has been operating since 2003.
We target niche audiences, and so have a product that readers want. Being small, we cater to our readers and can afford to interact with them one-on-one. This ties in to an important point that the Internet revolution enabled: people want to know they are being heard -- whether Arab Spring or the bad state of BIM.
We operate on a sustainable budget, and so we survive when large publications with huge staffs and expensive printing and distribution models flounder. As I am fond of saying, it was the computer (and then the Internet) that allowed me in 1991 to become a one-man publishing company and handle every task this job involves: subscriptions and advertising; research, writing, and editing; design, publishing, distribution, and marketing; and feedback. As I don't have a lot of expenses, I don't need a big budget.
While I agree that digital-everything is annoying, it is the primary way to survive in this the-Internet-makes-everything-free culture.
- - -
I must add my appreciation for those who gave me advice along the way, just as I was able to help others to get on their way.
I've been arguing for a decade now that CPU speeds slammed to a stop around 2010, and as proof, my primary desktop computer, on which I get most of my paid work done, is indeed from 2010.
That just might make me a hypocrite after buying a new laptop last month, to replace one I got five years earlier. As I explained to my wife, there are three reasons to buy a new computer:
My five-year-old laptop was (at the time) the top-of-the-line HP Spectre 2-in-1 (hinges fold all the way back), which still works well. Well, one of the 360-degree hinges is getting wobbly and it concerned me that a Windows 10 update was delayed for a year due to Microsoft having problems with the computer's audio driver. Also, the silver-colored keys drove me nuts, as they were hard to read, whether or not the back-lighting was on. Imagine lit-up letters on a light background; HP still sells laptops with that dreadful combination.
Ordering a New Laptop: Not So Easy
I initially ordered a new HP Spectre from Best Buy, which had only two pix of the device; to my horror, I later found it had silver keys. I cancelled the order. I found one on the HP Canada site with black keys, and ordered it ($1,550).
Then I recalled that my son-in-law had bought a Dell over an HP. Dells tend to be invisible, as they are sold only through Dell's Web site and so make no appearance on regular outlets. I took a look at the Dell Canada Web site, and I found an equivalent model to the HP for $1,250 (incl. tax), and -- even better! -- I could pay with PayPal. I bought the Inspiron 2-in-1, sat back, and waited for it to arrive.
A phone call from Dell told me I needed to re-order it, as they could not process my payment through PayPal. Odd, given that from my end PayPal reported no problem. Maybe Dell didn't want to pay the PayPal fees for a laptop that was on sale ($200 off). So I reordered it.
With that done, I got back to HP the very same day to cancel my order for the Spectre. The nice lady on the phone reassured me the order was cancelled. A few days later, I received an email from HP telling me that the laptop was... on its way to me. Back on the phone to HP: even though I had been told the order was cancelled, it nevertheless shipped the next day. No worries: HP has free return shipping, complete with paperwork that the UPS shipping office signs as proof I had dropped it off. Once HP gets back the laptop, they take "20 bank days" to refund the credit card.
The laptop arrived from HP, I checked with friends in case any wanted to buy it off me (none did), and by week's end returned the unit, unopened. In the subsequent weeks, HP sent me four more emails letting me know the exciting news that the laptop had shipped and was on its way to me! All fake news, of course. The refund, eventually, showed up.
New Dell vs. New HP
So, by paying $200 less, what did I get for my money? All of the main specs were similar, such as 14" touch screen, black backlit keys, 16GB RAM, 512GB solid state drive, and Webcam shutter. Here are the primary differences:
Dell wins: The eight cores are useless for CAD and word processing, but process video files twice as fast as four cores do.
Dell wins: It can take both sizes of cards, with an adapter.
Metal Body and Color
Dell wins: no more freezing cold laptop on my lap! And the brown body is a nice change from corporate conformity.
Tie: I find I don't use a stylus, so its presence or lack thereof is irrelevant.
The Dell shows a microphone icon in the taskbar anytime a program runs that can record, such as Zoom. This acts as a nice warning and lets me toggle off the mic easily. Not having opened the new HP, I don't know if it does the same.
- - -
Overall, the trade-off was, for me, worthwhile. Also, I found the styling of the new HP r-e-a-l-l-y ugly. I don't know how long I could have stayed with that black and gold scheme.
New Dell vs. Old HP
So, Ralph, if you claim there is no need to get new hardware, why did you get new hardware? Well, Ralph, allow me to explain.
The biggest bugaboo of the five-year-old HP Spectre was its (lack of) battery life. I only ever got four hours out of it. Normally, not a problem, but when I was at a conference where conference organizers forgot to provide power outlets, the frustration level reached ten zillion percent.
The latest CPUs double the battery life and laptops now sport the oval-shaped USB-C ports, which I found from my Acer Chromebook to be very useful.
Here is what the new Dell does that the old HP doesn't:
So that's what my $1,250 gets me. The old HP is stored away as a backup laptop.
There is one drawback to the new Dell compared to the old HP. The old HP had both full-size HDMI and mini-DisplayPort outputs, so I could attach two monitors easily. The Dell has the full-size HDMI for one external monitor, but I would need to use a dongle to attach a second one (thru the USB-C port). In reality, this is not a drawback, as I never use two external monitors, except to prove that I can.
- - -
A Final Oddity: I cannot find my laptop through the Search function on the Dell Web site; I can find it only by using Bing, externally to Dell.ca.
Final Oddity II, to quote Final Fantasy: I never knew Dell gives out Dell Bucks (well, they call it Dell Advantage), which amount to 3% towards your next purchase. What could I do with $33? I perused the site and found the Dell twist mouse: $25 off on sale, plus $33 off from my Dell Bucks. You twist it one way to flatten it for travel, and twist it 180 degrees the other way for use, which also turns it on.
Centers vs Borders
Martin Gurri's The Revolt of The Public and the Crisis of Authority in the New Millennium (2018 edition) is the textbook for explaining the discombobulation that we peoples of the world feel today. We want discord to go away; he says it can't. Here's why.
Gurri divides the world into two: a Center made of elites (gov’t, media, academia, corporations) directing their decrees and products at the Border, which is made up of all others. In the past, Border people tended to accept what was pushed at them -- limited policy options, limited news, limited job possibilities -- so much so that they came to be known as the Silent Generation. Then technology came along.
Technology allows the Border to think and act and produce independently of the Center. The Center is furious at its diminished power, and so we hear it lashing out, accusing us of being fascists, bitter clingers, deniers, and deplorables. In turn, the Border sees the Center as inept, thieving, self-obsessed, unaccountable, and hypocritical.
A second key to understanding the split is that it is not between left and right; we see the Border and the Center on all sides of political thought. In the USA, for instance, we have seen Socialists fighting their Democratic party and Tea Partiers fighting their Republican party. My wife notes that Trump may well have come to power by taking advantage of the Center-Border divide: he is a Center person who made his appeal to people living on the Border. It explains Brexit, which was not so much a desire to leave the ultra-Center EU as a revolt against the Center in England.
Gurri’s further point is that even though the Border now has power, it is unable to replace the Center, because only the Center is organized enough to run things, no matter how ineptly. As for the Border, it is too diffuse to develop and implement policy with sufficiently broad appeal. Both need each other, even as they despise each other.
The lack of cohesion is due also to the inability of the Center to receive feedback from the Border. Even when it does, the near-unlimited variety of demands made by the Border overwhelms the ability of the Center to respond.
Closer to home, this explains why large CAD vendors are tone-deaf to the concerns of users, whether the Revit-Autodesk conflict or the Solidworks-Dassault disconnect.
The solution? There is none. Gurri thinks the current discontent is unresolvable, and may well carry on for decades. There is no unity to be had. He hopes that a new form of democracy will eventually emerge, but he (like me) says we can never predict the future, not even tomorrow.
Women vs. Government
An excellent Border vs. Center example took place this week. The political and medical elite of the Center in Canada's province of British Columbia announced that 55-65 year-olds could get the AstraZeneca vaccine from local pharmacies. My wife and her friends in the age range were conflicted, because other elites at the Center (primarily from Europe) had been declaring AZ ineffective, dangerous, age-restricted, and the cause of blood clots.
It was the death-by-blood-clot that concerned the women the most, as women are affected by clots more often than men. Their conflicted choice was between suffering from the coronavirus and suffering from a blood clot. So these people on the Border used technology -- their smartphones and texting apps -- to confer and make a consensus decision, independent of the Center.
(In the end, they decided that it was more likely for them to catch the virus than to suffer a clot. With the decision made, they then learned that just 13,000 doses were available for the 490,000 persons in their age group.)
by Alexey Ershov
Cloud-based BIM (building information modeling) became a primary target for development, thanks to the big data volumes that are typically generated by BIM projects. Big data requires big storage. As well, BIM involves collaboration between designers working on specific areas and layers of buildings, bridges, civil, and industrial projects. These requirements correspond well with the capabilities of the cloud from established large-scale platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Added to this, there are only a small number of competitors in the BIM software market (as compared to the MCAD market). All this propelled interest from established companies and startups to develop brand-new BIM-in-cloud software services and platforms.
LEDAS immediately became involved in complex 3D Web projects. Our first client asked us to develop a BIM cloud product that combined architectural, structural, and mechanical engineering design and related modules. The product focused on providing wide support for BIM entities typically found in Autodesk Revit and Solibri Model Checker, an IFC-based quality control solution. By using a 3D modeling kernel, we made it possible for the BIM product to have all the capabilities needed for users to create and edit sophisticated shapes.
To meet additional requirements, such as fire prevention, ventilation, and accessibility, we developed a dedicated BIM verification module for the 3D BIM application. The verification module also took care of collision detection. Thanks to integrated 3D scanning and point cloud data management, we covered the later stages of the BIM lifecycle, including dynamic construction control and the maintenance of buildings and structures.
The solution we developed was a fully functioning 6D BIM application, with detailed planning for costs, resources, tasks, workers, and dates.
Following this, we decided to meet the needs of the contemporary CAD market by developing our own 3D browser platform, with BIM and MCAD as the primary targets. So we created LEDAS Cloud Platform (LCP), a multi-user 3D-on-Web system that includes collaboration and supports many file formats. You can get more details about LCP on its dedicated page.
Since then, several companies have licensed LCP to speed up their Web BIM product development. Some of them are focused on civil BIM projects in the cloud, while others are more interested in using the 3D Web for industrial systems and applications.
For example, one of the products is focused on large industrial models that are hard to visualize in 3D Web environments due to memory limitations. To solve this problem, we at LEDAS added LOD (levels of detail) technology to our 3D Web offering.
Alexey Ershov is CEO of Ledas Group
Kneiling and Hirsch
Each of us have our ways by which we navigate the world. Sometimes, someone comes along who influences us in a better way. For me, there were two men who I appreciated in my teen years, whose thoughts went on to influence my work in engineering and technical publishing.
Neither of them knew me, as I lived in northern Canada and they were columnists in American magazines. Over time, as I read their work, part of their worldview became part of mine.
I was a model railroader during my teen years, and I say that model railroading kept me sane during those years that are tumultuous to many. I faithfully read Model Railroader and Trains magazines, both published by the very fine people at Kalmbach Publishing.
Among the regular columnists in Trains magazine was The Professional Iconoclast, John Kneiling (d. 2000). Little did I know that the word meant "icon smasher," nor, as a young teenager, did his columns make any sense to me. I ignored them. I couldn't, however, ignore the letters of outrage that appeared every so often.
People were having their oxen gored and I eventually forced myself to learn why. It was tough going, but in my later teen years I finally comprehended what he was saying. Mr Kneiling advocated new ways of doing things that went against the popular grain of thinking.
The early 1970s were terrible times for railroads. They were going bankrupt, being bought up, and in the USA! were even nationalized (Conrail and Amtrak). For the first time in my life, I vicariously experienced in real time a fast-moving shift from an assumed monopoly to a weak wannabe.
Naturally, the railroading community was trying to stay optimistic, but he punctured their wishful thinking. "Wow!" I thought as I finally caught on to what he was saying. For instance, he advocated that railroads adopt a new concept he called "unit trains." They would have just one kind of car (oil or coal or grain cars) and travel only between A and B. Standard methodology today, but in 1975 it was an iconoclastic idea.
From him, I learned that it's okay to be right when everyone else is wrong.
Stereo Review Magazine
In my university years, I wanted to know all there was to know about stereos, and eventually became a no-charge consultant helping fellow students buy the best stereo for the money they budgeted -- typically $400 to $800, or about $1,200 in today's money. (Met one of my girlfriends that way.) In the late 1970s, this meant a receiver (pre-amp, amp, tuner), record player, perhaps a cassette deck, headphones, and a pair of large speakers.
I read Stereo Review magazine religiously, absorbing all there was to know. The technical reviewer, Julian Hirsch (d. 2004), impressed me. He had created a standard set of tests for equipment, and then dispassionately reported the results. Mr Hirsch took no guff from furious advertisers.
He was anti-subjective. I remember Bob Carver's outrage at Mr Hirsch's statement that there was no effect to be detected from Carver's new holographic amplifier -- an early attempt to create surround sound. (Later, in 1980, I nevertheless bought the Carver M-400 amp, just because its cube shape was so cool.)
Mr Hirsch's dedication to reporting the results of consistent equipment testing impressed me no end, as did the fact that a single man could stand up to the combined forces of an entire industry. For him, the user came first.
When I became technical editor of CADalyst magazine, I adopted his methods, and even managed to get a couple of advertisers angry when my results contradicted claims in their full-page color ads. For me, the user came first.
- - -
There are, of course, others who influenced me in good ways and, sadly, sometimes in bad. But these two men are two of my early heroes, whose writings influenced me to this day.
I've been using the Opera Web browser since v4, back when its boast was that it could fit on a 1.44MB floppy disc. In addition to being not-Internet Explorer and not-Firefox (the two browsers with the largest market share back then), it had a developer mode that displayed all the text of a Web site. This was especially useful for reading sites that otherwise required one to sign up.
After v12, Opera had a split inside the company, with the founder going off to start up the Vivaldi browser, saying he wanted to return the browser to its Opera roots. Vivaldi is still around but I find it sluggish compared with Opera, and so use it only when I need a not-Opera browser.
Following the split, the Opera company needed a new engine for displaying Web sites, and settled on Blink, the Web rendering engine developed by Google for Chrome; later, Microsoft did the same with its Edge browser. Blink is kind of like Parasolid or C3D: it is the kernel that does the basic work, with developers adding features on top.
Which gets me to the point of this post: Opera has developed some pretty nifty stuff on top of Blink, most of which I don't care about. It does, however, have three functions that I find invaluable. Here they are:
Copy and Insert
Back when travel was common, I would blog CAD conferences live. One onerous task is inserting photos and other images into the blog commentary. Opera makes this easier by recognizing the most-recent image I've copied to the Clipboard during the Insert Image operation.
Here is how it works:
1. Copy an image to the Clipboard. I use Windows Snapshot Maker v3.5, an old version, because the newer ones are overladen with features. (Do you know how hard it is to make a screen grab of screengrab software?)
2. In the blogging software running on Opera, click the Insert Image button (or equivalent in your software).
3. In the Insert Images dialog box, click the Choose Files button. Notice that Opera displays the image in the Clipboard, as well as the three files that it downloaded most recently. (To choose any file on your computer, click Show All Files to access the File Manager.)
4. Choose the Clipboard image, then click Insert Image(s) (or equivalent in your software) to place the image in the blog posting.
What this means is that I no longer have to save an image as a PNG file to disc before inserting it into the blog. Saving steps means faster blogging.
Editing Speed Dial Images
This feature is really obscure, but if you are a heavy user of Speed Dial, then it can be handy. Speed Dial displays large icons on the "desktop" of Opera. Think of it as a visual bookmark. Here is part of mine:
Notice that some bookmarks show as words, some as images. Images are really handy, such as in the third row, to identify the highway cam locations. For most others, though, I prefer the name of the site, as too many images become confusing.
I was setting up a new laptop with Opera and found that every Speed Dial image came up as an image from the associated Web site. Some of them were pretty annoying to look at day after day. It turns out that Opera lets you customize the image for each Speed Dial entry. (I just learned this last week.) Here's how:
1. You add a site to Speed Dial by clicking the heart icon.
2. Notice the tiny arrows on either side of the image, the Wikipedia logo in this figure. Click the arrows to walk through the images Opera found on the Web page. Opera lets you use any of them as the Speed Dial icon.
Okay, this last one is somewhat controversial. We all know what a pain passwords are, and given the controversies surrounding password software that keeps track of them for you, I use the tried and tested method of maintaining passwords in an old-style address booklet.
It works pretty well, except for the times a Web site demands that I change my password and then I forget to update the address booklet. Or I sign up at a new site and forget to update the address booklet. Taking the Web site's offer to reset the password is a pain, because then I need to invent yet another new one. Et cetera.
Like all other browsers, Opera records passwords (if you allow it to), but then also permits you to see them. Here's how:
1. Click the big-red O, and then from the menu choose Settings.
2. In the Settings dialog box, look for the Search Settings field on the upper right, and then enter 'password'. This is a speedy way to get to the settings that involve passwords.
3. Click on Passwords. Notice the Saved Passwords section. All of your saved passwords are listed, and are all blanked out with a row of dots. Notice that each password has an eye icon next to it.
4. To reveal a password, click on the eye icon.
5. Click the ... overflow menu button to reveal several options, such as Copy Password, which copies the word to the Clipboard.
6. Final trick: click the ... overflow menu across from Saved Passwords. The Export passwords option saves all passwords as a CSV file (short for 'comma-separated values').
7. You can open the CSV file in a spreadsheet program like Libre Calc.
There is, unfortunately, no option to import this file into Opera running on another computer.
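Since the export is an ordinary CSV file, a few lines of script can read it as easily as a spreadsheet can. Here is a minimal sketch; the column layout (name, url, username, password) is an assumption based on what Chromium-based browsers typically emit, and the sample rows are invented -- check the header row of your own export:

```python
import csv
import io

# Invented sample rows standing in for an exported passwords file;
# the column names are assumed to match the usual Chromium-style header.
sample_export = """name,url,username,password
Example,https://example.com,alice@example.com,hunter2
Wiki,https://wiki.example.org,alice,correct-horse
"""

# csv.DictReader maps each data row to a dict keyed by the header row
rows = list(csv.DictReader(io.StringIO(sample_export)))
for row in rows:
    print(f"{row['name']}: {row['username']} @ {row['url']}")
```

To read a real export, replace the io.StringIO(...) with open('passwords.csv', newline='').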
It was bad enough to see how many corporations track us when we do something as innocuous as reading just the headlines at a government-funded Web site like Deutsche Welle (the German international broadcaster). The list I show below (found at dw.com/en/european-union-general-data-protection-regulationgdpr-valid-may-25-2018/a-18265246) is about 2/3 of the data skimmers DW employs.
It got worse when, at a trial last week, we learned that Google still tracks us when we run its browser in incognito mode -- never mind what it states:
It's what Google doesn't say in its incognito statement that's alarming.
- - -
It is the nice things that hit me the most. Here are some of them.
Web sites switched from asking us to provide a username to our email addresses as the login. Great innovation, as it was one less thing for us to remember. Except: by using our email address, data skimmers can match which sites we visit.
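To see why the switch matters, here is an illustrative sketch (all site names and addresses invented): two unrelated sites each keep a log of the email addresses that logged in, and a data skimmer with access to both can link the visitors with a simple set intersection:

```python
# Hypothetical login logs from two unrelated sites (all addresses invented)
news_site = {"pat@example.com", "lee@example.com", "kim@example.com"}
shopping_site = {"lee@example.com", "kim@example.com", "jo@example.com"}

# The email address acts as a join key: the intersection reveals
# which visitors can now be matched across both sites
seen_on_both = news_site & shopping_site
print(sorted(seen_on_both))
```

A random per-site username would have made this join impossible; a shared email address makes it a one-liner.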
Facebook was thrilled to announce it has now scraped one billion images uploaded by us to Instagram to feed the hungry maw of its AI image recognition code -- automatically, with no human intervention needed or wanted.
When Google and others helpfully suggest that we sync our data between browsers, it's so that they can continue skimming our data as we move from one device to another. They want to match who we are on our home computer with what we do on our mobile devices, and back again.
When Google and Apple suggest we go onto their family plans for sharing Android apps and iTunes music, it's so that they can tie together members of families in their data skimming code.
Google's latest is to make cookies more private by placing each of us into similar-interest groupings. It will still feed our data to advertising agencies. Privacy groups are still working out what the scam is, as Google is focused only on how to increase the data it collects from us to sell on to others.
More popular than MacOS
There are no native CAD applications for ChromeOS.
This need not be the case, as proven by Krita, a native Android program meant for tablets that runs as well as a native program on Chromebooks. Here's the point I want to make: Krita for Android is not a reduced-function version of the one that runs on Windows and MacOS. It is full-function.
CAD programs written for Android, iOS, and Web browsers are all reduced-function versions, even from leaders in this field like Graebert and Autodesk.
ChromeOS was originally envisioned as a browser-based operating system, and I found the original Chromebook (from Samsung) pretty lame. I gave up on it, and so did major software vendors.
More recently, Google made the big breakthrough by allowing non-browser-based and Android apps to run like native programs on Chromebooks. It has become a viable operating system for professionals.
- - -
Android tablets have suffered in the market, as manufacturers churn out lame, under-powered models. The high-end models are over-priced for what you get. This problem does not exist among Chromebooks and Chromeboxes.
Here, you get hardware with strong CPUs (think Intel i5 and i7) and large amounts of RAM (think 8GB) all for under $1,000 -- complete with 14" high-resolution touch screen (think 2256x1504); full-size, backlit keyboard; and interactive stylus. I paid $700 last year for such a model. This kind of high-end device is not available from Apple, at any price.
The best part is that these Chromebooks/boxes run Android flawlessly, despite the naysaying from the Applephilia tech media.
- - -
So my suggestion is that we could run full versions of ARES Commander or BricsCAD Pro on Chromebooks by porting the Linux version to Android. This works, because Android is based on Linux. Autodesk, on the other hand, lacks a Linux version of AutoCAD, and so cannot be part of the party. But it has the resources to make one, should it be so inclined.
In theory, the Linux versions of BricsCAD and ARES can run on recent Chromebook models, but in practice I have found that it is a dog to get Linux running (hello command-line!) and then programs run too slowly to be useful.
The primary problem, it turns out, is 3D, which tends to be supported weakly by the hardware and software in portable devices. Tasks like rendering and ACIS solids modeling are usually farmed out to the cloud.
On the other hand, Google has committed to upgrading ChromeOS monthly, and new CPUs from new foundries are arriving on the market to dramatically strengthen the software. Perhaps 3D graphics in portable CAD will follow the transition cameras made on smartphones, where the processing of images has moved from hardware to software.
The year 2020 was the year ChromeOS outsold MacOS. But Chromebooks suffer from the Android-customer problem: the laptops tend to be cheap and, as a result, so are the customers. Apple sells less product but charges more, resulting in a customer base that spends even more to accessorize their iBling.
CAD vendors are keen to make customers-for-life out of school children. With classrooms and school-from-homes filling with Chromebooks, 2021 is the year for CAD software vendors to review their product lineup and see how ChromeCAD fits in.
Now take a look at Krita.
$99 proved too expensive
Today marks the 40th anniversary of the Sinclair ZX81, the second generation of the earliest affordable home computer -- and a distant cousin of our smartphones.
Inventor Clive Sinclair specialized in small devices. I recall picking up a brochure of his diminutive calculator while in England in 1974. He also designed a tiny, unsuccessful automobile.
In the late 1970s and early 1980s, I watched the personal computer scene with its confusing array of standards -- S-100, CP/M -- making me hesitant to take the plunge. Part of the problem was that the prices were not personal; a typical desktop computer (as we call them now) fitted out with all the kit came to over $10,000 in today's dollars.
Timex brought the British-made ZX81 to North America, the idea being that the maker of low-cost quality-made watches would be just as successful with low-cost home computers. The problem ended up being that the ZX81 was not quality-made.
Nevertheless, I was desperate for my own computer. For a couple of years I had been programming my HP 41CV calculator (even got the overpriced card reader for it!), but was bumping up against its limitations.
For a time, I considered the ZX81, as it was just $99 in Canada. But then I added up all the stuff I would need to add to actually make it work, such as the 16KB RAM module. For instance, I did not own a tv, so I'd have to buy one. The final tally came to $500 for a computer that only displayed monochrome uppercase text and used a membrane keyboard. I decided against it.
Six months after the ZX81 was released, IBM shook the industry with the release of its PC for $1,500. It de facto established the technical standards still in use today. But that price tag also got you just the box and motherboard: no floppy or disk drives (you were expected to supply your portable cassette recorder), no monitor (you were expected to supply your home tv set), and only 64KB RAM. Fully kitted out, it came to $6,000 at a time when a starting engineer's wage was around $10/hr -- so, four months' salary. (Last week I bought a new high-end laptop for four days' wages.)
Then in early 1983 a local computer dealer put their Victor 9000 kind-of-compatible-with-IBM personal computers on sale, and I took a loan from my parents to acquire one. Buying the Victor helped get me the job at CADalyst magazine and launch my lifetime writing career in CAD.
- - -
The link to our cell phones?
The success of the ZX8x series in England prompted the BBC to commission a company called Acorn to produce the BBC Micro home computer, which viewers could purchase to follow along with the how-to-use-a-computer series being broadcast at the time. Acorn was the company that later produced the ARM CPU design, used today in virtually every smartphone; the exceptions were some older models that ran CPUs from Intel's and nVidia's unsuccessful forays into handheld computing.
Paid vs. free
Apple and Facebook are in a battle with each other. As Techmeme editorialized a Wall Street Journal headline this morning:
Sources shed light on the increasingly personal battle between Tim Cook and Mark Zuckerberg, who in private reportedly said “we need to inflict pain” on Apple.
There are the overt reasons:
There are the covert reasons:
The reasons Facebook gives are nonsense: that Apple users won't "benefit" from targeted ads, and that small businesses will suffer.
Left unsaid is that Apple continues to harvest data on its users. We don't know what it does with the data, as its ad system is little discussed publicly.
We can understand Facebook's desperation. While Apple has less than 10% worldwide market share in smartphones (50% in the USA), its user base is wealthier than the Android user base, and so Apple customers spend more.
Even as Apple cuts off Facebook, Google could do the same to Facebook (and Apple to Google), never mind the secret Facebook-Google agreement to favor each other. Basing your business model on free is precarious.
So why is Apple taking on Facebook? Being pro-privacy looks good in today's surveillance society. As Matt Mireles (@mattmireles) put it, "Tim Cook might be the biggest weapon we have in trying to save our democracy."
There's profits to be made from them there million(s)
CAD vendors often give free licenses to educational institutions in an attempt to hook young people on the software early. I've never felt that this is a valid tactic, for when young people enter the workforce, they use the CAD program the boss tells them to use.
Still, boasting about the number of educational users is a pretty good way to goose user numbers. Solidworks, for instance, could at one time be heard boasting of six million users, but five million of those were educational. (Today, the company says it has 3,246,750 users, a number that appears to me to include the low-cost DraftSight DWG editor.) Keep in mind that 6:1 ratio for the next time you hear a CAD vendor lauding its total user number: dividing the number by six gets us a good guess at the number of commercial users -- a.k.a. actual paying users.
During last week's quarterly conference call with financial analysts, PTC boasted of exceeding one million educational users of Onshape:
As the COVID pandemic unfolded, we saw a real opportunity to help schools and universities, because we have the only true school-from-home CAD solution that works on any device with no installed footprint.
PTC's academic team decided to pivot their focus to Onshape and set an aggressive goal to try to reach one million total education users by the end of fiscal [September] 2021. (To put that in perspective, it's double the number of users that were participating in our education program across all PTC products at that time.)
I am happy to report that already in January, we've exceeded the goal of one million total Onshape education users -- nine months ahead of schedule.
Onshape's Web site, however, boasts that "millions of students and educators worldwide use Onshape's online CAD platform." It leaves me wondering where the difference lies between "one million total educational users" and "millions of students and educators."
How Many Commercial Users Today?
Our rule-of-thumb implies that Onshape should have 167 thousand commercial users. In this case, however, the rule-of-thumb needs to be tossed out, because we know that Onshape had only five thousand commercial users when it was acquired by PTC in 2019.
To figure out how many paying users of Onshape PTC has now (well, as of the end of December 2020), the company makes us do some math:
Onshape had record bookings, up more than 150% from its initial quarter at PTC one year ago, including a nice balance of new logo activity and expansion.
So, 5,000 x 2.5 (an increase of 150% means multiplying by 2.5) = 12,500 commercial users. This number is minuscule compared to the couple of million Solidworks users that Onshape had targeted at its launch five years ago.
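The arithmetic above can be sketched in a few lines, assuming (as this post does) that user counts scale with bookings and that the 6:1 educational-to-commercial rule of thumb holds:

```python
# Back-of-the-envelope sketch of the user math in this post.
# Assumptions: "bookings up 150%" translates directly into users,
# and the 6:1 educational-to-commercial ratio is a fair rule of thumb.

def grow(base_users: int, percent_increase: float) -> int:
    """An increase of N% means multiplying by (1 + N/100)."""
    return round(base_users * (1 + percent_increase / 100))

def commercial_estimate(total_users: int, ratio: int = 6) -> int:
    """Rule of thumb: divide the total claimed user count by six."""
    return round(total_users / ratio)

print(grow(5_000, 150))                # 12500 -- 5,000 x 2.5
print(commercial_estimate(1_000_000))  # 166667 -- the estimate the post discards for Onshape
```

As the post notes, the rule of thumb fails here: we happen to know Onshape's actual commercial base, so the direct growth calculation is the better guess.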
When corporate executives talk with financial analysts, they use special insider language. Here is what that sentence means, translated from PTC-ese into everyday English:
So, about 3,500 additional licenses to existing customers and 3,500 to new ones.
Having 80x more educational than commercial users does make sense. An online CAD program like Onshape looks attractive to schools and universities as a quick solution for remote CAD learning. Schools and teachers are desperate for easy technology solutions to the sudden (and unexpected) demand for teaching remotely -- something for which most were never prepared.
Quite frankly, once you learn the basics of one MCAD system, you've learned the basics of all of them; they all pretty much work the same. Most non-technical schools usually teach just the basics of CAD anyhow. So, sure, get students to run Onshape: it's hassle-free, it's got on-line assistance for learning MCAD, and it's free. But not for long.
Less Free Ride for Education
Still, having one million copies of Onshape accessing servers owned by Amazon Web Services costs PTC a lot in cloud-rental fees. That is the fundamental flaw in the otherwise-lauded move to the cloud: it has become oh-so-expensive for software vendors to offer free versions. This is a big change from the days when it cost software firms nothing to have their free software running on people's desktop computers.
PTC told financial analysts that schools need to start paying:
We've not been charging educational institutions for the software. We had a program that you can use it for free for the first year.
So we do anticipate that, as this school year ramps up and we move into next year, we're going to get a pretty decent conversion rate to pay.
If this is news to you, that is not surprising. Onshape's home page does not make plain the temporary nature of the free-use plan:
It turns out that not all educational users will have to start paying -- just those who want assistance in running Onshape thru PTC's Education Enterprise plan. Another page on the Web site explains:
On the one hand, we have Onshape being presented as the no-hassle solution to easily using and learning MCAD remotely; on the other, there is the suggestion that Onshape is so complex that schools need to pay to have it administered for them.
To Goose Sales
With commercial sales lackluster, it comes perhaps as no surprise that PTC has put some pressure on the Onshape team by creating a new division at the company. It comes with the rather generic name of SaaS Business Unit, and it looks like this:
"It's a business unit that's quite complete," explains PTC. Here is what the new division is responsible for:
The only thing that PTC will handle for the new division is acquisitions. As the CEO of PTC summed up:
That's a business, and the way I look at it they are paving the path for a future PTC.
PTC seems to be counting on converting most of those Onshape enterprise educational users this fall. It will be interesting to see how many schools pay up. To avoid paying, teachers can have students run the free version, and conduct their teaching through Google Meet and the like.
When 30-year-old PTC acquired Onshape, it enthused that this was the future of PTC. At the time, I warned that PTC's eye would wander in short order, as it has a history of enthusing over a new technology before being beguiled by the next. With the acquisition of Arena, Onshape's CAD segment already looks to be relegated to third place in the hierarchy, next to Vuforia AR software and partner tools.
They're occupying Wall Street all over again
When I first became self-employed, I read widely to understand what I was doing. The most reassuring book I read was Alvin Toffler's PowerShift: Knowledge, Wealth, and Violence at the Edge of the 21st Century. Mr Toffler (d. 2016) was a futurist who, with his wife, wrote thick books on what the future could bring; his best-known title is Future Shock.
In PowerShift, he told me that what I was doing -- a newly self-employed one-man technical writer, editor, typesetter, publisher -- was the right thing to be doing, because I represented the future. Cheap technology would allow the self-employed to react nimbly to changes in market conditions in ways corporations cannot.
In that, he was right, at least for me. My business took off, and I survived, while over the years established technical book publishing companies shut down around me.
On the 20th anniversary (2011) of me being self-employed, I thought I could celebrate by rereading the book that gave me that initial boost of confidence. It was terrible. The book's most spectacular failure was not predicting the impact of the Internet, even though it was already around, albeit not yet easily available to the general public. I stopped reading the book, but still have it on my bookshelf as a tribute to what it did for me.
The Rise of Micro-wars
Humans, not being machines, err. This is how we learn. When we fail, we figure out ways to not fail the next time around -- usually. PowerShift showed me that a confident, lauded writer like Toffler could rationally explain future developments that sounded right, yet be proven wrong as history unveiled itself in ways unexpected by us. History is independent of our forecasts. Which brings us to today.
Mr Toffler was correct in some predictions, such as forecasting that technology would allow the advent of the micro-war, which we today call terrorism in some cases. The great truth he uncovered is that technology leverages individuals, like me, to have as much impact as corporations and governments.
In light of this, the havoc wreaked by the guy running r/WallStreetBets should not surprise us. He is just one more manifestation of technology being leveraged by individuals into yet another wave that we can call "populist," which has been crashing against the walls of globalism for a decade now.
Until now, populism has been regional, allowing globalists to sniffily condemn it as just-like-Nazi nationalism: Brexit, the Capitol Hill Autonomous Zone, Thai anti-royal protests, Occupy Wall Street, Front National, Donald Trump, the French and Dutch voters' rejection of the Lisbon Treaty, and so on. It should perhaps not be unexpected that populism finds itself most popular in the haughty, less-than-democratic European Union; as a result, the number of European countries that aren't home to populism is approximately zero.
These movements took on government. Corporations had to be next.
Populism Goes International
When we techno-ists smugly predicted that software would come to eat everything, we had not an inkling as to how far its acidic effect would spread. As we were in-the-know, we felt immune from its effects. It is, however, now eating at our most precious possession, our retirement funds.
This year the wave grew two-fold as populism went international, and took on corporations.
The initial wave of international populism was quite unintentional. As resentment grew against corporations abusing us thru their obsessive data collection, it became a question of when, not if, the revolt would launch. Facebook, in its arrogance, made the "mistake" of openly telling its two billion WhatsApp users that yet more data would be siphoned from them and sold to bidders (Google, mostly). Facebook gave users two choices: agree, or be cut off from family and friends (except in Europe, which has an aggressive data protection law).
Given the dilemma between submission and isolation, users chose escape. My family and I switched to Threema. The good thing about the Internet is that most services are replaceable: WhatsApp with Signal, Google Search with Bing, and so on. With tens of millions of users switching in mere days, Facebook put its plan on hold for a few months while it scrambled to figure out how to stem the international populist revolt.
The wave has not yet crashed on its beach. The government of India demanded Facebook take on European-style data protection for Indians. Other governments, seeing an opening to be popular in the eyes of voters, are also tut-tutting over Facebook breaking its four-year-old promise to never access the data generated unknowingly by WhatsApp users.
Occupy Wall Street Reawakens
Occupy Wall Street was a 2011 attempt to protest against the protection given by governments to financial corporations labeled "too big to fail." It was colorful, due to the many tents pitched, and so the media loved it. Underneath, however, it was a pale effort which collapsed quickly once infrared sensors revealed that nearly all tents stood empty overnight: there was, in fact, no occupation.
(Canada, by contrast, took the opposite approach. The conservative government applied a rather socialist solution by protecting the small investor, instead of banks, by guaranteeing deposits. The guarantees gave individuals the confidence to keep their funds in the banks, saving Canada from the ravages of an otherwise-worldwide recession. Being a conservative government, it receives no credit for the action.)
Just as rage simmers over unwanted data collection by Internet giants, so too rage simmers over gains by billionaires and financial firms seen as unjustified during the pandemic, a.k.a. China's Chernobyl. The Facebook brouhaha was barely down to a simmer when Occupy Wall Street II emerged from the keyboard of a single person.
Killing the Shorts
To take down Wall Street, the Reddit writer r/WallStreetBets engaged his 3.5 million (or maybe 2.0 million) followers to take advantage of free share trading services offered by the likes of RobinHood. He was angry at the favoritism shown by government in protecting financial giants while amateur investors can go bankrupt.
Shares in lame companies like GameStop and Nokia rose as much as tenfold in a few days. As millions of amateur investors bought ever more shares, share prices were forced up, killing the shorts.
Shorting a stock works like this: the investor borrows shares and sells them right away, betting that the price will fall. Later, the investor buys the shares back -- hopefully at a lower price -- to return them to the lender. If the price falls, the investor pockets the difference; if it rises, the investor must buy back at the higher price, and the potential loss has no upper limit. If the investor leveraged the position (borrowed money against the share purchase, and so only paid 10% in cash, say), then they really lose: they have to pay the higher price for the shares and pay back the loan, resulting in bankruptcy, possibly.
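The arithmetic of a short sale can be illustrated with a toy calculation (made-up round numbers, not figures from this post): profit is simply the sell price minus the buyback price, multiplied by the number of borrowed shares.

```python
def short_pnl(sell_price: float, buyback_price: float, shares: int) -> float:
    """Profit (or loss) on a short position: sell borrowed shares now,
    buy them back later to return to the lender."""
    return (sell_price - buyback_price) * shares

# Short 1,000 shares at $30; the price falls to $3: a tidy profit.
print(short_pnl(30, 3, 1_000))    # 27000
# The same short when the price is squeezed up to $300: a ruinous loss,
# ten times the money originally received from the sale.
print(short_pnl(30, 300, 1_000))  # -270000
```

Note the asymmetry: the profit on a short is capped (the price cannot fall below zero), but the loss is unbounded, which is what makes a squeeze so deadly.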
GameStop was seen as the Blockbuster of the 2020s, a chain of five thousand retail stores in multiple countries still selling games in an outmoded physical format. A few months ago, its share price was around $3, and some investors borrowed money to short the stock, confidently assuming they would profit handsomely when the company eventually pulled a Blockbuster and sent the share price into the pennies.
r/WallStreetBets wanted to crush short sellers who profit on suffering firms. He and his followers drove the price of $GME (GameStop) from $30 to $300 in just a few days. One firm, whose contract required it to pay the current share price that very week, was in danger of utter collapse. Angering r/WallStreetBets further, a couple of friendly firms loaned it several billion dollars to stay afloat.
Just as Silicon Valley moved quickly -- in just one day! -- to quell discussion of problems that may have occurred during the November USA federal election, Wall Street moved quickly to quell the mass share purchases. Reddit and RobinHood shut down the action -- or, according to some theories, were pressured to shut down by their investors from Wall Street.
RobinHood then borrowed $1 billion to not run afoul of US law, which requires stock trading firms to maintain a float to cover buys, sells, and changes in price during the two days it takes for stock trades to settle.
To retaliate against RobinHood's blockage of share trading, the international populists posted negative reviews of RobinHood's app on the Google Play Store, driving its star rating down to 1. The next day, Google retaliated against its users by removing 95,000 reviews, bringing the app's star rating back up to 3.5 of 5. ("Why would Google care?" asked my wife. "I dunno," I replied.)
Wall Street suffers far more than Silicon Valley in being unpopular with the general media and politicians, and so the backlash came from conservative and far-left media and politicians alike. Tech writers, of whom too many are beholden to Silicon Valley firms, this time dug eagerly into reasons for the financial blockages.
An hour later, RobinHood opened up again. During that hour, GameStop yo-yoed between $300 and $200. In the meantime, r/WallStreetBets targeted further firms -- revving up more stocks that had been shorted. Visible financial populists, such as Barstool Sports' Dave Portnoy, joined in. He came up with this analogy:
Robin Hood of Sherwood Forest stole from the rich to give to the poor. RobinHood of Silicon Valley resembles more his nemesis, the Sheriff of Nottingham, who protects the rich from the poor.
We don't know what's next, because we cannot predict the future; only fools do so, for short-term profit. Twenty-twenty and 2021 are the proof.
Human nature, on the other hand, is predictable, as it never changes. The greedy will continue to be greedy; the desperate continue to be desperate; the clashes will continue.
The drawback to an interconnected world is that it is interconnected. Technologies developed by governments and corporations are the tools with which populism finds it can attack corporations and governments.
International populism is not the kind of globalism that globalists envisioned for enriching themselves, and so they now face the annoying task of scrambling for badly-thought-through preventatives, such as mocking us "The Deplorables" or offering us "The Great Reset."
Based on human nature, we can predict that international populism will spread. As it does, we will see the desperate become the greedy that they seek to displace, because that's the way we are. Or as my wife put it, "As the desperate become greedy, the greedy become desperate."
Amateur traders on sites like RobinHood and Reddit are driving up the value of GameStop, best known as a retail computer game seller. As I write this, $GME is up nearly 10x in one week. And as TechCrunch headlined it, "Gamestop, memestocks, and the revenge of the retail trader." The downside: normally-legit fund firms are losing billions on shorts and need support from other firms to avoid collapse.
Here are some of the best tweets on the topic:
"oh no the wrong people are manipulating the stock market"
- Brandy Jensen (@brandyljensen)
"People looking for an explainer of the Gamestop thing: The Gamestop thing is like Bitcoin, in that they are both just gambling. At least the Gamestop people aren't pretending they're saving the world when they're recruiting more people on Reddit to get addicted to gambling."
- Ben Collins (@oneunderscor)
"Blockbuster watching Reddit drive GameStop stock through the roof"
- Peter J. Hasson (@peterjhasson)
"[Reddit thread] r/WallStreetBets literally could have stopped the 2008 financial crisis"
- meg yar bitchell (@MeganBitchell)
"This GameStock trader with an elegant definition of behavioral economics: 'I just figured that the market is completely irrational and that everything I learned in college means nothing'."
- Miriam Elder (@miriamelder)
"Downloaded Robinhood a few years ago and got a free stock (either via a link my brother sent me or just the beginners free stock). Free stock was gamestop. Well..."
"Maybe the hedge fund can learn to code"
- Otto Von Biz Markie (@passionweiss)
I'll leave the last word to Ryan Broderick (@broderick): "If our economy is hooked up to an Internet dominated by corporate-owned social platforms that incentivize virality, then, at a certain point, commerce and virality just become the same thing. But now online communities are big enough and self-aware enough to move the market how they see fit."
Oopsie hardware design
While I was technical editor of CADalyst (1985-1991), I reviewed an awful lot of hardware -- graphics boards, monitors, display-list software, pen and electrostatic plotters, dot-matrix plotters, laser printers, hardware and software plot cachers, hardware and software scanners -- because CADalyst was at first the only game in town.
Those were remarkable years, in that I could have hands-on experience with hundreds of thousands of dollars of equipment related to CAD. (The most expensive was a $70,000 E-size super-fast Vidar scanner, that in today's dollars would be closer to $200,000.)
I developed testing methodologies to ensure every product in a class was tested identically, and established a rating system that ran from Not Recommended (almost never awarded) to Highly Recommended (rarely awarded); most products were Recommended, because they worked. I had been influenced by the technical editor at Stereo Review magazine, who was hard-core about consistent and repeatable testing that could be validated. This made some vendors unhappy at him, and at me.
The DraftMaster line of pen plotters from Hewlett-Packard, for instance, usually received the Highly Recommended rating from me, because they were very, very good.
One of the hardware products to which I gave the dreaded Not Recommended label was a pen plotter from Houston Instrument, whose parent company at the time, Bausch & Lomb, also produced contact lenses. HI plotters were considered mid-range, doing a reasonable job at a reasonable price, and the models I had tested in the past did a decent job.
Then a new model arrived with a bizarre design flaw. First, though, let me explain how rollerbed pen plotters worked. You manually affixed a sheet of paper (D- or E-size) on the plotter's full-width roller with clamps that typically were covered with soft rubber or grit for better gripping power. Remember the word "grit." This model boasted a full-width grit wheel.
When plotting the drawing, the pen head moved side to side, while the large roller pulled the paper back and forth. Diagonal lines were made by moving both at the same time, at varying speeds. The figure shows a typical HI rollerbed plotter of the late 1980s:
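The coordinated two-axis motion described above can be modeled with a toy sketch (my own illustration, not vendor firmware): the pen carriage supplies X motion, the roller supplies Y motion, and stepping both at varying rates traces any slope.

```python
# Toy model of rollerbed plotter motion: the pen carriage moves in X,
# the roller moves the paper in Y; diagonals come from stepping both
# axes simultaneously at varying rates.

def diagonal_steps(dx: int, dy: int):
    """Yield (pen_step, roller_step) pairs that together advance the
    pen by (dx, dy), stepping both axes in the same pass."""
    steps = max(abs(dx), abs(dy))
    x = y = 0
    for i in range(1, steps + 1):
        nx, ny = round(i * dx / steps), round(i * dy / steps)
        yield nx - x, ny - y
        x, y = nx, ny

# A 45-degree line: pen and roller step in lockstep.
print(list(diagonal_steps(3, 3)))  # [(1, 1), (1, 1), (1, 1)]
```

A shallower line, say (3, 1), interleaves pen-only steps with combined steps -- the "varying speeds" the paragraph mentions.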
Sometimes, a roller would accidentally spit out the paper, due to an error in the movement codes sent by the CAD software to the plotter, or for some other reason.
As I tested the new arrival, it spat out the paper and continued plotting. Here I discovered the horrific design flaw: the pens were located over the grit wheel, so as the plotter continued its work, it methodically wore down the pen tips on the coarse grit wheel.
Other plotters, including those from HI, located the grit wheel elsewhere in the design. So I had to give this model a Not Recommended rating.
Millions of lost users
In my previous post, Why Google (Cannot) Fall, I wrote about the Network Effect, which tends to insulate giant social media companies from decline. When more people use a network service, like Facebook, then even more people have a reason to use it.
Barely a week after I wrote that, we saw the effects of the reverse network effect: the more people who are banned from (or quit) social media, the more depart. By coincidence, my daughter-in-law and I quit Facebook the day before Twitter banned the president of the United States. We quit because Facebook had broken its promise to never suck up data from WhatsApp.
Facebook will not feel the effect of us two leaving. The last post I made was in 2010. I hadn't used it in years; my wife, who had been using my account to keep up with some distant friends, had also stopped some time ago, as the friends either abandoned Facebook or switched to communicating through texting and good old email.
Indeed, most of our family and friends are now using various forms of texting software to replace the spies operating at Facebook, et al. Some of our family has switched to Threema ($5 one-time fee), which is so secure it has no idea who we are.
When Twitter banned D. Trump, that account had over 80 million followers. In reaction, Rush Limbaugh cancelled his Twitter account, with its 88 million followers. In counter reaction, Google and Apple blocked people from downloading the app for Twitter's small competitor, Parler, to which conservatives had been migrating.
The bulk mail service used by the Republican party cancelled it as a customer. Some months earlier, the bulk-email service MailChimp had promised to block any customer whose emails it didn't like, with no appeal possible (following which I quit it as my bulk email service). Other social media companies, like Pinterest and Reddit, jumped onto the blocking bandwagon, because it was a safe thing to do in the current environment.
So what? Conservatives deserve it -- you might think.
Then came the news that the Royal Californians, hyper-progressives Meghan and her husband, will quit all social media, including Twitter, Facebook, and Instagram, calling them an "addiction worse than alcohol and drugs." They have 10 million followers. As progressives, they felt that they could not promote addictive behaviour.
Progressives are all about freedom. When you are addicted, you're not free.
Not the Dogs Facebook Makes Us Out to Be
Now, it is we who are banning them -- and that was never supposed to happen.
This is not what arrogant social media titans had in mind when, some years ago, they began the anti-free speech practice of blocking and cancelling accounts. We were supposed to be addicted, and we were supposed to follow their admittedly-capricious rules to be let into their clubhouses.
The fundamental flaw in their reasoning is this: they fell for the lie of Determinism. It says we humans are programmable and can be manipulated into doing whatever the social media monopolists felt like having us do. We aren't, it turns out, Pavlovian dogs; we have, after all, free will.
And so we arrived at the point in the history of the world in which being banned by social media companies has become a mark of pride. Twitter had banned me for a while, I am proud to say, following which I stopped posting commentary there.
And today we are at the unexpected juncture of conservatives and progressives agreeing on one thing: social media systems need cancelling.
Now, blocking an account with 80 million followers does not mean that Twitter loses 80 million customers; it does, however, mean that 80 million people have one fewer reason to access Twitter. First one fewer, then more fewer... As more accounts are blocked and more accounts quit, the network effect shifts into reverse; cf. Friendster, MySpace, Second Life, and so on.
This is not the end. "Life was possible before they existed. It can be possible again without them," Emily (@ethereal_em) reminds us.
A reader writes,
Here is an article that claims the reign of Big Tech is coming to a close, because the huge amounts of data that currently require huge server farms to process will soon be handled by inexpensive, more efficient software and hardware that can fit on your dining room table (that seems like a wild exaggeration)!
I read this with a faint sense of hope that the monopolies of Google, Twitter, and Facebook could be broken up with inexpensive competition, but I find it hard to take the author's argument seriously. It seems to me that Big Tech has the all-important head-start, the brand recognition, highly sophisticated data bases in place, billions in capitalization, and many of the best personnel in their corner.
I could see some independents taking a serious run at Facebook and Twitter, but it seems to me that Google is in a class by itself. I can't see anyone threatening its domination short of a political move to break it up under anti-trust legislation such as the US government did to AT&T in 1984.
Since I am not well-educated in the field of information technology, I would be interested in your opinion.
I read the article, and found that the author fails to distinguish between software and people. Software can be easily replaced; people cannot be displaced so easily.
For instance, pundits have long proclaimed email is dead, but people keep using it, so the software survives. Only when people abandon software, like Second Life, does it die.
Entities like Facebook and Twitter are successful due to the network effect, a theory that says the value of a network increases with the square of its size: have twice as many people using a service (network) and it becomes four times as valuable to them.
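The network effect is usually formalized as Metcalfe's law; a minimal sketch, counting the possible connections among n members:

```python
# Metcalfe's law: a network of n members has n*(n-1)/2 possible
# pairwise connections, so its value grows roughly with n squared.

def connections(n: int) -> int:
    """Number of distinct pairs in a network of n members."""
    return n * (n - 1) // 2

print(connections(10))  # 45
print(connections(20))  # 190 -- doubling membership roughly quadruples connections
```

This is why a departing account hurts more than its single member: every connection that account could have made disappears with it.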
The network consists of people connected to one another and to resources. Early people-oriented computer networks like CompuServe and AOL failed when the Internet became accessible to regular folk, because the open Internet offered a far larger network effect; closed CompuServe and AOL could offer only a fraction as many connections to people and resources.
The good news is that there already are services that aim to replace Facebook/Twitter/Google; the bad news is that they are not all that popular, due to their lack of sufficient connections. Nothing lasts forever, so maybe one day Facebook will be the next CompuServe.
For what it is worth:
Guest editorial by Alexander Yampolsky
Several generations of builders over the decades failed to construct buildings whose commissioning would result in an industrial revolution. In such cases, errors were looked for in the design; below is an overview of what I consider to be the errors in BIM's design.
Collaboration based on models; the death of drawings
BIM fans list numerous drawbacks to drawings and so call for a cultural shift by refusing to collaborate on the basis of drawings in favour of collaboration on the basis of models. Collaboration is based on understanding, and essentially the question is, “Which do we understand better?”
I am a structural analyst. I formalize analysis results as simple drawings because I am sure that it is possible to explain or understand something using only the language of drawings.
If someone requires the report as a 3D model, should I immediately attend training?
If you do not understand drawings (the professional designer's language), it means that you don't understand textbooks, building codes, regulations, and common instructions. If you see them only as "circles and sticks," then what sense is there in collaborating with you? The reason some people do not understand "circles and sticks" is that they left drawings behind and are not going to return to them.
Single model, single source of truth
A single model sounds attractive, but does not work in practice. For example, the figure below illustrates an intermediate stage in designing a fixed-end arch.
The model comprises a pair of foundations and five units (of eight in total) of an arch body. Yes, the model is true in terms of the location and dimensions of these elements. But the intermediate model will misdirect us when we want information about forces at the points where the arch rests on the foundations. Based on the false information, we will construct foundations and end up with a dangerous structure.
The shape and dimensions of foundations correspond to real forces. Where did the designer get the correct information? Probably, the structural analyst created his own highly specialized model, even before creating the "single" model, then calculated the forces and configured the foundations. See the figure below.
The analysis model is a source of lies in almost all aspects and a source of truth only in one aspect: forces in its elements correspond to the real state of the arch during its operation.
The postulate about the single model does not take into account that any bounded system (where all elements are interdependent) becomes the universal source of truth only after it is completely built, i.e., after completion of the design stage.
In the design stage, designers are forced to use many highly specialized, partial sources of truth. And this applies not only to analysis models; the necessary knowledge may also be obtained from reference books, standard construction practices, and similar projects.
Extracting drawings from models
Let's create the reinforcement field drawing in the traditional way:
CAD Step 1. Draw a dashed outline, draw a bold line, draw a callout line, write text (diameter, steel grade, rebar spacing). The drawing is ready; see figure below.
Now do the same using new technology: create a model with BIM.
BIM Step 1. Select the command Reinforcing by Area, draw a contour, and specify properties (diameter, steel grade, direction, rebar spacing). The reinforcement field model is ready. Let's extract the drawing.
BIM Step 2. Adjust the view parameters, adjust the object visibility (only one rebar should be visible), and draw an annotation (diameter, steel grade, rebar spacing). The drawing is ready. We arrive at the same drawing as in the figure above, but using many more resources and tools.
When comparing CAD Step 1 with BIM Step 1, we see that the designer entered the same source data (parameters) in both cases, following approximately the same approach to create the reinforcement field. However, in the CAD case the source data are simply saved as a drawing, while in the BIM case they are processed and transformed into the reinforcement grid. So, here is the algorithm of the new technology:
If this is a breakthrough, then it is one that breaks through into the realm of absurd.
It would be easier to extract the source data from the drawing, create the model, and calculate the specification -- and all of this can be done automatically.
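That reverse route can be sketched in a few lines: parse the parameters out of the drawing's annotation text, then compute the specification from them. The annotation format "d12 A500 s200" and both helper functions below are hypothetical, invented only to illustrate the idea; a real implementation would read text entities from the CAD file.

```python
import re

def parse_annotation(text):
    """Recover rebar parameters from a drawing callout (hypothetical format)."""
    m = re.match(r"d(\d+)\s+(\w+)\s+s(\d+)", text)
    if not m:
        raise ValueError("unrecognized annotation: " + text)
    return {"diameter": int(m.group(1)),   # bar diameter, mm
            "grade": m.group(2),           # steel grade
            "spacing": int(m.group(3))}    # bar spacing, mm

def specification(params, field_width_mm, bar_length_mm):
    """Derive the specification: count bars across the field, total their length."""
    count = field_width_mm // params["spacing"] + 1
    return {"count": count, "total_length_mm": count * bar_length_mm}

params = parse_annotation("d12 A500 s200")
spec = specification(params, field_width_mm=3000, bar_length_mm=6000)
print(params, spec)
```

The point is that the drawing's few annotated numbers already contain everything the "single model" holds, so the model and the bill of materials follow mechanically from them.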
[This article first appeared on the isicad.ru CAD news portal. Mr Yampolsky also authored "A Third-generation 2D Editor" on WorldCAD Access and "Interpreting 3D Models from Formalized 2D Drawings" on upFront.eZine.]
Install a cache
In recent years, my computer suffered from two serious flaws, which I detailed at worldcadaccess.com/blog/2020/10/its-hard-buying-a-chromebox.html:
At first, I thought that problem 2 was due to CPU overload, but it turns out the crawl is due to the storage drive running at 100% for long periods of time. This surprised me: the drive is a solid state drive, Windows only has to run a Web browser, and SSDs are supposed to be fast.
My first solution was to look online for reasons why drives run at 100%; I tried a half-dozen suggestions, like turning off indexing. None worked.
My next solution was to add more RAM to the computer, so it now has 12GB instead of 4GB, but this did not solve the problem. Windows normally uses only half of the available RAM, so a 4GB system runs on 2GB. When there is not enough room in RAM, Windows automatically pages the overflow to the hard drive, which can slow things down.
Now, this computer is seven years old, and so I wondered if its 64GB SSD could be a slow one, being so old. A 64GB solid state drive suffers from having half the data lanes of 128GB and larger SSDs, hence it is inherently half as fast.
So I thought of replacing the SSD. Buying one large enough, like a new 240GB drive, would boost the read/write speed by 4x. I wondered if a 4x speed increase would actually solve the problem, and this would be an expensive way to find out.
The other problem in buying a new drive is how to clone Windows from the old SSD to the new one. The old drive is the short M.2 mSATA style; see image above.
One solution is to buy an NVMe-to-USB adapter ($30), then plug the adapter into the computer's USB port. After that, use cloning software to precisely copy everything (including Windows) from one to the other. But the combined cost of a new SSD and USB adapter was getting too high for me.
I mulled things over.
RAM! The computer now had lots of RAM, and so I wondered if there still were caching software around. We used to use caching software a lot in the old days, when we operated off floppy drives and slow hard drives. I outfitted my very first computer, a Victor 9000 from 1983, with a then-massive 128KB memory board, and ran caching software on it just to hold the spell check dictionary for WordPerfect v5.x.
The idea behind caching is that a part of RAM is set aside as the cache (storage); software thinks it is still working with the drive, but the cache fools it. The caching software transparently moves data to and from the drive. Because the cache is in RAM, it works very fast.
TIP: There are two kinds of caching software. One uses a portion of RAM to shuttle data between the software and the drive; this is the kind we need here. The other kind turns part of RAM into a drive, complete with a drive letter like Z: -- the kind I used for that old WordPerfect spell checker.
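The first kind works roughly like the toy sketch below: recently read disk blocks are kept in RAM, so repeated reads never touch the slow drive. Real caching software also handles writes and sits below the file system; this sketch, with a stand-in `disk` backend, illustrates only the read path and the least-recently-used eviction.

```python
from collections import OrderedDict

class BlockCache:
    def __init__(self, backend, capacity):
        self.backend = backend        # function: block number -> bytes (the "drive")
        self.capacity = capacity      # max blocks held in RAM
        self.cache = OrderedDict()    # insertion order doubles as LRU order
        self.hits = self.misses = 0

    def read(self, block_no):
        if block_no in self.cache:
            self.cache.move_to_end(block_no)   # mark as most recently used
            self.hits += 1
            return self.cache[block_no]
        self.misses += 1
        data = self.backend(block_no)          # slow path: go to the drive
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used block
        return data

disk = lambda n: bytes(512)    # stand-in: every block reads as 512 zero bytes
c = BlockCache(disk, capacity=2)
c.read(1); c.read(1); c.read(2); c.read(3); c.read(1)
print(c.hits, c.misses)        # prints: 1 4
```

The second read of block 1 is served from RAM; every other read must go to the backend, including the last one, because block 1 was evicted when block 3 arrived.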
So I bought a cache program ($30), installed it, and set aside half (6GB) of the RAM for it. Now the computer purrs.
It is, however, slower at starting up, as the caching software starts uncached and then makes the initial move of data into the cache.
The caching software comes with a dashboard that reports how much RAM is used for caching, so after a while I was able to reduce the cache from 6GB to 4GB, leaving 6GB for Windows.
Tip: Samsung EVO and PRO solid state drives include caching software free, which is why I tend to use that brand. Your computer needs at least 8GB RAM for Samsung to implement the cache.
The evil that lies in RAVCpl64
For the last two years, I have been baffled and bamboozled by the audio driver going bad in the tiny Gigabyte PC I use in my entertainment room.
My initial fix was to reinstall the driver, repeatedly, something that took a good ten minutes each time. The fix lasted a few weeks, then a few days, and finally for only a day.
The computer has two audio drivers. Intel's audio driver worked over HDMI; the one from RealTek did not. But the Intel one would disappear, leaving only the RealTek. Ergo, no sound.
Naturally, I searched forums to solve the problem. It is not uncommon. After trying a dozen different things, a comment from one frustrated user struck a chord with me: he noted the problem occurred when the computer lay idle. I realized that that matched my experience.
The question then became: what was happening while the computer was idle (running, but not being used by me)? For some reason, something was removing the Intel sound driver during that period of inactivity.
My hunt changed to looking for what might be doing the dastardly deed -- was it Windows, or was it something else making the "correction" to my system?
One day last week, I was using CCleaner to peruse settings on the computer. I noticed that RealTek loaded an audio utility program, called RAVCpl64.exe, during start-up. I usually try to minimize the number of programs loaded during boot-up, and so I stopped it from loading. As I did, I thought -- hmmm, I wonder if this is the culprit.
I monitored that computer for a week, rebooting it from time to time, letting it sit idle for hours on end. The Intel driver hung in there.
Solved: Prevent RAVCpl64.exe from auto-loading during boot-up.
I decided to replace my wife's computer.
It is a 64-bit machine that -- curiously enough -- runs 32-bit Windows 10. Even though I had upgraded it from 2GB to 6GB RAM (for free, using spare memory modules from older computers), 32-bit operating systems see only 4GB -- it's an addressing limitation inherent in 32-bit computing.
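The 4GB ceiling is simple arithmetic, not a Microsoft policy: a 32-bit pointer can name at most 2^32 distinct byte addresses, no matter how much RAM is installed.

```python
# A 32-bit address has 2**32 possible values, one per byte of memory.
addressable_bytes = 2 ** 32
print(addressable_bytes // 2 ** 30)   # prints: 4  (GiB addressable)
```

So the extra 2GB I installed sat unseen until the machine ran a 64-bit operating system.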
As well, Microsoft really didn't want to update the Windows 10 running on it. I made a few manual attempts to update it, but Windows just wouldn't give in.
Microsoft looked foolish when it promised it would always keep Windows 10 updated. Apple controls all the hardware, and even it still suffers from upgrade problems to its proprietary operating system. My wife's desktop was not the only one to suffer abandonment issues: my HP Spectre laptop was also in the doghouse, upgrade-wise. (Spectre is HP's topmost brand of laptop, so I would expect the highest level of support.) The Spectre was running Windows 10 from 15 months ago -- two updates behind the pack. It turns out the problem was with two audio drivers causing BSODs [blue screens of death], and Microsoft finally got them fixed this week.
Switching to 64-bit Windows
It is possible to upgrade a computer from 32-bit to 64-bit Windows 10, but the only way is to wipe the machine. So I decided to buy my wife a "new" computer. Typically, I buy lease-return, business-oriented, small-form-factor, HP desktops. The new one cost $150.
I copied all her files to a portable SSD drive, and then I got her new one up and running. It was pretty easy, as she uses only a Web browser and Libre Docs.
Then I turned my attention to the old one. It was now a project computer that took me four days to complete, on and off. As I told my wife, for Christmas I could have either bought myself a 4,000-piece Lego set for $350, or spent $150 (on her new computer) and had a lot more fun with her old one.
As part of the process of installing 64-bit Windows, I had Windows destroy the hard drive's partition, recreate it, and then format it. Here is the curious part: later, when Windows was up and running, it had maintained the Old Masters wallpaper that my wife liked. Curious.
With Windows updated to 20H2 and running on 64 bits, I turned to the hardware. The old HP suffered from these limitations.
I set about solving the problems.
This computer is not that old, so I wondered why it had neither an HDMI nor a DisplayPort port. Perhaps it had only VGA so that the security functions of HDMI/DP would not get in the way: the business could then use non-security-oriented cables and monitors. Or maybe leaving out HDMI was just a cost factor; VGA is, after all, capable of 1920x1080 resolution, like HDMI/DP.
(VGA is short for "video graphics array," the high tech name IBM gave its hot new 640x480-resolution graphics board in 1987.)
But modern devices, like my movie projector, expect HDMI input. To solve the lack of HDMI, I ordered a generic-brand VGA-HDMI converter ($15). See figure below. Be careful which style you buy, as some adapters only go the other way, from HDMI to VGA.
This adapter comes with two support cables:
Much to my surprise, it worked, albeit the 1920x1080 image is slightly blurrier than true HDMI output. Movies on Acorn worked well, but Netflix was pretty coarse; I don't know if that was just a bad transmission that evening, or if it was Netflix detecting the VGA source. I might still buy a graphics card to get true HDMI, which I see cost about $55.
This computer had no WiFi, and I can see why a work computer would lack it: so that data could not be stolen over the air. On the other hand, this compact machine is festooned with 10 USB ports, a workplace security nightmare. It even had a serial port!
To solve the limitation, I looked for an internal WiFi card to plug into the computer. But as I looked, I found that they really don't exist anymore, at least not new.
So I bought an external WiFi adapter that plugs into a USB port ($30). This is the D-Link Wireless AC600 Dual-Band USB adapter. It is the size of a thumbnail, yet has the WPS button (in gray) and a green flickering LED to indicate data transmission; see figure below.
When I first inserted the USB adapter into the computer, Windows asked me what I wanted to do with it. Huh? This came as a surprise to me, as I had expected Windows to automatically find the device driver for the WiFi dongle. But it turns out that the adapter has a small amount of storage, about 8GB -- like a USB drive -- that holds the setup software. Brilliant! -- once I figured it out. No installation CD to lose.
So Windows sees it as a disk drive, initially. Set-up was a matter of double-clicking the drive letter, then double-clicking the setup.exe file. Once the WiFi driver was installed, it worked well.
I took advantage of that WiFi adapter being USB. I got a long USB extension cord, and then plugged the adapter into the end of it. The extension cord places the adapter closer to my WiFi hotspot.