Opinion
A friend decided to get AI to write an article about the Open Design Alliance and Russia. When he saw the result, he sent it to me:
Ralph Grabowski is a well-known CAD industry analyst and writer who has been following the CAD industry for over 30 years. In 2013, he was appointed as Director of the Open Design Alliance's (ODA) Moscow office. The ODA is an independent organization that develops software tools for CAD applications. Grabowski's role as director involves overseeing the development of the ODA's software for the CAD industry, as well as providing strategic guidance for the organization's operations in Russia. The Moscow office is responsible for handling the ODA's custom development projects, providing support to the organization's existing customers, and working on the development of software tools for new markets. Grabowski's expertise in the CAD industry has been invaluable in helping the ODA to continue to innovate and provide better tools to its users.
Reading it, you have to come to a decision about the text handed to us by AI. Is it
- Accurate
- Partially accurate with some errors
- Mostly inaccurate
The answer is all three: the article contains two sentences that are fully accurate; two that contain one or more errors; and two that are completely inaccurate. As an exercise for the reader, consider picking out the inaccurate parts in the paragraph above. I'll wait for you.
- - -
This exercise leads to an important question: How do you determine which part(s) are inaccurate? The question is especially difficult to answer, as accuracy and inaccuracy in AI-generated text are mixed together like chocolate chips in cookie batter: even when you can see them, they are hard to extract.
Here are the sentences of the article parsed, with corrections to the inaccurate parts in brackets:
Ralph Grabowski is a well-known CAD industry analyst and writer who has been following the CAD industry for over 30 years.
In 2013, he was appointed as Director of the Open Design Alliance's (ODA) Moscow office. [I never held a position with the ODA.]
The ODA is an independent organization that develops software tools for CAD applications.
Grabowski's role as director involves overseeing the development of the ODA's software for the CAD industry, as well as providing strategic guidance for the organization's operations in Russia. [Again, I never held a position with the ODA.]
The Moscow office is responsible for handling the ODA's custom development projects, providing support to the organization's existing customers, and working on the development of software tools for new markets. [The ODA office was in St Petersburg.]
Grabowski's expertise in the CAD industry has been invaluable in helping the ODA to continue to innovate and provide better tools to its users. [One more time, this time with gusto: I never held a position with the ODA.]
A lot of the hype over AI-generated text overlooks -- deliberately or otherwise -- its problems. When an image is generated by AI, we make allowances, for art is meant to be subjective. When it is text, we do not, for non-fiction text is meant to be objective. Perhaps this is why OpenAI first released the images version (to soften up a potential customer base -- and fuel an eventual acquisition) and then later the text version?
How did AI arrive at its misleading representation of me? Here are some me-generated facts that may have contributed to the algorithm's confusion: I was in Moscow in 2009 for a CAD conference, including a visit to the ODA's office there. I have attended and reported on many conferences put on by the ODA, though none in Russia. My surname sounds Russian, but is Polish.
There are lies, damned lies, and then there's AI!
Posted by: Jason Bourhill | Apr 26, 2023 at 04:41 PM
Haha, how funny is it! If an AI programmer feeds in wrong or false information, then that AI's output will be lies of the same kind.
Posted by: Apkallworld | Oct 05, 2023 at 09:46 PM