A Friday Rant - Don't call something AI just because you feel like it

I’m going to rant a little bit.
About the common intentional as well as unintentional mutilation of technical terms.

It’s not new.
It has always been a favorite way to market technical products, and is often justified by saying it’s “the sizzle.”

The first <really> egregious example I remember running into, long ago, was “Stateful Packet Inspection” used to describe a proxy firewall feature. It sounds cool, but when you start digging into what the individual words mean, questions arise about what is really being described, and whether any two people define it the same way.

In the same way,
This past week there have been at least two instances where, to my frustration, both the knowledgeable and the naive have cooperated in further misusing terms.

The first is the “White House Panel on AI.”
For starters, AI (aka Artificial Intelligence) is not well defined; even people who work in the field don’t typically define it the same way. Some say that sentience is necessary for something to be intelligent; others say not necessarily. Some say that a computing device can never really be intelligent on its own because its behavior is limited by man-made algorithms and it merely obeys what it has been told. Some say that machine learning is a form of intelligence. Others might want to look for other human attributes, like some level of “understanding.”

But,
Until now, hardly anyone has defined “intelligence” the way the White House Panel does: as mysterious computation not understood by the masses.
Yes, there <might> be some type of AI attached to what Facebook, Twitter, Apple and other tech giants are doing, but the vast majority of what we see today is merely the product of <human intelligence> amplified by computational power. That is why, for example, Facebook has to hire thousands of <humans> to sift through ads that violate its new standards on Truth and Falsehoods.

So, billing the White House Panel, which is supposed to discuss Analytics and its use in Social Media, as something that focuses on AI is likely selling a false bill of goods.

Similarly, yesterday I saw a guest on a cable news channel describing how his software can use effects commonly available to Hollywood movies to place his own head on a celebrity’s body, speaking with Obama’s voice.
His point was that although today we place our highest trust in audio and video evidence, that won’t be the case much longer, once false advertisements built with his technology become the next possible step in influencing elections.

But then, he said that he uses AI for his photoshopping and voice work.
If his technology has anything to do with AI… Really, what would you have me believe?
Today, there likely isn’t <anything> in what he described that would use AI in any useful way. Maybe when and if AI becomes competent enough to manage his technology it’ll be useful in that way, but only then.

Again,
I view this as just another example of someone calling anything a computer can do that isn’t easily understood “AI.”

So,
I implore those who touch our technical world to use our terminology as accurately as possible and to at least avoid the worst and most technically obvious corruptions, else we will find that although we might speak the same language, no one will understand what anyone else is saying.

</Friday Rant>,
TSU

On 2018-05-11, tsu2 <tsu2@no-mx.forums.microfocus.com> wrote:
> For starters, AI (aka Artificial Intelligence) is not well defined;
> even people who work in the field don’t typically define it the same way.

As someone who researches in the field, let me clarify AI. Artificial intelligence isn’t. It might be artificial, but it
certainly is not intelligent. Deep learning networks might be getting bigger and better, with supra-human performance on
certain tasks, but at best they still remain classifiers or regressors.

There’s a huge difference between understanding the semantic concepts that underlie observations and fitting data using
sophisticated models. The first is intelligence, whereas the second might give the impression of intelligence
when in fact it only emulates intelligence for a specific task or set of tasks. For example, in recognition tasks, if
a deep network could be tangibly informed by a semantic explanation of the relevant concepts before seeing data, then that
would be intelligent learning.

But instead, deep networks learn by updating weight coefficients and bias offsets for components within the network
whose specific relationship to semantic concepts remains poorly defined. Deep networks only learn properly when exposed
to very large data sets. Any human who required such a large volume of data before exhibiting acceptable learning
behaviour would hardly be called `intelligent’.
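
To make that point concrete, here is a minimal sketch in plain NumPy (the data, dimensions and learning rate are invented purely for illustration, not taken from any particular system) of what such “learning” amounts to: repeatedly nudging weight coefficients and a bias offset until a classifier fits the labelled examples it has been shown.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": 2000 points in 20 dimensions, labelled by a rule
# the model is never told explicitly; it can only infer it from examples.
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# One "neuron": a weight vector w and a bias offset b, i.e. a plain
# logistic-regression classifier.
w = np.zeros(20)
b = 0.0
learning_rate = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Learning": nudge w and b down the gradient of the loss, over and over.
for epoch in range(200):
    p = sigmoid(X @ w + b)            # current predictions
    grad_w = X.T @ (p - y) / len(y)   # gradient w.r.t. the weights
    grad_b = np.mean(p - y)           # gradient w.r.t. the bias
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.3f}")

The fitted weights end up classifying the data well, yet they are just numbers whose relationship to any semantic concept is exactly as opaque as described above, and starved of examples the same procedure would learn very little.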

So why do we call it AI when it isn’t intelligent? Because it sounds cool, generates hype, excites the general public,
grabs the attention of lay media, and therefore attracts a lot of money.

Looking at the Wikipedia article on “Artificial intelligence”:

The field of AI research was born at a workshop at Dartmouth College in 1956.

I was first confronted with the concept of AI in the mid-1980s; it was presented as being a “mature” academic discipline then. But, even then, what seemed to be more reliable were the “expert systems”, such as the “MUD” application used by the oil industry.

In the 1990s, from within the telecommunications industry, we began to realise that we were (and still are) building the largest (intelligent) robot on planet Earth. Have you ever considered what really happens when you pick up a telephone and dial the number of someone physically located on the other side of the planet? What you almost certainly do not realise is that the speech channel set up between the two telephones is dynamic, very dynamic – the speech packets (traditionally, one every 20 ms) are routed dynamically. With mobile telephony it’s much worse – as the mobile telephone moves, everything changes and, these days, even when it’s not moving …
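
As an aside, a quick back-of-the-envelope sketch of the scale involved (the 20 ms frame interval comes from the paragraph above; the three-minute call length is a made-up assumption, used only for illustration):

# Back-of-the-envelope arithmetic for the paragraph above. The 20 ms speech
# frame interval is the figure mentioned there; the call duration is a
# made-up assumption, used only to illustrate the scale involved.
FRAME_INTERVAL_MS = 20    # one speech packet per direction every 20 ms
CALL_MINUTES = 3          # hypothetical call length

packets_per_second = 1000 // FRAME_INTERVAL_MS               # 50
packets_one_way = packets_per_second * CALL_MINUTES * 60     # 9000
packets_both_ways = 2 * packets_one_way                      # 18000

print(f"{packets_per_second} packets per second, each way")
print(f"about {packets_both_ways:,} packets in a {CALL_MINUTES}-minute call,")
print("each of which the network may route without any human involvement")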

Please realise that this all happens without any human intervention at all. Yes, humans influence the behaviour of the telephone network, but it’s only an influence …

If you were to (begin to) read the ITU-T standards and the 3GPP standards « there are thousands of the things », you would begin to gain a glimmer of what is behind this extremely large robot, but you still wouldn’t really know, because the standards-compliant algorithms are company secrets within the components which make up the (robotic) network.

A comment made about a single network component:

You do realise that, with this thing, we can switch every telephone conversation on the planet!

On 2018-05-15, dcurtisfra <dcurtisfra@no-mx.forums.microfocus.com> wrote:
>
> flymail;2865524 Wrote:
>>
>> So why do we call it AI when it isn’t intelligent? Because it sounds
>> cool, generates hype, excites the general public, grabs the attention of
>> lay media, and therefore attracts a lot of money.
>>
> Looking at the Wikipedia article on “Artificial intelligence”:
>> The field of AI research was born at a workshop at Dartmouth College in
>> 1956.
> I was first confronted with the concept of AI in the mid 1980’s; it was
> presented as being a “mature” academic discipline then.

Ahh… the authoritative academic resource that is Wikipedia, devoid of bias and nationalistic agendas. I’m sure the WW2
code-breakers at Bletchley Park might object to Wikipedia’s assertion.

AI is poorly defined because the `intelligence’ has little to do with what AI really does. The use of AI techniques
may result in astonishing levels of pattern recognition and performance, but there’s a difference between emulating
intelligence and creating intelligence. We’re very good at the first but not the second because we don’t really
understand the human basis of intelligence.

As the old saying goes:

  • Artificial Intelligence – a problem that is half solved. They have the “artificial” part of it working.

Same for “smart” phones. But the general public swallows it all. These are the new names for what was earlier called “magic”.

Here we even have “smart” energy meters, their smartness being that they can communicate over the internet using WiFi to tell your energy provider that you are on vacation and the house is empty.

… or report the same information to any would-be burglar who knows how to hack in. Hack tests have proven this, but they are ignored by the power companies and the regulators.

Exactly. Those companies always believe everything works along the lines of their all-singing, all-dancing scenarios. They choose to ignore the worst cases. Those cases that will inevitably happen.

On 2018-05-20, hcvv <hcvv@no-mx.forums.microfocus.com> wrote:
>
> nrickert;2865995 Wrote:
>> As the old saying goes:
>>
>>   • Artificial Intelligence – a problem that is half solved. They
>>   have the “artificial” part of it working.
>>
> Same for “smart” phones. But the general public swallows it all. These
> are the new names for what was earlier called “magic”.

Good point, Henk. I should call my next deep network `SmartNet’!

Ahh - it looks like Cisco has already trademarked it…

… no problem. Call yours EvenSmarterNet.

… then, I could call mine TheSmartestOfAllNet.

Good point, but I suspect that Alan Turing and his mates considered the work they were doing to be a mathematical puzzle, cracking the Enigma coding – they were definitely the first to decrypt coded messages on a large scale, in almost real time, without any knowledge of the key which was used to encrypt those messages …

Did you know that they had incredibly fast paper-tape readers?

Whether or not the Bletchley Park “Enigma” computer (with valves) was in fact a “Turing machine” is a good question; I personally don’t have an answer …

On 2018-05-22, dcurtisfra <dcurtisfra@no-mx.forums.microfocus.com> wrote:
> Whether or not the Bletchley Park “Enigma” computer (with valves) was
> in fact a “Turing machine” is a good question; I personally don’t have
> an answer …

I think the Enigma machine was invented by the Germans after WWI. However, I believe the British Bombe is a Turing
machine by definition, since it was designed during WWII by Alan Turing! Only later did we call such machines
computers…

Since Enigma was mentioned here, I would like to add that the cypher was cracked with a lot of help from Polish mathematicians (Marian Rejewski, Jerzy Różycki and Henryk Zygalski), something I rarely see mentioned in museums and such :slight_smile:

In the first Culture novel by Iain M. Banks (Consider Phlebas),

a transcendent civilisation that had departed the material plane left behind, as footholds in the material universe, certain “planets of the dead” which were its last vestiges, or physical presences, in that universe.

The novel takes place mostly on one such planet and the narrator remarks, or one of the protagonists does, that the builders had chosen to use trustworthy, old-school touch screen technology instead of interactive holograms.

Being by then ancient technology, touch screens were more reliable, robust, and no-nonsense compared to the ‘advances’ that came after.

I think many of the technological advances we see today are in fact deteriorations, because they solve, so to speak, luxury problems while creating new, real problems.