How can software recognize a chair?

Artificial intelligence and Bruno Munari's chair.

2020-08-24 11:49:46

I recently heard the following explanation: train a Deep Neural Network (DNN) by feeding it a large number of images of chairs and it will, magically (that is, without being able to explain why), become able to recognize a chair.

Some time ago I came across a definition (by a French researcher, I believe) that seems apt for this brute-force elaboration process: trivial heuristics.

The appeal of DNNs, and of Machine Learning (ML) algorithms in general, is very strong: a magic wand with which to solve most problems without the need to write code.

Neural networks are undoubtedly a useful mechanism to have in the IT toolbox. Lately, however, I see them used even when they are not necessary, out of proportion, and with results that are only apparently satisfactory: beyond the "magic" problem, input data that fall outside the training set often produce incorrect results. The usual remedy is to enlarge the sample as much as possible, and the energy consumption of the servers increases dramatically.

The procedure is more or less as follows: analyzers extract from the images of the training set a series of features that the DNN "learns", through a pseudo-statistical process, to associate with labels (signifiers), which it will then return as results when asked to perform recognition. But designers' imagination will always produce that new chair that the DNN will not be able to recognize.
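The procedure, and its limit, can be caricatured in a few lines. This is only a toy sketch: the feature vectors (leg count, has back, seat height) and the nearest-example classifier are my own illustrative stand-ins for what real image analyzers and a real DNN would do.

```python
# Toy sketch of description-based recognition: associate feature
# vectors with labels, then classify by the nearest training example.
# Features are hypothetical: (leg_count, has_back, seat_height_cm).
training_set = [
    ((4, 1, 45), "chair"),
    ((4, 1, 44), "chair"),
    ((5, 1, 46), "chair"),   # office chair on castors
    ((4, 0, 45), "stool"),
    ((4, 1, 75), "table"),
]

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def recognize(features):
    # Return the label of the closest training example.
    return min(training_set, key=lambda item: distance(features, item[0]))[1]

print(recognize((4, 1, 43)))  # close to the training chairs -> "chair"
print(recognize((4, 1, 72)))  # a bar chair, outside the training set -> "table"
```

The second call is the point: a perfectly ordinary bar chair, because its description is closest to the training table, comes back labeled "table".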

Let's go back to the initial question: how can software recognize a chair?

Let's start with two words: description and definition.
A description is specific (four legs, wheels ...); a definition is general.

As usual, we have to ask the right question, and in this case it is: what is the definition of the signifier "chair"?

In my second or third year of school, my drawing teacher, the painter Enrico Ragni (1910-2002), invited Bruno Munari (the Italian architect and designer, 1907-1998) to give us a lecture. It was a memorable experience: I learned that the possibility of use is hidden in the essence of things.

Bruno Munari's chair does not have one leg or four: that can be the description of the chair at the reception desk. Bruno Munari's chair is an object on which I can comfortably rest my ass at a certain distance from the ground and stretch my back; in other words, its definition is (in summary, and above all) the use I can make of it.
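By contrast with the descriptive approach, Munari's definition could be sketched as a predicate over uses rather than shapes. The property names and the height range below are my own illustrative assumptions, not a real ontology:

```python
# Toy sketch of definition-by-use: an object is a chair if it affords
# sitting, regardless of how it is shaped or how many legs it has.
def is_chair_by_use(obj):
    return (
        obj.get("supports_person", False)             # holds a seated person
        and obj.get("has_sittable_surface", False)    # somewhere to rest on
        and 30 <= obj.get("seat_height_cm", 0) <= 60  # usable distance from the ground
    )

reception_chair = {"supports_person": True, "has_sittable_surface": True,
                   "seat_height_cm": 45, "legs": 4}
munari_seat = {"supports_person": True, "has_sittable_surface": True,
               "seat_height_cm": 40, "legs": 0}   # no legs at all
table = {"supports_person": True, "has_sittable_surface": True,
         "seat_height_cm": 75}

print(is_chair_by_use(reception_chair))  # True
print(is_chair_by_use(munari_seat))      # True: the novel design still qualifies
print(is_chair_by_use(table))            # False: too high to sit on comfortably
```

Note that the legless seat passes without ever having been "seen" before: the predicate asks what the object is for, not what it looks like.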

A DNN can give us an answer if the description of our chair is somehow included in the training data set, but we cannot be sure that it will recognize Bruno Munari's chair. So things are not simple.

Right now, research is largely geared towards improving DNN algorithms. That is fine, and certainly useful, but to attack the recognition problem starting from the (general) definition one must venture into other fields. It is for this purpose that the programming language POWER-KI was born: it combines the ability to hook in DNN, ML, etc. code (including code created ad hoc) with a reflective environment in which to develop symbolic elaboration.
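I cannot show POWER-KI itself here, but the hybrid idea, a learned descriptive classifier backed by a symbolic definition-by-use check, can be sketched in plain Python. Everything below is an illustrative assumption: the mock classifier, the confidence values, and the property names stand in for real components.

```python
# Hybrid sketch: consult a (mock) learned classifier first; when its
# confidence is low, fall back to a symbolic definition-by-use rule.
def dnn_classify(obj):
    # Stand-in for a trained network: only recognizes four-legged chairs.
    if obj.get("legs") == 4 and obj.get("seat_height_cm", 0) < 60:
        return "chair", 0.95
    return "unknown", 0.20

def is_chair_by_use(obj):
    # Symbolic rule: a chair is whatever I can comfortably sit on.
    return (obj.get("supports_person", False)
            and 30 <= obj.get("seat_height_cm", 0) <= 60)

def recognize(obj, threshold=0.5):
    label, confidence = dnn_classify(obj)
    if confidence >= threshold:
        return label
    # The descriptive model is unsure: reason from the definition instead.
    return "chair" if is_chair_by_use(obj) else "unknown"

munari_seat = {"supports_person": True, "seat_height_cm": 40, "legs": 0}
print(recognize(munari_seat))  # "chair", via the symbolic fallback
```

The design choice is the interesting part: the statistical component handles the cases it was trained for, and the symbolic layer catches the new chair that the training set never contained.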

We have talked about images, but the concepts also apply in other areas.
The topic is extraordinarily broad and cannot be exhausted in one page. This, however, is the road, and the tool to travel it exists.

CAP 2020
