Transcription & Deep Learning

Humans looking for reassurance against the encroachment of artificial brains should try YouTube subtitles: whatever Google’s track record in natural language processing, the way its automated scribe writes down what is said in videos is essentially useless.

A blank sheet of paper was copied on a Xerox machine.
This copy was used to make a second copy.
The second to make a third one, and so on…
Each copy as it came out of the machine was re-used to make the next.
This was continued for one hundred times, producing a book of one hundred pages. (Ian Burn)

Experience points directly to the probable cause of failure: the usefulness of real-time transcription is not a linear function of accuracy, because every slip can be fatal, with no backup or second chance. It’s like walking a tightrope: for all practical purposes, a single misunderstanding can sever the thread of understanding, with no chance of retrieval or reprieve.
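To make the non-linearity concrete, here is a minimal back-of-the-envelope sketch (not from the original post), assuming for illustration that each word is transcribed correctly with some independent probability p: the chance of an unbroken n-word passage is then p^n, which collapses geometrically rather than linearly as accuracy drops.

```python
def unbroken_passage_probability(p: float, n: int) -> float:
    """Probability that an n-word passage contains no slip, assuming
    each word is transcribed correctly with independent probability p
    (an illustrative simplification, not the post's own model)."""
    return p ** n

# Even high per-word accuracy leaves little chance of a clean passage.
for p in (0.99, 0.95, 0.90):
    print(f"per-word accuracy {p:.0%}: "
          f"P(clean 100-word passage) = {unbroken_passage_probability(p, 100):.2%}")
```

Under this toy model, 95% per-word accuracy yields a clean 100-word passage less than 1% of the time, which is why near-perfect word accuracy can still produce near-useless subtitles.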

Unlike Turing machines, listeners have no finite set of states; and unlike the sequences of symbols on tapes, tales are told by weaving together semantic threads. It follows that stories are works in progress: readers can pause to review and consolidate meanings, but listeners have no choice but to punt on whatever comes to mind, hoping that the fabric of the story will carry them through.

So, whereas automated scribes can deep-learn from written texts and recorded conversations, there is no way to do the same from what listeners understand. That’s the beauty of storytelling: words may be written down, but meanings are renewed each time the words are heard.
