The Brain Is Not A Computer - Does Not Process/store Information, Memories, Knowledge

haidut

Member
Forum Supporter
Joined
Mar 18, 2013
Messages
19,798
Location
USA / Europe
A really great article that not only makes a major argument against the success of AI as implemented in digital computers, but also calls into question the mainstream approach to neuroscience. Even neuroscience currently views the brain as little more than "wetware" - a biological CPU implemented in cells instead of microchips. Well, as the article aptly explains, the computer analogy is convenient but extremely misleading, and has steered neuroscience (and possibly many other branches of medicine) in a very wrong direction.

Rather than being an eternally unchanging and ever-growing collection of data (our life experiences, thoughts, knowledge, etc.), the evidence suggests that our brain (and thus consciousness) is constantly changing and malleable, and that those changes occur in response to environmental stimuli. So, every experience changes us in a unique and irreproducible (and irreducible) way, so that what we think of as memories are actually the changes in brain structure consistent with the experience we associate the "memory" with. How exactly are we able to utilize these brain changes in a way that allows us to "recall" an event or play an instrument we learned to play a long time ago? Nobody currently knows, at least nobody officially employed as a neuroscientist. My guess would be that the brain structural changes that occur as part of our experience of being alive modulate the electron flow that we call consciousness. And because brain structural changes are ongoing, remembering the same event over and over again results in a constantly modified "memory" of the event, to the point of confabulation if the event occurred sufficiently long ago. We don't remember, we re-live and re-create.

Perhaps this is one reason Peat said that knowledge is overrated, and Einstein said that imagination (the creative spirit in Blake's writings) is more important than knowledge. It is the ability to change rapidly, which depends on metabolism, that counts - not the unique structural change itself, nor how robustly it is retained.

Your brain does not process information and it is not a computer | Aeon Essays
"...No matter how hard they try, brain scientists and cognitive psychologists will never find a copy of Beethoven’s 5th Symphony in the brain – or copies of words, pictures, grammatical rules or any other kinds of environmental stimuli. The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’. Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer."

"...Senses, reflexes and learning mechanisms – this is what we start with, and it is quite a lot, when you think about it. If we lacked any of these capabilities at birth, we would probably have trouble surviving. But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever. We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

"...Computers, quite literally, process information – numbers, letters, words, formulas, images. The information first has to be encoded into a format computers can use, which means patterns of ones and zeroes (‘bits’) organised into small chunks (‘bytes’). On my computer, each byte contains 8 bits, and a certain pattern of those bits stands for the letter d, another for the letter o, and another for the letter g. Side by side, those three bytes form the word dog. One single image – say, the photograph of my cat Henry on my desktop – is represented by a very specific pattern of a million of these bytes (‘one megabyte’), surrounded by some special characters that tell the computer to expect an image, not a word. Computers, quite literally, move these patterns from place to place in different physical storage areas etched into electronic components. Sometimes they also copy the patterns, and sometimes they transform them in various ways – say, when we are correcting errors in a manuscript or when we are touching up a photograph. The rules computers follow for moving, copying and operating on these arrays of data are also stored inside the computer. Together, a set of rules is called a ‘program’ or an ‘algorithm’. A group of algorithms that work together to help us do something (like buy stocks or find a date online) is called an ‘application’ – what most people now call an ‘app’."

"...Forgive me for this introduction to computing, but I need to be clear: computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms. Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?"

"...Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software. The landmark event that launched what is now broadly called ‘cognitive science’ was the publication of Language and Communication(1951) by the psychologist George Miller. Miller proposed that the mental world could be studied rigorously using concepts from information theory, computation and linguistics."

"...This kind of thinking was taken to its ultimate expression in the short book The Computer and the Brain (1958), in which the mathematician John von Neumann stated flatly that the function of the human nervous system is ‘prima facie digital’. Although he acknowledged that little was actually known about the role the brain played in human reasoning and memory, he drew parallel after parallel between the components of the computing machines of the day and the components of the human brain."

"...The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences. There is virtually no form of discourse about intelligent human behaviour that proceeds without employing this metaphor, just as no form of discourse about intelligent human behaviour could proceed in certain eras and cultures without reference to a spirit or deity. The validity of the IP metaphor in today’s world is generally assumed without question."

"...The faulty logic of the IP metaphor is easy enough to state. It is based on a faulty syllogism – one with two reasonable premises and a faulty conclusion. Reasonable premise #1: all computers are capable of behaving intelligently. Reasonable premise #2: all computers are information processors. Faulty conclusion: all entities that are capable of behaving intelligently are information processors. Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, some day, the IP metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors to be silly."

"...What is the problem? Don’t we have a ‘representation’ of the dollar bill ‘stored’ in a ‘memory register’ in our brains? Can’t we just ‘retrieve’ it and use it to make our drawing? Obviously not, and a thousand years of neuroscience will never locate a representation of a dollar bill stored inside the human brain for the simple reason that it is not there to be found."

"...The idea, advanced by several scientists, that specific memories are somehow stored in individual neurons is preposterous; if anything, that assertion just pushes the problem of memory to an even more challenging level: how and where, after all, is the memory stored in the cell? So what is occurring when Jinny draws the dollar bill in its absence? If Jinny had never seen a dollar bill before, her first drawing would probably have not resembled the second drawing at all. Having seen dollar bills before, she was changed in some way. Specifically, her brain was changed in a way that allowed her to visualise a dollar bill – that is, to re-experience seeing a dollar bill, at least to some extent. The difference between the two diagrams reminds us that visualising something (that is, seeing something in its absence) is far less accurate than seeing something in its presence. This is why we’re much better at recognising than recalling. When we re-membersomething (from the Latin re, ‘again’, and memorari, ‘be mindful of’), we have to try to relive an experience; but when we recognise something, we must merely be conscious of the fact that we have had this perceptual experience before. "

"...As we navigate through the world, we are changed by a variety of experiences. Of special note are experiences of three types: (1) we observe what is happening around us (other people behaving, sounds of music, instructions directed at us, words on pages, images on screens); (2) we are exposed to the pairing of unimportant stimuli (such as sirens) with important stimuli (such as the appearance of police cars); (3) we are punished or rewarded for behaving in certain ways. We become more effective in our lives if we change in ways that are consistent with these experiences – if we can now recite a poem or sing a song, if we are able to follow the instructions we are given, if we respond to the unimportant stimuli more like we do to the important stimuli, if we refrain from behaving in ways that were punished, if we behave more frequently in ways that were rewarded. Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary."

"...A few cognitive scientists – notably Anthony Chemero of the University of Cincinnati, the author of Radical Embodied Cognitive Science (2009) – now completely reject the view that the human brain works like a computer. The mainstream view is that we, like computers, make sense of the world by performing computations on mental representations of it, but Chemero and others describe another way of understanding intelligent behaviour – as a direct interaction between organisms and their world.

"...One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal. This concept drove the plot of the dystopian movie Transcendence (2014) starring Johnny Depp as the Kurzweil-like scientist whose mind was downloaded to the internet – with disastrous results for humanity. Fortunately, because the IP metaphor is not even slightly valid, we will never have to worry about a human mind going amok in cyberspace; alas, we will also never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing."

"...Because neither ‘memory banks’ nor ‘representations’ of stimuli exist in the brain, and because all that is required for us to function in the world is for the brain to change in an orderly way as a result of our experiences, there is no reason to believe that any two of us are changed the same way by the same experience. If you and I attend the same concert, the changes that occur in my brain when I listen to Beethoven’s 5th will almost certainly be completely different from the changes that occur in your brain. Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences. This is why, as Sir Frederic Bartlett demonstrated in his book Remembering (1932), no two people will repeat a story they have heard the same way and why, over time, their recitations of the story will diverge more and more. No ‘copy’ of the story is ever made; rather, each individual, upon hearing the story, changes to some extent – enough so that when asked about the story later (in some cases, days, months or even years after Bartlett first read them the story) – they can re-experience hearing the story to some extent, although not very well (see the first drawing of the dollar bill, above).

"...This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination. For any given experience, orderly change could involve a thousand neurons, a million neurons or even the entire brain, with the pattern of change different in every brain."

"...Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive. There is no on-off switch. Either the brain keeps functioning, or we disappear. What’s more, as the neurobiologist Steven Rose pointed out in The Future of the Brain (2005), a snapshot of the brain’s current state might also be meaningless unless we knew the entire life history of that brain’s owner – perhaps even about the social context in which he or she was raised. Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic. (In a recent op-ed in The New York Times, the neuroscientist Kenneth Miller suggested it will take ‘centuries’ just to figure out basic neuronal connectivity.) Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept. The most blatant instance of neuroscience gone awry, documented recently in a report in Scientific American, concerns the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down."

"...We are organisms, not computers. Get over it. Let’s get on with the business of trying to understand ourselves, but without being encumbered by unnecessary intellectual baggage. The IP metaphor has had a half-century run, producing few, if any, insights along the way. The time has come to hit the DELETE key."
 

Soren

Member
Forum Supporter
Joined
Apr 5, 2016
Messages
1,648
Fascinating. I have always believed that we are a very long way from creating a true AI (aka consciousness) from computers, and that it may be virtually impossible to do so via mechanical means (i.e., computers and coding). Possibly we will create a sufficiently intelligent system to create the illusion of consciousness, but it will not in fact be consciousness.

One reason is that there are certain mathematical problems that are outside the realm of computation and seemingly can only be solved by conscious individuals.
Roger Penrose, an Oxford mathematician, talks about this and gives a pretty good interview on his views on consciousness and why he believes that consciousness cannot be created via computer processes.

 
Joined
Oct 8, 2016
Messages
464
Location
Colorado, USA
Creating false analogies between biology and the technological fashion of the day is a recurring theme of Western thought.

In the days of the steam engine, contemporary explanations for all sorts of things were based on pressure. Even in antiquity, Aristotle believed that the head was simply full of steam created by the heat of the heart, much like in an ancient spa.

Jaron Lanier discusses this, so does Nicholas G. Carr.
 
OP
haidut

haidut

Member
Forum Supporter
Joined
Mar 18, 2013
Messages
19,798
Location
USA / Europe
Fascinating. I have always believed that we are a very long way from creating a true AI (aka consciousness) from computers, and that it may be virtually impossible to do so via mechanical means (i.e., computers and coding). Possibly we will create a sufficiently intelligent system to create the illusion of consciousness, but it will not in fact be consciousness.

One reason is that there are certain mathematical problems that are outside the realm of computation and seemingly can only be solved by conscious individuals.
Roger Penrose, an Oxford mathematician, talks about this and gives a pretty good interview on his views on consciousness and why he believes that consciousness cannot be created via computer processes.



There is actually a proof that computers, and ANY other machine that combines deterministic with random behavior (or relies on either one by itself), can never increase mutual information (knowledge). Such machines can only manipulate information. But humans can increase mutual information (knowledge), implying there is something special about human operation that digital computers cannot reproduce.
Science Is Stagnant, No Real Progress Since Early 20th Century
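The closest standard result in information theory is the data processing inequality: no deterministic or random post-processing of a signal can increase the information it carries about the source. A minimal statement, for reference:

```latex
% Data processing inequality: if X -> Y -> Z form a Markov chain
% (Z is produced from Y alone, by any deterministic or randomized process),
% then no amount of processing can raise the information Z carries about X:
I(X;Z) \le I(X;Y)
```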
 

BRMarshall

Member
Joined
Sep 12, 2018
Messages
237
Thanks for the start of another most important topic of interest
~the segue into a thread....or is it a microtubule? ~~~~~

For those interested in this subject matter, I highly recommend the work of Stephen E. Robbins, PhD, who has written several books on these subjects. It is Robbins who is rehabilitating the work of the philosopher Henri Bergson, specifically Bergson's understanding that consciousness is 'holographic', arrived at years before the hologram was even invented. Robbins' work is also a welcome rebuff to the mad rush to Artificial Intelligence, dissecting the nonsense theories that make up the quest for a golem.

I first came across Robbins' work by viewing the chapters in his YouTube series on Bergson's holographic theory that deal with the work of Tesla.

Here is the first video in that series:



There are 38 segments in this video series, PowerPoint presentations of at least an hour apiece, with the ones on Tesla being #19 and #22.

His website below has a number of papers and articles for download, some of which take up questions concerning the evolution of domestic plants and animals, whether Troy was in England, 10,000 BC, the dating of the dinosaurs, etc.

Bergson Holographic
 

Regina

Member
Joined
Aug 17, 2016
Messages
6,511
Location
Chicago
Fascinating. I have always believed that we are a very long way from creating a true AI (aka consciousness) from computers, and that it may be virtually impossible to do so via mechanical means (i.e., computers and coding). Possibly we will create a sufficiently intelligent system to create the illusion of consciousness, but it will not in fact be consciousness.

One reason is that there are certain mathematical problems that are outside the realm of computation and seemingly can only be solved by conscious individuals.
Roger Penrose, an Oxford mathematician, talks about this and gives a pretty good interview on his views on consciousness and why he believes that consciousness cannot be created via computer processes.


Great interview! Great topic!
 

DrJ

Member
Joined
Jun 16, 2015
Messages
721
An interesting thing I've noticed in AI relating to Peat's idea on structure and energy being interrelated is this:

A currently popular approach to AI is 'deep' neural networks, or convolutional neural networks. They are sort of like a glorified correlation engine. Although they vary in size, the underlying structure is always the same - weights assigned to nodes, with the connections between nodes pre-specified. The structure is fixed, and the description of correlation relies on that fixed model. They can recognize patterns, but they can't 'reason' as we think of it.
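A minimal sketch of that point in Python (a hypothetical two-layer network; training would only adjust the numbers, never the wiring):

```python
import numpy as np

rng = np.random.default_rng(0)

# The architecture is fixed up front: layer sizes and connectivity never change.
W1 = rng.normal(size=(4, 3))   # weights, input (3) -> hidden (4)
W2 = rng.normal(size=(1, 4))   # weights, hidden (4) -> output (1)

def forward(x):
    h = np.tanh(W1 @ x)        # fixed wiring, fixed nonlinearity
    return W2 @ h

x = np.array([0.5, -1.0, 2.0])
print(forward(x))
# Training only nudges the values in W1 and W2; the structure
# (which node connects to which) is specified before learning starts.
```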

There are other AI approaches, and I think one of the most interesting is described in the book by Judea Pearl called Causality, in which he spends a lot of time discussing how you actually tell how things are causally related, which is substantially different from correlation. He provides a way to do this - an algorithm to figure out the causal model, if you will. Each time the model is different, and even for simple systems it can be quite challenging to arrive at the accurate causal model. The structure is changing and contextual. What matters is the set of relationships between nodes in the model, which has no fixed description. But the human mind can perceive it and deduce it, so it must be able to 'model the model' itself. I suspect it is the more promising approach, even if more difficult.
 
OP
haidut

haidut

Member
Forum Supporter
Joined
Mar 18, 2013
Messages
19,798
Location
USA / Europe
An interesting thing I've noticed in AI relating to Peat's idea on structure and energy being interrelated is this:

A currently popular approach to AI is 'deep' neural networks, or convolutional neural networks. They are sort of like a glorified correlation engine. Although they vary in size, the underlying structure is always the same - weights assigned to nodes, with the connections between nodes pre-specified. The structure is fixed, and the description of correlation relies on that fixed model. They can recognize patterns, but they can't 'reason' as we think of it.

There are other AI approaches, and I think one of the most interesting is described in the book by Judea Pearl called Causality, in which he spends a lot of time discussing how you actually tell how things are causally related, which is substantially different from correlation. He provides a way to do this - an algorithm to figure out the causal model, if you will. Each time the model is different, and even for simple systems it can be quite challenging to arrive at the accurate causal model. The structure is changing and contextual. What matters is the set of relationships between nodes in the model, which has no fixed description. But the human mind can perceive it and deduce it, so it must be able to 'model the model' itself. I suspect it is the more promising approach, even if more difficult.

I think you hit the nail on the head. Any algorithm that models the world based on correlations only will probably not achieve much general intelligence as it would only "act" if the correlation is strong enough. In the real world, there are often things that are causally related and we can easily observe that and act upon that realization but the correlation between them is low or even zero.
Causation without Correlation is Possible
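A quick simulation of that point, sketched in Python (a hypothetical system where X fully causes Y, yet the linear correlation is approximately zero):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=100_000)   # symmetric cause
y = x ** 2                     # y is completely determined by x

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson correlation: {r:+.4f}")  # ~0.00, despite full causation
```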
 

Literally

Member
Joined
Aug 3, 2018
Messages
300
Interesting article, but the author does not seem very familiar with neural networks, as he tries to draw a stark distinction between computer and human memory-as-processing without apparently realizing there are many computer models of memory that are quite like what he describes as the distinctive characteristics of humans. The author also doesn't seem to be aware of the basic concepts of computability theory, including Turing completeness. So this is necessarily a straw man attack, because the author doesn't appear to know or care enough to give machine functionalism a fair shake.

I suppose the brain will ultimately be shown to be a kind of computer, although possibly a quantum computer. I also believe with @Soren that we are a long way off from anything that could legitimately be called AI. Long enough that I find all the transhumanist hype these days especially suspicious... it's a modern religion.

I guess I should mention I am a computer scientist?

BTW, this seems to directly contradict the author's claims: Invariant visual representation by single neurons in the human brain. It should not ultimately be taken as evidence for the "grandmother cell" hypothesis - they aren't claiming the sort of neurons they found are unique in a brain, and in fact the prevailing consensus is for sparse/distributed memory.
 
OP
haidut

haidut

Member
Forum Supporter
Joined
Mar 18, 2013
Messages
19,798
Location
USA / Europe
Interesting article, but the author does not seem very familiar with neural networks, as he tries to draw a stark distinction between computer and human memory-as-processing without apparently realizing there are many computer models of memory that are quite like what he describes as the distinctive characteristics of humans. The author also doesn't seem to be aware of the basic concepts of computability theory, including Turing completeness. So this is necessarily a straw man attack, because the author doesn't appear to know or care enough to give machine functionalism a fair shake.

I suppose the brain will ultimately be shown to be a kind of computer, although possibly a quantum computer. I also believe with @Soren that we are a long way off from anything that could legitimately be called AI. Long enough that I find all the transhumanist hype these days especially suspicious... it's a modern religion.

I guess I should mention I am a computer scientist?

BTW, this seems to directly contradict the author's claims: Invariant visual representation by single neurons in the human brain. It should not ultimately be taken as evidence for the "grandmother cell" hypothesis - they aren't claiming the sort of neurons they found are unique in a brain, and in fact the prevailing consensus is for sparse/distributed memory.

Look at one of my comments further below. No actual computer can increase/create mutual information (knowledge).
The Brain Is Not A Computer - Does Not Process/store Information, Memories, Knowledge

The human brain is a matter/energy processing device, not an information one. I will post the study below as a separate thread but here it is in case you feel like reading the full paper.
Consciousness as a Physical Process Caused by the Organization of Energy in the Brain

Matter/energy > information, as in information is always secondary to matter. As such, simply processing information can never result in actual consciousness, and can only count as intelligence in some very specialized tasks in a sufficiently structured and unchanging environment. The fact that there are claims right now of "general AI" being within reach is laughable. And as for practical quantum computing (which probably still won't give general AI), see the links below.
https://scottlocklin.wordpress.com/2019/01/15/quantum-computing-as-a-field-is-obvious-bull****/
The Case Against Quantum Computing
 

Literally

Member
Joined
Aug 3, 2018
Messages
300
@haidut, you are playing fast and loose with math that you do not seem to understand. None of the claims you are making are even remotely supported by Levin's work, as far as I can tell.

>> Matter/energy > information, as in information is always secondary to matter.

If this is the case, why, for example, do you think that the equation for Boltzmann entropy (core thermodynamics) is *identical* to the equation for Shannon entropy? Modern physics is pointing in the direction that information theory is a better/coequal metaphor, not a subordinate process.
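For readers who want the two equations side by side (standard textbook forms; they agree up to a constant and the base of the logarithm):

```latex
S = -k_B \sum_i p_i \ln p_i     % Gibbs/Boltzmann entropy (thermodynamics)
H = -\sum_i p_i \log_2 p_i      % Shannon entropy (information theory)
% For W equiprobable microstates (p_i = 1/W), the first reduces to
% Boltzmann's S = k_B \ln W.
```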

>> matter/energy processing device, not an information one

Computers are also matter/energy processing devices, and it is trivial to show humans can act as information processing devices. This is a kind of Cartesian axiom, in any case, not any real distinction.

>> Look at one of my comments further below. No actual computer can increase/create mutual information (knowledge).
>> The Brain Is Not A Computer - Does Not Process/store Information, Memories, Knowledge

This kind of statement seems like it can only result from taking some very technical statements, with formal definitions, and applying them informally in a "fast and loose" manner. This is probably why Levin himself did not make these sorts of wild claims based on his idea of independence conservation. It would appear to be a huge and unsubstantiated leap, to the point that I wonder whether the person who wrote the last article is incompetent or malicious.

Let's engage with this "Law" of independence conservation for mutual information. Rather than quote Levin directly, I am going to quote an academic who has attempted to formalize and organize his actual argument more clearly. Take a breath, because this is some wild math.

"Peano arithmetic has no computable consistent completion - this is the content of Godel’s first incompleteness theorem. It was probably the most surprising result that has been found in the context of Hilbert’s program, which calls for a formalization and axiomatization of all of mathematics, together with a “finitary” consistency proof for this axiomatization. It is important to see however, that G̈odel’s result entails no assertion regarding the general realizability of consistent completions of Peano arithmetic. By basic results of mathematical logic, for every consistent axiomatic system there is a consistent completion. G̈odel’s result only entails the assertion that the consistent completion of Peano arithmetic is impossible by effectively calculable methods- if we accept the Church-Turing thesis.

"Levin in his paper “Forbidden Information” argues that we can significantly expand this assertion to other than effectively calculable methods. His argumentation can be outlined as follows.

"Given any sequence, computing a consistent completion of Peano arithmetic relative to this sequence is equivalent to computing a total extension of the universal partial computable predicate u relative to this sequence. Due to what we will call the forbidden information theorem, every sequence that computes a total extension of u has infinite mutual information with the halting probability Ω. If we accept Levin’s independence postulate, then no sequence generated by any locatable physical process may have infinite mutual information with Ω. So we can conclude the following extension of Godel’s incompleteness assertion, which we will call the forbidden information thesis: no sequence that is generated by any locatable physical process is a consistent completion of Peano arithmetic.

"Levin’s exposition of the argumentation we just outlined is, however, rather sketchyand difficult to follow in some parts. Moreover, he tends to implicitly use resultswithout explicitly mentioning them or indicating where they were proved. The main objective of the present work is to completely and critically elaborate Levin’s argumentation."

So Gödel's famous proof, cited at the beginning, was that there are unprovable theorems in any formal system powerful enough to do arithmetic. Written before modern computers or calculators were common, this was very much meant to apply to human beings manipulating symbols in any way. I have studied the theorem extensively. Now, what is this Church-Turing thesis, on which Gödel's result is said to rest, i.e. where he is saying there is no way to compute certain things? Wikipedia says, "It states that a function on the natural numbers is computable by a human being following an algorithm, ignoring resource limitations, if and only if it is computable by a Turing machine." Algorithms would include any known general proof technique, and generally anything that has been formalized in math. A way of thinking about this, in very loose language, is that anything/anyone capable of a certain basic level of computation can compute anything computable.

Next comes a key sentence -- "Levin in his paper “Forbidden Information” argues that we can significantly expand this assertion to other than effectively calculable methods. His argumentation can be outlined as follows."

That is, Levin's goal was to BROADEN the claim. Not only can computer-y things not do it, Levin will argue... it goes beyond that.

Next up -- ""Given any sequence, computing a consistent completion of Peano arithmetic relative to this sequence is equivalent to computing a total extension of the universal partial computable predicate u [...] every sequence that computes a total extension of u has infinite mutual information with the halting probability Ω." Whu? This is too hard to break down briefly but it means doing what Godel proved you can't do would additionally lead to a "halting problem" situation. It is a proof that shows there is no *general* way to prove that a computer-y thing running a program/solving a theorem is going to finish. In this case, it means, if you have a supposed method that purports to calculate halting information, it's bunk. (The why is interesting... the proof uses a paradox, showing that if you could do this, then you actually couldn't. Definitely worth a look if you are interested in this stuff. Next up, we're told that IF we accept Levin's independence postulate -- the supposed basis on which humans can think things that computers can't, remember: "no sequence that is generated by any locatable physical process is a consistent completion of Peano arithmetic."

In other words, Levin concluded that beyond computer-y processes, nothing physical could transcend the Godel limit on computation. Or reduced very simply, even going beyond the computational methods of computers, nothing/no one can produce these kinds of answers that have eluded computational formal systems.
In this light, can you see why Howell's interpretation might be a bit of a reach?

Let's look at the independence *postulate* -- not law -- itself. No mathematical proof or argument is given, i.e. we're doing metamathematical guessing here... that's okay, but it is very important context.

"In this section we want to discuss Levin’s independence postulate. It is a non-mathematical statement which we will need to argue for the non-mathematical forbidden information thesis." [...]

Ready? Here is the postulate itself -- which this reviewer ultimately concludes is unsupportable, by the way:

Thesis 3.26 (Independence postulate). Let α be an infinite sequence that is definable in N [the natural numbers]. Then for an infinite sequence β that is generated by any locatable physical process, we have Î(α:β) < ∞.

So all this independence stuff? Yeah, it's frigging based on MATH OF INFINITE SEQUENCES.

Shall we get into the actual conservation laws proposed based on this framework? How about I spare us? The important point is that you can't take computer science and math words that vaguely sound like they support your ideas, pull them out of a formal context -- esp. one that is quite aligned with machine functionalism -- and use them to make sweeping claims about what computers can't do, or what humans can do, or anything else. Howell jumps in and out of different metaphors for information without much care, but that is completely invalid. If there are conservation laws for mutual information, they only say something about the mathematical system (FORMAL system, remember Gödel?) of mutual information, not about the larger, big ideas that this guy wants to apply them to.

Here is my source http://pgeiger.org/dl/publications/geiger2012mutual.pdf -- I also skimmed a few of Levin's papers, I can point you to them if you want.
 

Literally

Member
Joined
Aug 3, 2018
Messages
300
By the way, I had not run into mutual information much, but am finding it interesting. From what I can tell at first blush, even within the context of mutual information theory, Howell's general application of the concept is a butchery. For example, here is a "101" example of when mutual information increases -- without any special context around infinite sequences or advanced metalogics (a worked version is sketched below the link):

Good examples of when conditioning decreases/increases mutual information
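A concrete instance, sketched in Python: two independent fair bits have zero mutual information, but conditioning on their XOR raises it to a full bit (a standard textbook example, computed here by brute force):

```python
from itertools import product
from math import log2

# Joint distribution of two independent fair bits X, Y and Z = X XOR Y.
points = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]
p = 1 / len(points)  # each (x, y) pair equally likely

def H(indices):
    """Entropy (in bits) of the marginal over the given coordinates."""
    marg = {}
    for pt in points:
        key = tuple(pt[i] for i in indices)
        marg[key] = marg.get(key, 0) + p
    return -sum(q * log2(q) for q in marg.values())

I_xy = H([0]) + H([1]) - H([0, 1])                            # I(X;Y)
I_xy_given_z = H([0, 2]) + H([1, 2]) - H([0, 1, 2]) - H([2])  # I(X;Y|Z)
print(I_xy, I_xy_given_z)  # 0.0 and 1.0 bits
```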

So running around saying things like "No actual computer can increase/create mutual information (knowledge)" is pretty much mathematical quackery, as far as I can tell. What I found super interesting was that there are also many parallels between entropy in physics and mutual information theory, and this kind of fallacy is similar to one that I have seen Young Earth Creationists use to argue that evolution is not possible, based on the laws of thermodynamics.
 

yerrag

Member
Joined
Mar 29, 2016
Messages
10,883
Location
Manila
This is a very interesting subject. It has me thinking about whether self-driving cars will actually be in our future. Also makes me wonder about how human intellect can be made to outsmart trading systems built with validated algorithms running on brute computational power. On the other hand, seeing how Deep Blue has made great progress and easily beat grandmasters at chess makes me wonder if the great strides in AI can be matched by making progress in the human intellect.

A lot of research and funding has gone into AI. There could be a future where drivers are no longer needed, and this is one displacement of human labor that's to be feared. But I don't even know if there is really much reason for this fear. It was in the news that Chinese hackers were able to fool Tesla's Autopilot by preying on its inability to perceive like humans. Tesla could reprogram Autopilot or add more intelligence to its sensors, and it would be able to deal with this exploit. But the real world will always challenge the ability of Autopilot, and each new lesson will come with some mishap involving the loss of life or body parts. The real world isn't predictable, and Autopilot doesn't have the innate intelligence to react appropriately as human brains do. It may have better reaction times, but it doesn't have the fine abstraction needed to make one-off decisions. This simply means to me that the future will not be about humans being replaced, but about humans being assisted by AI, or the other way around.

This may or may not be a good thing. Humans may evolve to lose their ability to do AI-capable tasks, such as the ability to add, subtract, multiply, and divide, and will become more specialized toward making decisions. This would be the machine-caused devolution/specialization of our brain, which already began with calculators and has been made worse by schools that provide students with crutches in the form of tablets and computers in the early years of education. There is now even software such as Grammarly to teach people to form better sentences.

This makes me wonder whether as much effort is being put into developing our intellect as into displacing it with machines. Educational systems are programmed to make our minds captive to accepting establishment theories and narratives, and we're not given the training to think critically. We're fed a continuous stream of Netflix entertainment, where it's easy to fall into the habit of binge-watching. It's another opium for the masses. I look at teachers nowadays, and my impression is that they don't measure up to the abilities of the teachers of the past.

On the topic of developing the mind further, aren't we so inundated now with media of all sorts, as well as RF radiation, that it takes us away from the quiet time the mind needs to rest and recharge as part of its development? And aren't we being restricted from some substances that allow our minds to expand their perception? Is it any wonder that machines are coming to be more human-like, as humans more and more turn into machines?
 

Literally

Member
Joined
Aug 3, 2018
Messages
300
Interesting comments, @yerrag. I think the main thing we need to worry about with current "AI" systems is that TPTB will use them even though they don't work super well on a lot of the use cases people envision for them. They work well enough for a sort of hellish bureaucracy where there is no one to help you with the inevitable problems of the system, though. The obsession with putting millions of drivers out of work breaks my heart. Tech is great, but I think we need to question why it's being pushed on us so, so fast... especially when any tech user knows almost nothing is well built.

In related news, Google’s brand-new AI ethics board is already falling apart
 

ilikecats

Member
Joined
Jan 26, 2016
Messages
633
@Literally I don't know much at all about AI, but what do you think when people like Elon Musk give these doomsday-type predictions for the future of AI? Elon did an interview with Joe Rogan and he was like, "I tried to warn them. I tried to convince people to slow down AI development... this was futile." lol, just doesn't pass the smell test for me.

 

Literally

Member
Joined
Aug 3, 2018
Messages
300
I think most interesting issues are not so easy to decide. If it were easy to substantiate any of the positions that have been laid out in this thread, for example, the argument wouldn't be so interesting. All this stuff could go either way.

My own position is nuanced, because on the one hand I do suspect brains are information processing machines, in a formal sense. That is, there is nothing "magic" that brains do that only brains can do - why would the rules of the universe permit a certain kind of physics inside a person's squishy head but prohibit it outside? If there is any special phenomenon going on, I would think it is in the spiritual realm, and it's hard to comment on that.

However, I am also an "AI skeptic," along with many computer scientists. Not in the sense that what they are calling AI isn't real, or doesn't do the stuff they claim - I use some of the hyped algorithms in my work! The skepticism is about whether our technology is anywhere near real intelligence, or whether the so-called "singularity," where computers supposedly pass human intelligence, is near.

One thing that lends credence to a skeptical position is that, if you look at the history of AI and machine learning, you will see a series of ridiculous hype waves - we are in one now - followed by mass realization that the people promoting it were extremely naive and overconfident. In the late '80s the perception of AI was so bad that for a while people didn't even want to admit they were studying it. The previous hype cycle made many of the same claims we see in the current one, as did the one before that. It's always promised to be right around the corner. Actual victories are relatively scarce. Right now most of the hype is based on ONE statistical algorithm called deep learning, and it's starting to show its limits.

Another easy-to-explain thing that suggests skepticism is to just zoom in on Ray Kurzweil's claims. He is a sort of father of transhumanism - which is a sort of modern techno-religion, embraced by many Silicon Valley elites. Kurzweil publicly claimed with great certainty that the singularity was imminent - humans would have to cope with being 2nd-class citizens, dumb compared to our software. His claim was based on an estimate of the total computational power of the human brain vs. the rate of progress in building faster and faster computers.

Now it turns out that the estimate he used for the information processing power of the brain is a better estimate for the computational power of an individual neuron. Has he changed his tune at all? Nope.
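To give a rough sense of the gap, here is a ballpark in Python; the specific figures are my illustrative assumptions, not Kurzweil's exact numbers:

```python
# Ballpark of the kind used in singularity arguments (figures are rough).
neurons  = 8.6e10   # ~86 billion neurons
synapses = 1e4      # ~10,000 synapses per neuron (assumed)
rate_hz  = 1e2      # ~100 signaling events per second (assumed)

whole_brain = neurons * synapses * rate_hz   # ~1e17 "ops"/s
print(f"naive whole-brain estimate: {whole_brain:.0e} ops/s")

# If that figure is closer to the power of ONE neuron (as argued above),
# the whole-brain target grows by another factor of ~86 billion:
print(f"revised estimate: {whole_brain * neurons:.0e} ops/s")  # ~7e+27
```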

I think when tech people tell themselves that their arguments and beliefs are hyper-rational, and believe they could not be subject to the sort of drives that lead people to religion, they are ironically more likely to approach tech in a religious way. Many technologists - across the board - are IMO closer to priests today than scientists. My view of this transhumanist thing is that it is a sort of replacement religion. It has all the components... a higher power (post-singularity AI), salvation (uploading brains into computers), etc. If you look into it more deeply, it starts to seem bat ***t crazy, pardon my French.

I don't know how deep Musk is in this transhumanist philosophy, but I would say he is at least influenced by it. In some sense it is very narcissistic, like grandiose, to fear all these consequences of our inevitable robot overlords when right now the performance of these systems is pretty sketchy outside their specialized training, when compared to any kind of robust thought process. Narcissism = a need to be greater than oneself while fearing that one is inferior, i.e. falling in love with the FALSE image of oneself.

So I don't think we need to worry too much about slowing AI to prevent a Skynet type situation or something. I think we should be very cautious about endorsing decision makers who want to replace traditional systems with AI systems that perform inadequately on the fringes / for edge cases, simply so they can fire a lot of people. And I think a "slow tech" thing in general is a pretty cool idea. Let's take time, build beautiful things, get it right and give people time to adjust. Few in power seem to care about such a vision... including Musk, with the sole exception of AI and the Skynet scenario. The hubris is, at least, consistent, from a certain point of view.
 

yerrag

Member
Joined
Mar 29, 2016
Messages
10,883
Location
Manila
Tech is great but I think we need to question why it's being pushed on us so, so fast... especially when any tech user knows almost nothing is well built.

I think it's because AI is still an indiscriminate slave to programming, and, seen in this context of being devoid of a sense of what's right and what's wrong, it is a tool that absolves a protagonist of responsibility in crimes against humanity. It's akin to the tandem of Jack Dorsey and Vijaya Gadde (in a recent podcast of Joe Rogan's) blaming flaws in algorithms for Twitter banning people with certain viewpoints, but applied to situations where lives are literally being destroyed. It is a scary thought, as nothing can bring more fear to people than an entity that is seen as unintentionally evil, yet doing evil, and whose excuse is that the system is not perfect, with apologies made for it the same way they are for poor internet service. "Sorry for the inconvenience, but we are making upgrades to this system to improve our level of service" is what we'll be seeing more of. So, we'll just drink the Kool-Aid as long as we're not the class of people being destroyed.

Plausible deniability of responsibility is inherent in the design of such AI systems, and that's to be feared.
 
