The Brain Is Not A Computer - Does Not Process/store Information, Memories, Knowledge

pepsi

Member
Joined
Apr 15, 2013
Messages
177
Location
Texas
I always wondered why we need to sleep. I can lie down and not move a muscle and my body gets rest. But if I quiet my mind and try my hardest not to think about anything, that still won't be the same kind of rest my brain gets from sleep. I wonder if our consciousness (the real us) is located somewhere else out there in the universe and needs to go back each day for a little bit, for what reason I don't know. I think consciousness and the need for sleep are interrelated, because I think the purpose of sleep is to be unconscious for a little bit.
 
OP
haidut

haidut

Member
Forum Supporter
Joined
Mar 18, 2013
Messages
19,799
Location
USA / Europe
@haidut, you are playing fast and loose with math that you do not seem to understand. None of the claims you are making are even remotely supported by Levin's work, as far as I can tell.

>> Matter/energy > information, as in information is always secondary to matter.

If this is the case, why, for example, do you think that the equation for Boltzmann entropy (core thermodynamics) is *identical* to the equation for Shannon entropy? Modern physics is pointing in the direction that information theory is a better (or at least coequal) framing, not a subordinate process.
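For the record, here are the two equations side by side (this is standard textbook material, not from Levin; the Gibbs form of thermodynamic entropy is the one that makes the parallel exact):

```latex
% Gibbs entropy over microstate probabilities p_i:
S = -k_B \sum_i p_i \ln p_i
% Shannon entropy of a source with symbol probabilities p_i:
H = -\sum_i p_i \log_2 p_i
% Identical term by term, up to the constant k_B and the base of the logarithm.
```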

>> matter/energy processing device, not an information one

Computers are also matter/energy processing devices. And it is trivial to show that humans can act as information processing devices. This is a kind of Cartesian axiom, in any case, not any real distinction.

>> Look at one of my comments further below. No actual computer can increase/create mutual information (knowledge).
>> The Brain Is Not A Computer - Does Not Process/store Information, Memories, Knowledge

This kind of statement seems like it can only result from taking some very technical statements with formal definitions and applying them informally, in a "fast and loose" manner. This is probably why Levin himself did not make these sorts of wild claims based on his idea of independence conservation. It would appear to be a huge and unsubstantiated leap, to the point that I wonder whether the person who wrote the last article is incompetent or malicious.

Let's engage with this "Law" of independence conservation for mutual information. Rather than quote Levin directly, I am going to quote an academic who has attempted to formalize and organize his actual argument more clearly. Take a breath, because this is some wild math.

"Peano arithmetic has no computable consistent completion -- this is the content of Gödel's first incompleteness theorem. It was probably the most surprising result that has been found in the context of Hilbert's program, which calls for a formalization and axiomatization of all of mathematics, together with a "finitary" consistency proof for this axiomatization. It is important to see, however, that Gödel's result entails no assertion regarding the general realizability of consistent completions of Peano arithmetic. By basic results of mathematical logic, for every consistent axiomatic system there is a consistent completion. Gödel's result only entails the assertion that the consistent completion of Peano arithmetic is impossible by effectively calculable methods -- if we accept the Church-Turing thesis.

"Levin in his paper “Forbidden Information” argues that we can significantly expand this assertion to other than effectively calculable methods. His argumentation can be outlined as follows.

"Given any sequence, computing a consistent completion of Peano arithmetic relative to this sequence is equivalent to computing a total extension of the universal partial computable predicate u relative to this sequence. Due to what we will call the forbidden information theorem, every sequence that computes a total extension of u has infinite mutual information with the halting probability Ω. If we accept Levin's independence postulate, then no sequence generated by any locatable physical process may have infinite mutual information with Ω. So we can conclude the following extension of Gödel's incompleteness assertion, which we will call the forbidden information thesis: no sequence that is generated by any locatable physical process is a consistent completion of Peano arithmetic.

"Levin's exposition of the argumentation we just outlined is, however, rather sketchy and difficult to follow in some parts. Moreover, he tends to implicitly use results without explicitly mentioning them or indicating where they were proved. The main objective of the present work is to completely and critically elaborate Levin's argumentation."

So Gödel's famous proof, cited at the beginning, was that there are unprovable theorems in any formal system powerful enough to do arithmetic. Written before modern computers or calculators were common, this was very much meant to apply to human beings manipulating symbols in any way. I have studied the theorem extensively. Now what is this Church-Turing thesis, on which Gödel's result is said to rest when he claims there is no way to compute certain things? Wikipedia says, "It states that a function on the natural numbers is computable by a human being following an algorithm, ignoring resource limitations, if and only if it is computable by a Turing machine." Algorithms include any known general proof technique, and generally anything that has been formalized in math. A way of thinking about this, in very loose language, is that anything/anyone capable of a certain basic level of computation can compute anything computable.

Next comes a key sentence -- "Levin in his paper “Forbidden Information” argues that we can significantly expand this assertion to other than effectively calculable methods. His argumentation can be outlined as follows."

That is, Levin's goal was to BROADEN the claim. Not only can computer-y things not do it, Levin will argue; it goes beyond that.

Next up -- "Given any sequence, computing a consistent completion of Peano arithmetic relative to this sequence is equivalent to computing a total extension of the universal partial computable predicate u [...] every sequence that computes a total extension of u has infinite mutual information with the halting probability Ω." Whu? This is too hard to break down briefly, but it means that doing what Gödel proved you can't do would additionally lead to a "halting problem" situation. The halting problem is a proof that there is no *general* way to decide whether a computer-y thing running a program/solving a theorem is going to finish. In this case, it means that if you have a supposed method that purports to calculate halting information, it's bunk. (The why is interesting... the proof uses a paradox, showing that if you could do this, then you actually couldn't. Definitely worth a look if you are interested in this stuff.) Next up, we're told that IF we accept Levin's independence postulate -- the supposed basis on which humans can think things that computers can't, remember -- then "no sequence that is generated by any locatable physical process is a consistent completion of Peano arithmetic."
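For anyone curious, here is a minimal sketch of that paradox in Python. The `halts` function is a hypothetical stand-in for a halting oracle (no correct, total implementation can exist; this stub just guesses), and `contrarian` is the classic diagonal construction that defeats any candidate oracle:

```python
def halts(program):
    """Hypothetical halting oracle: claims to return True iff program()
    would eventually finish. This stub just guesses True; the argument
    below shows NO implementation can be correct on all inputs."""
    return True

def contrarian():
    """Do the opposite of whatever the oracle predicts about this very
    function: if the oracle says we halt, loop forever; if it says we
    loop, halt immediately."""
    if halts(contrarian):
        while True:   # oracle said "halts", so loop forever
            pass
    return "halted"   # oracle said "loops", so halt

# Whatever halts(contrarian) answers, contrarian does the opposite,
# so any total halts() must be wrong on at least one program.
print(halts(contrarian))  # prints True (the stub's guess -- which contrarian then falsifies)
```

Note that we only ask the oracle *about* `contrarian`; actually running `contrarian()` with this stub would loop forever, which is exactly the point.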

In other words, Levin concluded that beyond computer-y processes, nothing physical could transcend the Godel limit on computation. Or reduced very simply, even going beyond the computational methods of computers, nothing/no one can produce these kinds of answers that have eluded computational formal systems.
In this light, can you see why Howell's interpretation might be a bit of a reach?

Let's look at the independence *postulate* -- not Law -- itself. No mathematical proof or argument is given for it, i.e. we're doing metamathematical guessing here... that's okay, but it's very important context.

"In this section we want to discuss Levin’s independence postulate. It is a non-mathematical statement which we will need to argue for the non-mathematical forbidden information thesis." [...]

Ready? Here is the postulate itself -- which this reviewer ultimately concludes is unsupportable, by the way:

Thesis 3.26 (Independence postulate). Let α be an infinite sequence that is definable in N [the natural numbers]. Then for any infinite sequence β that is generated by any locatable physical process, we have Î(α:β) < ∞.

So all this independence stuff? Yeah, it's frigging based on MATH OF INFINITE SEQUENCES.
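For context (my gloss, not Geiger's wording): for finite strings, algorithmic mutual information is defined in terms of Kolmogorov complexity, and the postulate extends a version of it, Î, to infinite sequences:

```latex
% Algorithmic mutual information of finite strings x, y,
% in terms of prefix Kolmogorov complexity K:
I(x : y) = K(x) + K(y) - K(x, y)
% The independence postulate asserts, for any sequence \alpha
% definable in \mathbb{N} and any physically generated \beta:
\hat{I}(\alpha : \beta) < \infty
```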

Shall we get into the actual conservation laws proposed based on this framework? How about I spare us? The important point is that you can't take computer science and math words that vaguely sound like they support your ideas, pull them out of a formal context -- especially one that is quite aligned with machine functionalism -- and use them to make sweeping claims about what computers can't do, or what humans can do, or anything else. Howell jumps in and out of different metaphors for information without much care, but that is completely invalid. If there are conservation laws for mutual information, they only say something about the mathematical system (FORMAL system, remember Gödel?) of mutual information, not about the larger, big ideas that this guy wants to apply it to.

Here is my source http://pgeiger.org/dl/publications/geiger2012mutual.pdf -- I also skimmed a few of Levin's papers, I can point you to them if you want.

This is too broad a topic for me to discuss in writing. Maybe I will bring it up in one of Danny's shows. Did you read the study I posted in my previous comment? Here it is again.
Consciousness as a Physical Process Caused by the Organization of Energy in the Brain

Digital computers organize information, not energy. Consciousness and intelligence are energetic processes, not informational ones. Analog computers may be a much better model of the human brain than digital ones. Digital computers cannot model analog ones in the general sense. In an analog computer no two "inputs" are the same. The same goes for the outputs. Inputs in an analog computer (and the brain) are energetic/material, not informational. The brain is the same way. Peat has also spoken about that and why he thinks digital computers cannot properly model either consciousness or (general) intelligence.
How do you know? Students, patients, and discovery
"...Wiener's goal-directed machines, like Anokhin's functional systems, worked in space and time, and the idea of steering or guidance assumes a context of time and space in which the adjustments or adaptations are made. Analog computers and control systems in various ways involved formal parallels with reality. The components of the system, like reality, occupied space and time. Digital computers, with their different history and functions, for example their use for creating or breaking military codes, didn't intrinsically model reality in any way. Information had to be encoded and processed by systems of definitions. A sequence of binary digits has meaning only in terms of someone's arbitrary definitions. Parallel with the development of electronic digital computing machines, binary digital theories of brain function were being developed, by people who subscribed to views of knowledge very different from those of Anokhin and Wiener. (Anokhin argued against the idea that nerves use a simple binary code.) These computer models of intelligence justify educational practices based on authoritative knowledge and conditioned (arbitrary) reflexes. Neo-Kantianism has been the dominant academic philosophy in the U.S., turning philosophy into epistemology to exclude ontology. "Operationism" and logical positivism share with neo-Kantianism its elimination of ontology (concern with being itself). In the 1960s, Ludwig von Bertalanffy developed a theory of systems, defining a system as an “arrayed multitude of inter-linked elements.” Although it was intended as a description of biological systems, it reduced the teleological factors, needs and goals, to a kind of mechanical inner program, such as “regulatory genes.” “Following old modes of thought, some called this orderliness of life 'purposiveness' and sought for the 'purpose' of an organ or function. 
However, in the concept of a 'purpose' a desiring or intending of the goal always appeared to be involved--the type of idea to which the natural scientist is justly unsympathetic” (von Bertalanffy). His system theory was highly compatible with programmed digital computers, that could define the interactions of “elements,” but unlike Anokhin's definition of functional systems, it lacked a pattern-forming mechanism. In Anokhin's view, the system is formed by seeking its goal, and perceiving its progress toward the goal."

I am not trying to convince you. By now you probably know that the goal of this forum is not arguments. It is mutual enrichment. You take whatever you intuitively feel is valuable and use it to update your worldview. I do the same. Arguments are pointless and a waste of time, really.
 

Literally

Member
Joined
Aug 3, 2018
Messages
300
>> You take whatever you intuitively feel is valuable and use it to update your worldview.

That is a statement of a mystical worldview: the idea that intuition is not just an available source of truth but a superior source, or the ultimate one. Of course people should (and do) update their worldview with what they ultimately assess is valuable, but why should we reject methods other than intuition? And why should intuition be superior in any way when an empirical or logical source comes to bear on a subject?

I think that logic, math, and -- though a slipperier slope -- science are so darn effective, despite people having radically different 'intuitions' about subjects where they have something to say, because reality is largely objective. It's independent of our personal intuitions. Rejecting argument is tantamount to rejecting that there are formal (i.e. non-subjective) ways of learning about reality, isn't it? I mean, if intuition were the only source, there would be no reliable way to compare evidence about anything. If you said the Empire State Building has 102 stories and I said 101, well, we each have our own intuitions. Except, of course, that no sane person would hold this. In fact we could, after ensuring that we stuck to the same definitions, count, or cite various other evidence (e.g. the existence of postal addresses, floor by floor). That would be an extremely productive argument. It would not rely just on direct observation, because it would not be possible to observe all the stories at once.

>> Arguments are pointless and a waste of time really.

I both agree and disagree. I took great care to show how some of the points you made in favor of your position are objectively false, @haidut. While it was easy for you to link to something that seemed intuitively true, it took several hours, on one point, to show how the claim failed. Without intending offense, this is essentially Brandolini's Law: the amount of energy needed to refute bull**** is an order of magnitude bigger than that needed to produce it. It is a big problem with arguments.

Now from your point of view, that *was* a waste of time, since you simply moved on to other supposed sources of support for a pre-established belief. And this is, in fact, the only real alternative to the argumentative mode. In the argumentative mode, which is inextricably linked to the idea of formal reasoning, conclusions are meant to follow premises and valid chains of reasoning, rather than the other way 'round.

I have found that people who are intellectually earnest often (not always) do want to defend their theories against arguments. If the theory is any good, it should be able to withstand an argument, either by showing how the argument did not really apply, how the premises did not hold, where there is a flaw in the argument itself, or where it may be problematic but still preferred on balance of some other evidence/arguments. That is why I look up to scholars and anyone else who is willing to actually debate. It doesn't mean everyone needs to debate.

While makers of arguments are famous for digging in and refusing to change their minds, I would submit that
(i) There are many notable exceptions. I have changed my mind, occasionally, based on arguments -- the clash of ideas put to the test. It's not too hard to find other examples of cases where people find common ground or change one another's minds. Once I completely switched positions with somebody during the course of an argument.
(ii) Even if arguers don't change their minds, others are able to see how two different "proposals" hash out, where one may have advantages over the other. I think THIS process helps to guide and form many people's opinions on many things, over time. This is how, in practice, useful scientific theories have survived.

Which is to say, of course there is value to be gained by comparing ideas critically. Obviously.

But I don't think there would be a lot of value in US debating whether brains can do something that could not be done by any theoretical information processing machine, here. I would like to make a few final points and then invite you to get the last word in if you wish, @haidut.

1. You have cited Robert Pepperell, who consistently says things like

"Although information theoretic tools were being used to analyze and interpret the data in these studies we should note that what was actually being detected by the experimental procedures was not information per se but the organization of energetic activity or processing in the brain."

and

"Third, brain imaging techniques such as fMRI, PET and EEG don’t detect information in the brain, but changes in energy distribution and consumption."

So his analysis required "information theoretic tools," despite neurons not containing any information (see below).

What do you think would be detected if a scanning approach similar to that used for brains were applied to running digital computers? What would the information look like, for example? Might such a study, just possibly, also detect changes in... energy distribution and consumption?


2. Pepperell argues in How a trippy 1980s video effect might help to explain consciousness that

"The two leading contenders – Stanislas Dehaene’s Global Neuronal Workspace Model and Giulio Tononi’s Integrated Information Theory – both claim that consciousness results from information processing in the brain, from neural computation of ones and zeros, or bits."

This is beyond a straw man; it's just wrong. Whether the theories are grossly mischaracterized because the author hasn't bothered to understand them or because he prefers a straw man, who knows.

Pepperell continues: "Brains, I argue, are not squishy digital computers – there is no information in a neuron."

There is NO information in a neuron! This guy is willing to twist himself into all kinds of knots in order to avoid making people seem computer-y, but forgets that information theory does not depend on computers and that the word "information" well predates computers.

I have already referenced a study showing activation of an individual neuron corresponding to thinking or not thinking about a subject. It seems obvious at face value that brains do in fact "have" information according to just about any formal theory of information. And I don't doubt that @haidut and Pepperell would agree that, in principle, humans could do anything computers could do by manipulating the same symbols the computer manipulates (and, if necessary, manipulating symbols that stand in for other relevant physical components in the machine). This is why the claim seems rather bizarre to me. Plenty of people (like that hack Searle) make at least remotely plausible arguments along the lines that humans can do MORE than computers. But Pepperell wants to say that humans cannot compute. Which seems so crazy I wonder if I have misunderstood him, but... his claims seem straightforward. No information in neurons.

3. From my point of view, however, the important blind spot seems to be in thinking one could categorically rule out anything in the *other* direction. Any physical process can be described with symbols to any needed degree of precision... and if you are going to deny that, well, all you have to do is provide a counterexample -- without simply appealing to the edge of current measurement/sensor technology, I would hope. (That is, one could easily have said in 1500 AD that there was no reliable way to measure a number of molecules. We now can do so, although precision is always limited with any measurement, and we of course have equations -- which are information models -- to match.)


4. Pepperell again: "Brains are delicate organic instruments that turn energy from the world and the body into useful work that enables us to survive. Brains process energy, not information."

Wow, just wow. This is radical. Of course no one disputes that brains are squishy and delicate, while digital computers are hard and resilient. If Pepperell gave the slightest importance to accurately reflecting the idea he is arguing against, he might note that it is the prospect of the same kinds of patterns existing *despite* the difference in physical medium that is really being claimed by information theory. This is why we can say, for example, that two identical computers might have the 'same' information... or, for that matter, why two generals have the 'same' information based on whether they know a certain fact or not.

Following Pepperell's claim, if someone does calculus, there is no information. The point is just to move the mouth and the hands in a certain way. Lol.

But if a person is in a coma and can only blink once for yes and twice for no, they can still make meaningful decisions, even though they will never *physically* interact with the world again in any meaningful way. Their eye blinks don't physically accomplish anything unless they convey information. And in fact it could be ANY body part being wiggled. The physical part literally doesn't matter, as long as something can be moved. If the person in a coma also happens to be an accomplished detective, it would even be possible for her to solve a crime this way -- to produce information, literally in the form of a series of binary digits, that wasn't known before. Would she be any less human because she's not using her brain's energy to move *anything in particular* (i.e. it does not matter what is being moved, as it is "in formation"), or fry eggs, or whatever it is this "brain energy" is supposed to accomplish that is DEFINITELY NOT information related?

It is blindingly obvious that humans can retain and process information. So since that part of it is apparently just nonsense, what about the idea that processing energy is somehow different from processing information? Does that hold any water?

The short answer is that there are exceedingly good reasons to believe the exact opposite. Pepperell hasn't provided any reasons, as far as I can tell, to believe that these categories would be unrelated. The relevant physics and information theory have converged to a point of agreement that they are inextricably related. For example, it has been determined that current classes of computers are hitting a wall in terms of energy efficiency (already a core metric in computing) because of physical information erasure: when computations only run "one way" in time, they destroy information in their wake, and erasing it has an unavoidable energy cost. Future computers may be reversible, meaning computations can be run in either direction. Information, in this deep physics sense, will not be destroyed -- or at least this will be something chips and/or programmers contend with directly, avoiding erasure to the extent possible, because information destruction is closely related to the (broader) physical phenomenon of entropy. Look at the equations. Look at Landauer's principle.
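To make the energy/information link concrete, here's a quick back-of-the-envelope calculation of the Landauer limit, the minimum energy dissipated by erasing one bit at temperature T (standard physics, not taken from any source cited in this thread):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact by the 2019 SI definition)

def landauer_limit(temperature_kelvin):
    """Minimum energy in joules required to erase one bit of information
    at the given temperature (Landauer's principle: E = k_B * T * ln 2)."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K), erasing a single bit costs at least ~3e-21 J.
print(f"{landauer_limit(300):.3e} J per bit")  # prints 2.871e-21 J per bit
```

Tiny per bit, but real chips flip trillions of bits per second, which is exactly why irreversible computation runs into this wall.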

If you know physics, you may be thinking of reversible processes in thermodynamics -- bingo! It's the same stuff. There is no fundamental difference between energy-processing and information-processing machines -- well, within an appropriate thermal window, anyhow (not too cold, not too hot). When information theorists and physicists arrive at the same equation and merge their understanding of those equations, it's something to take note of. Note that it had nothing to do with intuitions. The similarities had been staring us in the face, but it took formal methods -- oh look, these sets of equations are the same! -- to learn this. The same thing has happened in computer science and old-school logic, BTW... it's known as the Curry-Howard isomorphism. In both cases, it is possible to establish the correspondence through argument, because it's real. If you have different intuitions about it, you just flunk. And today it is very common to use mechanical (i.e. computer) means of verifying all kinds of formal arguments. It's not just possible, it's done. Systems that have to be correct and reliable get built on formal methods... which means stuff you can argue about and determine that some claims are better than others. Nobody who can have a bridge built on formal methods wants one built on intuition.
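As a toy illustration of Curry-Howard (my own example, not from any source cited here): a program whose type says "from A implies B and B implies C, produce A implies C" is, read as logic, a proof that implication is transitive. The function body *is* the proof.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    """Under Curry-Howard, this type signature reads as the proposition
    (A -> B) -> (B -> C) -> (A -> C): implication is transitive.
    The one-line body below is the proof term."""
    return lambda a: g(f(a))

# Using it: an (int -> str) composed with (str -> int) yields an (int -> int).
length_of_double = compose(lambda n: "x" * (2 * n), len)
print(length_of_double(5))  # prints 10
```

Proof assistants like Coq and Lean run on exactly this correspondence, which is part of why "mechanically verified argument" is a real, everyday thing.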

Zooming out, the kind of information that humans can work with directly (at a macro scale) is really about patterns. Things "in formation." A MODEL, where each aspect of the information corresponds to some aspect of the thing being modeled. I suspect this is why Pepperell can't seem to find it with a microscope. He seems to deny the validity of such a thing, or to suggest that humans don't do it, purely because he wants, a priori, for humans to be capable of some fundamental physical process that machines aren't. As an argumentative technique, this doesn't hold water.

I think if someone wanted to establish that human brains can do something that, in theory, an information processing machine couldn't, it would be incumbent on them to provide examples of things that humans can do that information processing systems, in theory, couldn't. The "in theory" is important. In 1950 it would have been easy to argue that computers can't read handwriting, but then they could. But at no time could you have successfully argued that reading handwriting cannot, in theory, be framed as an information processing task. Good luck with that, I guess?

Like I said, feel free to get the last word in, @haidut. You have no obligation to defend your ideas if you don't want to... it doesn't mean they are false. I have not actually argued that they are false; I have argued that you are relying on a bunch of arguments that don't actually hold, and that this might prompt you to take stock. It's nothing personal... I hope you don't feel personally attacked just because I have jumped all over some of the claims. IMO, some rigor would benefit you in these avenues.

You know, I think intuition does have a big place, and if you're interested in the idea that humans can do stuff computers couldn't do even in theory, I'd encourage you to keep exploring it... but to find better arguments. Yes, arguments are what Pepperell and some of these others you've cited are making -- both for and against.
 
Last edited:
Joined
Apr 1, 2021
Messages
296
I believe that forcing yourself to create precise memories increases stress. One friend of mine has incredible memory and recall, but he would always prefer entering a novelty mode, using creativity and imagination and looking at the world with a child's eyes, rather than logically recalling precise information.
 