How Does Alcohol Affect Neurotransmitters

biggirlkisss

Member
Joined
Mar 1, 2013
Messages
971
How does alcohol affect neurotransmitters? Why do people feel more confident when they have some alcohol, and then end up with a huge hangover? It must be increasing dopamine, no?
 

Frankdee20

Member
Joined
Jul 13, 2017
Messages
3,772
Location
Sun Coast, USA
How does alcohol affect neurotransmitters? Why do people feel more confident when they have some alcohol, and then end up with a huge hangover? It must be increasing dopamine, no?

Lol, let me chime in, sir. Okay, booze affects many neurotransmitter systems. It is primarily an N-methyl-D-aspartate (NMDA) receptor antagonist. You may be familiar with other NMDA antagonists like ketamine, magnesium, DXM, PCP, and, who can forget, chloroform. It will not cause the strong dissociative anesthetic effects of PCP and ketamine, though.

Alcohol also has a degree of GABA receptor agonism, but unlike benzodiazepines it has no direct binding site. When GABA activity increases a bit, many people get anxiety relief; perhaps that's where the "liquid courage" moniker stems from. Since GABA inhibits signaling, alcohol is classified as a depressant. It slows us down, mentally and physically.

It also agonizes the 5-HT3 receptor, and this is commonly associated with its negative hangover effects like nausea, vomiting, and seizure propensity. I always thought it elicits seizures in withdrawal via glutamatergic rebound, once NMDA receptors are no longer attenuated.

Lastly, the rewarding effects of repeated ingestion are mediated via dopamine D2 receptors, but I'm not sure how this works. It's not direct agonism.
 
OP
biggirlkisss

biggirlkisss

Member
Joined
Mar 1, 2013
Messages
971
Thanks Frank, that's a great explanation. I just don't understand how drugs like benzodiazepines increase GABA, lowering adrenaline, yet alcohol lowers testosterone levels.
 

Frankdee20

Member
Joined
Jul 13, 2017
Messages
3,772
Location
Sun Coast, USA
Thanks Frank, that's a great explanation. I just don't understand how drugs like benzodiazepines increase GABA, lowering adrenaline, yet alcohol lowers testosterone levels.

Well, alcohol affects testosterone via its interactions with our liver. Various enzymatic reactions occur, or the enzymes get used up.
 

Travis

Member
Joined
Jul 14, 2016
Messages
3,189
I read a study on how it influenced serotonin. Let me see if I can find it . . . [searching] . . . nope,* but I'm pretty sure I'd posted it in a Jordan Peterson thread.

I'm fairly certain that pure ethanol initially lowers serotonin, but the addition of sugar could reverse that. Refined sugar is thought to cause a rise in serotonin: insulin drives peripheral amino acid uptake, which lowers the pool of amino acids competing with tryptophan for uptake into the brain—the amino acid on which serotonin synthesis is directly dependent. Tryptophan levels stay relatively high in the blood because albumin has a few high-affinity binding sites for it.
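To make the ratio idea concrete, here's a tiny Python toy—every number in it is invented for illustration, nothing is from the Badawy paper in the footnote:

```python
# Toy illustration of the competition idea: brain tryptophan uptake tracks the
# ratio of tryptophan to the other large neutral amino acids (LNAAs) that share
# its transporter.  Every concentration here is an invented placeholder in
# arbitrary units, not a measured value.

def trp_ratio(trp, competitors):
    """Tryptophan relative to the sum of competing LNAAs."""
    return trp / sum(competitors)

trp = 50.0
lnaa = [120.0, 90.0, 60.0, 55.0, 45.0]      # stand-ins for leu, ile, val, phe, tyr

# Hypothetical insulin response: competitors taken up into muscle,
# albumin-bound tryptophan largely spared.
lnaa_after = [c * 0.7 for c in lnaa]
trp_after = trp * 0.95

print(f"ratio before insulin: {trp_ratio(trp, lnaa):.3f}")
print(f"ratio after insulin:  {trp_ratio(trp_after, lnaa_after):.3f}")  # higher ratio -> more brain uptake
```

The point is only that the ratio climbs when the competitors drop, even though tryptophan itself barely moves.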

But even with straight ethanol, like martinis, chronic intake almost invariably causes brain serotonin to rise. This is probably because the liver has less ability to send tryptophan down the kynurenine pathway—leading to higher plasma tryptophan, higher brain uptake, and more serotonin synthesis by the substrate-unsaturated and unregulated enzymes tryptophan hydroxylase and aromatic amino acid decarboxylase. This part is the most certain, as elevated serotonin and its metabolites have consistently and unequivocally been measured in the cerebrospinal fluid of alcoholics.

If you want to lower serotonin, then I'd drink martinis (but not all day, every day). The α- and β-pinenes in gin have slight psychedelic effects of their own, but the stereochemistry matters. Levorotatory (−)-α-pinene is a depressant, while (+)-α-pinene is not. And strangely, conifers from England have (+)-α-pinene, while those from the Americas have exclusively (−)-α-pinene. These two molecules are enantiomers: non-superimposable mirror images of each other.
I think Tanqueray is the best gin, and Noilly Prat the best vermouth (hands-down).


*Edit: Here it is:
  • Badawy, A. A.-B., et al. "Decrease in circulating tryptophan availability to the brain after acute ethanol consumption by normal volunteers: implications for alcohol-induced aggressive behaviour and depression." Pharmacopsychiatry 28(S2) (1995): 93–97.
    • And a brief description is given here.
 
Last edited:

sladerunner69

Member
Joined
May 24, 2013
Messages
3,307
Age
31
Location
Los Angeles
I read a study on how it influenced serotonin. Let me see if I can find it . . . [searching] . . . nope,* but I'm pretty sure I'd posted it in a Jordan Peterson thread.

I'm fairly certain that pure ethanol initially lowers serotonin, but the addition of sugar could reverse that. Refined sugar is thought to cause a rise in serotonin: insulin drives peripheral amino acid uptake, which lowers the pool of amino acids competing with tryptophan for uptake into the brain—the amino acid on which serotonin synthesis is directly dependent. Tryptophan levels stay relatively high in the blood because albumin has a few high-affinity binding sites for it.

But even with straight ethanol, like martinis, chronic intake almost invariably causes brain serotonin to rise. This is probably because the liver has less ability to send tryptophan down the kynurenine pathway—leading to higher plasma tryptophan, higher brain uptake, and more serotonin synthesis by the substrate-unsaturated and unregulated enzymes tryptophan hydroxylase and aromatic amino acid decarboxylase. This part is the most certain, as elevated serotonin and its metabolites have consistently and unequivocally been measured in the cerebrospinal fluid of alcoholics.

If you want to lower serotonin, then I'd drink martinis (but not all day, every day). The α- and β-pinenes in gin have slight psychedelic effects of their own, but the stereochemistry matters. Levorotatory (−)-α-pinene is a depressant, while (+)-α-pinene is not. And strangely, conifers from England have (+)-α-pinene, while those from the Americas have exclusively (−)-α-pinene. These two molecules are enantiomers: non-superimposable mirror images of each other.
I think Tanqueray is the best gin, and Noilly Prat the best vermouth (hands-down).


*Edit: Here it is:
  • Badawy, A. A.-B., et al. "Decrease in circulating tryptophan availability to the brain after acute ethanol consumption by normal volunteers: implications for alcohol-induced aggressive behaviour and depression." Pharmacopsychiatry 28(S2) (1995): 93–97.
    • And a brief description is given here.


"Non superimposable mirror images of each other"? It that even possible?

Do you enjoy Hendrick's gin, or is it too floral?
 
OP
biggirlkisss

biggirlkisss

Member
Joined
Mar 1, 2013
Messages
971
I thought regular sugar increases T3 production, which helps you, so how could alcohol and sugar cause problems unless it's an allergic reaction?
 

Frankdee20

Member
Joined
Jul 13, 2017
Messages
3,772
Location
Sun Coast, USA
I thought regular sugar increases T3 production, which helps you, so how could alcohol and sugar cause problems unless it's an allergic reaction?

I've definitely read that before, and I always assumed sugar helps tryptophan get across into the brain. Then sugar was advocated on this forum, so it puzzled me.
 

Travis

Member
Joined
Jul 14, 2016
Messages
3,189
"Non superimposable mirror images of each other"? It that even possible?
My right hand is a non-superimposable mirror image of my left—when the two are forced to lie in the same plane. Molecules of this type are designated "(+) and (−)" according to which direction they rotate plane-polarized light in solution; the older "D and L" labels refer instead to configuration relative to glyceraldehyde. The "R and S" designation is sometimes used based on the priority of functional groups; but this, of course, says nothing about light rotation.
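And if you want "non-superimposable" in numbers instead of hands, here's a little numpy sketch of my own—the coordinates are invented, and the Kabsch fit just finds the best proper rotation:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Best-fit RMSD between two labeled point sets after centering and an
    optimal *proper* rotation (Kabsch algorithm)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # keep det(R) = +1: rotations only
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    diff = P @ R.T - Q
    return np.sqrt((diff ** 2).sum(axis=1).mean())

# Four distinct "substituents" around a center, like a chiral carbon
# (coordinates are invented placeholders).
chiral = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.3, 0.0],
                   [0.0, 0.0, 1.7]])
mirror = chiral * np.array([1.0, 1.0, -1.0])         # reflect through the xy-plane
print("chiral set vs its mirror:", round(kabsch_rmsd(chiral, mirror), 3))   # > 0: not superimposable

# A flat (coplanar) arrangement IS superimposable on its mirror image.
flat = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.3, 0.0],
                 [1.0, 1.3, 0.0]])
flat_mirror = flat * np.array([-1.0, 1.0, 1.0])
print("flat set vs its mirror:", round(kabsch_rmsd(flat, flat_mirror), 3))  # ~ 0
```

The chiral arrangement keeps a residual error no matter how it's rotated; flatten the points into a plane and the error drops to zero.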
Do you enjoy Hendrick's gin, or is it too floral?
I like it, but I still like Tanqueray the most. It is certainly the piniest London dry gin, and I've tried nearly all of them (the exported ones, anyway).

I would put Hendrick's right near the top. Bombay Sapphire has a pretty bottle, but I find it not as good. The Bombay in the clear bottle is more spicy than piney, but it's still better than Seagram's—the most popular mid-shelf gin.

I've tried all four of the gins produced in my state, and one of them was really good. Craft alcohol supports the local economy.
 

Travis

Member
Joined
Jul 14, 2016
Messages
3,189
I've definitely read that before, and I always assumed sugar helps tryptophan get across into the brain. Then sugar was advocated on this forum, so it puzzled me.
I think it depends on how fast you drink it. The way Fernstrom and others describe it, the effect is dependent on insulin, so if you drink it slowly you could be okay.

And going by their explanation, eating gelatin would prevent this from happening since the bloodstream would be flooded with the competing amino acids.

But drinking sugar on an empty stomach wigs me out, and this could be the serotonin. Even raw honey in water will do this. I never get this effect from piling down half a watermelon, and I think this could be because of the competing amino acids.
 

Frankdee20

Member
Joined
Jul 13, 2017
Messages
3,772
Location
Sun Coast, USA
My right hand is a non-superimposable mirror image of my left—when the two are forced to lie in the same plane. Molecules of this type are designated "(+) and (−)" according to which direction they rotate plane-polarized light in solution; the older "D and L" labels refer instead to configuration relative to glyceraldehyde. The "R and S" designation is sometimes used based on the priority of functional groups; but this, of course, says nothing about light rotation.

I like it, but I still like Tanqueray the most. It is certainly the piniest London dry gin, and I've tried nearly all of them (the exported ones, anyway).

I would put Hendrick's right near the top. Bombay Sapphire has a pretty bottle, but I find it not as good. The Bombay in the clear bottle is more spicy than piney, but it's still better than Seagram's—the most popular mid-shelf gin.

I've tried all four of the gins produced in my state, and one of them was really good. Craft alcohol supports the local economy.

Isn't this why the D or L isomers of speed yield opposing effects?
 

Frankdee20

Member
Joined
Jul 13, 2017
Messages
3,772
Location
Sun Coast, USA
I think it depends on how fast you drink it. The way Fernstrom and others describe it, the effect is dependent on insulin, so if you drink it slowly you could be okay.

And going by their explanation, eating gelatin would prevent this from happening since the bloodstream would be flooded with the competing amino acids.

But drinking sugar on an empty stomach wigs me out, and this could be the serotonin. Even raw honey in water will do this. I never get this effect from piling down half a watermelon, and I think this could be because of the competing amino acids.

Watermelon is awesome, that citrulline keeps me rock hard, like titanium!
 

ddjd

Member
Joined
Jul 13, 2014
Messages
6,677
It also agonizes the 5-HT3 receptor
Frank, does it only agonise this serotonin receptor, or any others? And would different types of alcohol agonise different receptors—like, would red wine agonise 5-HT2A...
 

Travis

Member
Joined
Jul 14, 2016
Messages
3,189
Isn't this why the D or L isomers of speed yield opposing effects?
It must be, since nothing else is different between the two. This type of selectivity, in my opinion, is the best evidence for specific receptors. The wrong isomer would be like putting your left foot in your right shoeprint; it would not align.
If the glove does not fit, you must acquit. ―Johnnie Cochran
 

Frankdee20

Member
Joined
Jul 13, 2017
Messages
3,772
Location
Sun Coast, USA
Lol, yeah, one isomer of speed is used in Vicks as a bronchodilator; the other isomer is a great help for studying calculus till 5 AM.
 

Travis

Member
Joined
Jul 14, 2016
Messages
3,189
Lol, yeah, one isomer of speed is used in Vicks as a bronchodilator; the other isomer is a great help for studying calculus till 5 AM.
You're probably right, but I find that coffee works pretty well—something that I'm just now drinking after taking a one-week hiatus.

So I'm ready to have a look at Miles Mathis attacking the logical framework—the very definition—of the derivative (∂x/∂t)! He has bold and provocative titles such as:

➫ A Re-definition of the Derivative (why the calculus works—and why it doesn’t)
➫ The Calculus is Corrupt
➫ The Proof of the Derivative for Powers is False
➫ The Derivatives of ln(x) and 1/x are Wrong
➫ The Derivative of log(x) is also Wrong
Bold titles indeed; they can all be found here.

He's certainly right about a few things, notably his article on π and the particle nature of light. I've not read them all, or even a considerable fraction of them—he's got hundreds, and they're long—so I'm still kinda skeptical. I'm getting the feeling that he's about 90% right, but this number will probably change as I read more.

The coolest website for calculus is here, because it's got thousands of practice problems and answers. This can actually be enjoyable, and problem-solving with calculus is far more interesting than the dry, boring mathematics of statistics and combinatorics.
 

Travis

Member
Joined
Jul 14, 2016
Messages
3,189
Okay, so I read the article called A REDEFINITION OF THE DERIVATIVE: why the calculus works—and why it doesn't by Miles Mathis, and I must say it's almost entirely not worth reading. You might almost consider it a waste of time—here's why:

The article is full of what I'd call Mathisian pseudo-strawmen—others seem to agree:

Mathis: [I have gotten several emails over the years from angry mathematicians, saying or implying that my mentioning this derivation is some sort of strawman. [...]]

…which he follows up with another pseudo-strawman:

Mathis: [[...] They tell me they don't prove the derivative that way and then launch into some longwinded torture of both mediums (math and the English language) to show me how to do it. [...]]

…implying that pointing out the flaws and strawmen in his system—such as what follows—is simply "longwinded torture." Could they have a point?

Mathis: I answer, where is the t axis, in that case? How are you or the bug drawing the t component on the wall? You are not drawing it, you are ignoring it. In that case the given curve equation does not apply to the curve you have drawn on the wall, it applies to some three-variable curve on three axes.
The dissenter says, “Maybe, but the curvature is the same anyway, since t is not changing.”
I say, is the curve the same? You may have to plot some “points” on a three dimensional graph to see it, but the curve is not the same. Plot any curve, or even a straight line on an (x, y) graph. Now push that graph along a t axis. The slope of the straight line decreases, as does the curvature of any curve. Even a circle is stretched. This has to affect the calculus. If you change the curve you change the areas under the curve and the slope of the tangent at each point.
You don't need to draw a t-axis for a parametric equation; you can have a time variable without doing this. Imagine it like so: take a two-dimensional graph and set the time units to minutes, to make it easy; then draw a point at every time t. Voilà! You have a parametric curve, and time needs no spatial dimension. By introducing the t-axis as a necessary component, Mathis is creating a pseudo-strawman that he can easily knock down for his own glorification and the entertainment of his readership.
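A minimal sketch of that, with x(t) and y(t) as arbitrary example functions of my own (nothing here is from Mathis):

```python
import numpy as np

# A parametric curve: time is just the sampling parameter, not a drawn axis.
t = np.linspace(0.0, 10.0, 101)     # "draw a point at every time t"
x = np.cos(t)
y = np.sin(2.0 * t)

points = np.column_stack([x, y])    # the curve lives entirely in the xy-plane
print(points.shape)                 # (101, 2): two spatial coordinates, no t-axis anywhere
print(points[:3])
```

The time variable is right there in the code, yet no spatial axis was ever allocated to it.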

Mathis: Question 1 concerns finding an instantaneous velocity, which is a velocity over a zero time interval. This is done all the time, up to this day. Question 2 is the mathematical inverse of question 1. Given the velocity, find the distance traveled over a zero time interval. This is no longer done, since the absurdity of it is clear. On the graph, or even in real life, a zero time interval is equal to a zero distance. There can be no distance traveled over a zero time interval, even less over a zero distance, and most people seem to understand this. Rather than take this as a problem, though, mathematicians and physicists have buried it. It is not even paraded about as a glorious paradox, like the paradoxes of Einstein. No, it is left in the closet, if it is remembered to exist al all. [...]
A method that yields an instantaneous velocity must be a suspect method. An equation derived from this method cannot be trusted until it is given a logical foundation. There is no distance over a zero distance; and, equally, there is no velocity over a zero interval.

This is not true, since zero is never achieved. The finite interval ∂x is always taken as the non-reducible unit: the infinitesimal. It has never been thought of as zero by any author that I've read; the difference quotient simply approaches its limit as the interval shrinks.
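A few difference quotients make the limit behavior plain—f(x) = x³ at x₀ = 2 is just my own arbitrary choice, not anything from the article:

```python
# Difference quotients for f(x) = x**3 at x0 = 2: the interval h shrinks but is
# never zero, and the quotient settles toward the derivative 3*x0**2 = 12.
f = lambda x: x ** 3
x0 = 2.0
for h in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    q = (f(x0 + h) - f(x0)) / h
    print(f"h = {h:<7} quotient = {q:.6f}")
```

No step ever uses h = 0; the quotients just settle toward 12. Which ties into what he says next: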

Mathis: The final step—letting δx go to zero—cannot be defended whether you are taking only taking the denominator on the left side to zero or whether you are taking the whole fraction toward zero (which has been the claim of most). The ratio δy/δx was already compromised in the previous step. The problem is not that the denominator is zero; the problem is that the numerator is a point. The numerator is zero.

Mathis: For it tells us that when we are finding the derivative, we are finding the rate of change of the first variable (the primed variable) when the other variable is changing at the rate of one. Therefore, we are not letting either variable approach a limit or go to zero. To repeat, ΔΔx is not going to zero. It is the number one.
That is why you can let it evaporate in the denominator of the current calculus proof. In the current proof the fraction Δy/Δx (this would be ΔΔy/ΔΔx by my notation) is taken to a limit, in which case Δx is taken to zero, we are told. But somehow the fraction does not go to infinity, it goes to Δy. The historical explanation has never been satisfactory. I have shown that it is simply because the denominator is one. A denominator of one can always be ignored.

This is simply false. The denominator never "evaporates" in the current calculus proof. This is why Leibniz always wrote the derivative as ∂x/∂t—this is the derivative, not ∂x. The term ∂x is the infinitesimal unit. It may seem to "evaporate" in Newton's notation, but that notation is not generally used in pure mathematics (although it's sometimes used to save space or to circumvent the typographical inconvenience of writing a fraction). To show further that the denominator does not evaporate, note that it is treated exactly like a fraction—as such:

∂y/∂x = 3x²

∂y = 3x²∂x

∫∂y = ∫3x²∂x

y = x³
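For anyone who wants the bookkeeping checked by machine, here are the same steps in sympy—the 3x² example is the one above, and the last line is my own addition, anticipating the second derivative mentioned further down:

```python
import sympy as sp

# Same manipulation as above, done symbolically.  The constant of integration
# is omitted here, just as it is in the steps above.
x = sp.symbols('x')
y = x ** 3

print(sp.diff(y, x))                # 3*x**2
print(sp.integrate(3 * x ** 2, x))  # x**3  -- the denominator never "evaporated"
print(sp.diff(y, x, 2))             # 6*x   -- the second derivative, for later
```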

Mathis: Some will think that (y + δy) and (x + δx) are the co-ordinates of another any-point on the curve—this any-point being some distance further along the curve than the first any-point. But a closer examination will show that the second curve equation is not the same as the first. The any-point expressed by the second equation is not on the curve y = x². In fact, it must be exactly δy off that first curve. Since this is true, we must ask why we would want to subtract the first equation from the second equation. Why do we want to subtract an any-point on a curve from an any-point off that curve?

Not true. You can say that the point's vertical position is ∂y "off that first curve," but its horizontal position changes accordingly by ∂x—placing it back on the curve. This is more strawman-type argumentation from Mathis—and there's more, but you get the point. So what is his redefinition of the derivative? Well, something that isn't even internally consistent. He should know that internal consistency is a requisite for any mathematical system, since he says so in this very article:

Mathis: Carl Boyer said, “Since mathematics deals with relations rather than with physical existence, its criterion of truth is inner consistency rather than plausibility in the light of sense perception or intuition.” I agree,
He claims to agree, but his system disagrees. It seems plausible only if you cherry-pick the axioms laid out in his table:

Mathis: Δx = 1, 2, 3, 4, 5, 6, 7, 8, 9...
Δ2x = 2, 4, 6, 8, 10, 12, 14, 16, 18...
Δx² = 1, 4, 9, 16, 25, 36, 49, 64, 81...
Δx³ = 1, 8, 27, 64, 125, 216, 343...
Δx⁴ = 1, 16, 81, 256, 625, 1296...
[truncated here; see article for full listing]
And so on

Voila. We have the current derivative equation, just from a table.

And he selects this as a heuristic pseudo-proof:

Mathis: Now let's pull out the important lines and relist them in order:
ΔΔx = 1, 1, 1, 1, 1, 1, 1
ΔΔΔx² = 2, 2, 2, 2, 2, 2, 2
ΔΔΔΔx³ = 6, 6, 6, 6, 6, 6, 6
ΔΔΔΔΔx⁴ = 24, 24, 24, 24

[truncated here; see article for full listing]​

Do you see it?
2ΔΔx = ΔΔΔx²
3ΔΔΔx² = ΔΔΔΔx³
4ΔΔΔΔx³ = ΔΔΔΔΔx⁴

[truncated here; see article for full listing]
and so on.

All of this makes sense, but it breaks down if you go any lower; it breaks down at ΔΔx² = Δ2x. Referring to his table above, this would make (1, 3, 5, 7, 9, 11, 13, 15, 17, 19...) = (2, 4, 6, 8, 10, 12, 14, 16, 18...). These can never be made equivalent—just try, or run the quick check below.
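A few lines of Python (mine, not Mathis's) reproduce his pattern and both of the failures:

```python
import numpy as np

# Rebuild the table with repeated first differences (np.diff is the "delta"),
# starting the list at 1 as his table does.
x = np.arange(1, 15)

print(np.diff(x))                          # ΔΔx    -> 1, 1, 1, ...
print(np.diff(np.diff(x ** 2)))            # ΔΔΔx²  -> 2, 2, 2, ...
print(np.diff(np.diff(np.diff(x ** 3))))   # ΔΔΔΔx³ -> 6, 6, 6, ...

# One level down the pattern fails: differencing Δx² gives the odd numbers,
# not the sequence Δ2x = 2, 4, 6, ...
print(np.diff(x ** 2))                     # 3, 5, 7, 9, ...
print(2 * x)                               # 2, 4, 6, 8, ...

# And 3Δx² is not one difference of Δx³ either:
print(3 * x ** 2)                          # 3, 12, 27, 48, ...
print(np.diff(x ** 3))                     # 7, 19, 37, 61, ...
```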

So does he ever use the identity ΔΔx² = Δ2x? Well…

Mathis: So, 3Δx² = ΔΔx³
[Deltas may be cancelled across these particular equalities]*
And, Δt' = 3Δx² = ΔΔx³
Δt = Δx³
Therefore, Δt' = ΔΔt

He states that 3Δx² = ΔΔx³, but his chart of axioms indicates otherwise:

3Δx² ≠ ΔΔx³

3(1, 4, 9, 16, 25, 36, 49, 64, 81... ) ≠ (1, 7, 19, 37, 61, 91, 127...)

(3, 12, 27, 48, 75, 108, 147, 192, 243... ) ≠ (1, 7, 19, 37, 61, 91, 127...)

Mathis: It means that the x-axis itself has a rate of change of one, and the y or t-axis also. The number line itself has a rate of change of one, by definition. None of my number theory here would work if it did not.

Mathis: I must also stress that the cardinal number line has a RoC of 1 no matter what numbers you are looking at.

Mathis: Even if we put in fractions or decimals, Δx will be changing at the rate of one.

Mathis: Following this strict method, we find that any integer subtracted from the next is equal to 1, which must be written ΔΔx = 1. On a graph each little box is 1 box wide, which makes the differential from one box to the next 1. To go from one end of a box to the other, you have gone 1. This distance may be a physical distance or an abstract distance, but in either case it is the change of a change and must be understood as ΔΔx = 1.
Someone might interrupt at this point to say, "You just have one more delta at each point than common usage.

No, someone will likely interrupt at this point and say: "Why is your interval always equal to one?" The answer is simply that he thinks the denominator of the derivative (∂x/∂t) "disappears," but I've shown that this is never the case; it never disappears. He needs this to be so as the entire premise of his article—just look at the pains he goes through to argue this one point:

Mathis: If this is not clear, let us take the case where I let you choose values for x1 and x2 arbitrarily, say x1 = .0000000001 and x2 = .0000000002. If you disagree with my theory, you might say, "My gap is only .0000000001. Therefore my RoC must be much slower than one. A sequence of gaps of .0000000001 would be very very slow indeed." But it wouldn’t be slow. It would have a RoC of 1. You must assume that your .0000000001 and .0000000002 are on the number line. If so, then your gap is ten billion times smaller than the gap from zero to 1. Therefore, if you relate your gap to the number line—in order to measure it—then the number line, galloping by, would traverse your gap ten billion times faster than the gap from zero to one. The truth is that your tiny gap would have a tiny RoC only if it were its own yardstick. But in that case, the basic unit of the yardstick would no longer be 1. It would be .0000000001. A yardstick, or number line, whose basic unit is defined as 1, must have a RoC of 1, at all points, by definition.

And below is his only boldface equation in the entire article, the distilled wisdom of Miles Mathis. This is what you've wasted two hours of reading to find:

Mathis: *Therefore, Δt' = ΔΔt
The derivative is just the rate of change of our dependent variable Δt. But I repeat, it is the rate of change of a length or period. It is not the rate of change of a point or instant. A point on the graph stands for a value for Δt, not a point in space. The derivative is a rate of change of a length (or a time period).

And it's no different from the current formalism, just expressed differently. In Leibniz notation:

∂(∂t) = ∂²t

Mathis: You cannot postulate the existence of a limit at a “point” that is already defined by two differentials, (x - 0) and (y - 0).

At this point he gives up all pretense of being rigorous and starts using the hyphen as a minus sign (-, −).

Mathis: This means that the “uncertainty” of quantum mechanics is due (at least in part) to the math and not to the conceptual framework. That is to say, the various difficulties of quantum physics are primarily problems of a misdefined Hilbert space and a misused mathematics (vector algebra), and not problems of probabilities or philosophy.
No. The primary problem with quantum physics is imagining light as actual physical waves—and especially as pressure waves, as depicted to explain the double-slit experiment (though sine waves are still inappropriate in most cases). This is where Miles Mathis really shines. Anybody interested in this needs to read this excellent article by Miles Mathis.
 
Last edited:

Frankdee20

Member
Joined
Jul 13, 2017
Messages
3,772
Location
Sun Coast, USA
[attached image]
 
