Friday, June 02, 2006

Not Even Wrong


Roger Penrose calls it "compulsive reading", but Peter Woit's new book, Not Even Wrong: The Failure of String Theory and the Continuing Challenge to Unify the Laws of Physics, has a much greater potential for compelling the mainstream to openly question string theory, since it is being published at about the same time as Lee Smolin's new book, The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next.

What comes next should be to fix what never got fixed before moving on...

silly me

Further Developments:
I think the timing makes this relevant, since this piece just appeared in the Financial Times by science writer Robert Matthews, who severely thrashes string theory as the farce it will rapidly become known as, once the carefully manicured public finally gets the wake-up call that string theory is actually just a big ball of mathematical yarn.

Luckily for the world of string theory fanatics, we have the most fanatical of them all, Lubos Motl, to call everyone who doesn't buy the constant barrage of string theory hype an ignorant crackpot. He says that because he believes in math theorems that may very well eventually prove to reflect no actual physics.

It's funny, but the way I learned physics from decent, honest scientists, a crackpot was someone like Lumo, who takes for granted that no new information can overturn their own crackpot theory. Fanatics, in other words, whose belief system produces a willful ignorance that cannot be reasoned past with new facts.

scary

18 comments:

Psybertron said...

Hi Island,

Long time no correspond. Ian (Psybertron) here. I like your blog, and I'm still keen to understand your main angle. Whatever my initial "tautological truism" view of anthropic principle(s), I do actually believe you have a point - summarised in "Once upon a space-time", I guess - even if I may not understand it.

I've been corresponding in a group of people called "Friends of Wisdom" which seems to involve genuine academics (including several physicists) as well as a few "post-modern" amateurs like myself.

I'm going to point a couple of those people at this statement of yours from the earlier post ...

"The problem goes back to the negative mass absurdity that falls from the Dirac Equation, and QFT's "ad hoc" assumption is what I meant when I previously said that science has assumed the flaw, rather than to fix it, which only carries and compounds the problems when extended to Quantum Gravity theories of all varieties."

... and ask if you can point to references to back it up.

Take care
Ian

island said...

Hello Ian, LTNS, how are you?

These two short articles sum it up, but I can do a lot better in person, and all of this information is also included in various locations within this blog:

The Second Law of Thermodynamics says "god" doesn't throw dice...

Real Objects of Negative Orthodoxy... not mass

I hope that "wisdom" isn't a code word for something that I should be afraid of getting associated with... ;)

Psybertron said...

Hi Island,

Firstly, like any new "group" openly discussing the limits to knowledge and "wisdom", it attracts some people with odd views, but I think you'll find enough professional academics and physicists in there prepared to accuse me of being post-modern and new-agey, so you should be safe at least to poke your nose in :-)

Some of the "physics" related topics were seen as tangential chatter, compared to the business of teaching wisdom in academia, so one or two people have taken such discussions off-line.

I'm trying to make connections between people who might understand each other.

Ian

Psybertron said...

Secondly, on the collapse of string theory.

Bearing in mind that I have only "folk knowledge" of the subject, I also saw it as a metaphor too far, with the reality of what might be expressed in its mathematics being too far from anything that could ground it in the empirical world.

The trouble is for someone like myself, who is interested in knowledge itself, rather than any particular science, you find that most things are metaphorical to some extent, and the empirical evidence indirect to some degree. Any theory or mathematics to back up such a thing is "explanatory" science, which may be quite different from the scientific methods used to "test" it. So a weird metaphor with incomprehensible mathematics isn't necessarily wrong; it may just be poorly explained. The public needs help, if "knowledge" is to evolve.

I know you see David Deutsch as one of those guys operating with the flawed basis of QM, but I see him as very interested in "explanation". I'd really like to find some way of getting him connected to your thoughts.

island said...

heh... that's too good to be possible.

The trouble is for someone like myself, who is interested in knowledge itself, rather than any particular science, you find that most things are metaphorical to some extent...

No reflection of nature is perfect, although some are more-absolute than others.

Are you saying that I should go somewhere and poke my figurative head in? Which is not the same as a figurehead... or is it?

Psybertron said...

You should also have received a forwarded copy of an e-mail I sent to FoW-Discuss .... I've linked them to you and you to them ...

People can choose where they poke their own figurative heads in :-)

Ian

Neil' said...

For an end-run around string theory that explains (the summary of) why space must have three dimensions, see my blog. (String theorists are still thrashing that question around.) A good critique of string theory fetishism, albeit a bit old, is in _The End of Physics: The Myth of a Unified Theory_ by David Lindley.

I wonder if ST or anything can really explain this: the *physical mechanism* of renormalization. Sure, they just deduct it with a rather ad-hoc math trick, but what in nature makes it happen? I don't see any good attempts to address this.

island said...

"This procedure consists in crossing out the results of certain calculations and replacing them by a description of what is actually observed. Thus one admits, implicitly, that the theory is in trouble while formulating it in a manner suggesting that a new principle has been discovered"
-Paul Karl Feyerabend on "renormalization"...

He was actually supporting the use of ad hoc assumptions until theory catches up, and you can't really argue with that for a reasonable amount of time, as long as it isn't forgotten that any assumption or projection is always up for review given new physics.

Problems occur when unproven assumptions get concreted into theory, where they then get accepted as unexplained fact just because the theory does some things extremely accurately. One small missing piece of the puzzle can turn assumptions and projections completely around to mean exactly the opposite of what is expected.

As with the second law of thermodynamics...

Neil' said...

I don't mind scientists saying that nature "somehow" deals with the infinities and then using a mathematical deus ex machina treatment called "renormalization." Presumably they deal with things as best as they can (and must.) My complaint is that they are not adequately candid about the implications thereof, pretending to the middle-brow public that things are essentially figured out and consistent in physical theory (putting aside overall reality/measurement issues like collapse of the wave function) except for outré weird aspects like dark matter, dark energy, maybe some offbeat particle questions, etc.

But, renormalization goes to the very heart of whether we understand the universe and its physical consistency at base. I expect either substantial work to be done (or has it?) addressing the issue of how *nature* does this, or greater frankness about the hokiness of just crossing the unwanted balances off the accounting sheet.

island said...

I agree, but I don't recall anyone denying, when pressed anyway, that this is a very important issue that must be resolved. That's why I pointed out "reasonable time": the passage of time indicates that the theory really is "in trouble", at an accelerating rate, because the whole idea was to let theory catch up and remove the necessity of the ad hoc assumption, and that "excuse" only flies for so long before it becomes an absurdity.

Infinities are indeed what is at issue, and as I understand it, the solution to the mystery of the mechanism lies in the cause for the gradient that exists between the vacuum and ordinary matter which defines the *actual* mass gap... or the reason for the actual energy-gap between the vacuum and the lowest lying particle excitation.

The spectrum of particle excitation isn't expected to have a continuum that extends all the way down to zero, and this expectation gets exaggerated when the vacuum has mass but is less dense than matter, since matter only appears over regions of space where rho is greater than zero.

Given a finite volume, the spectrum is discrete, but in this case the gap increases as the universe expands, so the cut-off point for the Landau pole where the coupling constant goes to *infinity* changes over time, in quasi-static fashion.

...and that, my friend, is a solution and the rest of the story, since the value of the energy scale depends on the age/size of the universe.

... and I can only imagine and hope that Dirac would approve of this large numbers solution to a long standing problem.

nigel said...

"I wonder if ST or anything can really explain this: the *physical mechanism* of renormalization. Sure, they just deduct it with a rather ad-hoc math trick, but what in nature makes it happen? I don't see any good attempts to address this." - neil

Yes, renormalization is adjusting the electric charge and mass to make the theory work. The electric charge of an electron is only given by the normal databook value (in Coulomb's law, and Gauss'/Maxwell's equation for electric field divergence) at large distances.

The charged particle loops in the vacuum get polarised. Virtual electron-positron pairs exist for a period of about 10^-21 second before annihilating, and in this time they can separate by up to 10^-12 metre.
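
As a rough sketch of where those orders of magnitude come from - this is nothing more than the uncertainty relation applied to the rest energy of a pair, and the script is mine, not a QFT calculation:

```python
# Back-of-envelope estimate of virtual e+e- pair lifetime and separation,
# using only dE*dt ~ hbar with dE = 2*(electron rest energy).
# Order of magnitude only.
hbar = 1.055e-34       # reduced Planck constant, J*s
m_e_c2 = 8.19e-14      # electron rest energy (0.511 MeV) in joules
c = 3.0e8              # speed of light, m/s

lifetime = hbar / (2 * m_e_c2)    # ~6e-22 s, i.e. of order 10^-21 s
separation = c * lifetime         # ~2e-13 m, within an order of magnitude
                                  # of the Compton-scale 10^-12 m quoted above
print(lifetime, separation)
```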

This is enough to allow them to get polarised around real (long-lived) electrons, with the virtual positrons being attracted and therefore on average closer to the real electron core than the virtual electrons, which are repelled and on average are further from the electron core.

The shell of polarized vacuum charge therefore has a net radial electric field vector which points in the opposite direction to the electric field vector from the real electron in the middle, and nearly cancels it out. What we see as the electric charge on an electron is the small residue which is not cancelled by charge polarization.

Penrose in "Road to Reality" speculates that the central bare electron has an electric charge equal to the observed charge multiplied by the reciprocal of the square root of alpha, i.e., 11.7e. However, when you look at the natural charge suggested by Heisenberg's uncertainty principle, it is 137 times the Coulomb-law value for electrons, hence indicating the bare electron core has a charge of 137e.

(See http://electrogravity.blogspot.com/2006/02/heisenbergs-uncertainty-sayspd-h2.html)

Close to the electron, the charge increases because there is less intervening polarized vacuum shielding, and so there should be a variation of some sort from an asymptotic minimum charge of e at long distances to a maximum value of either 11.7e (Penrose) or 137e near the middle when you get past the polarized vacuum veil effect.
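
Just to put numbers on those two candidate bare-charge factors, using nothing but alpha = 1/137.036 (a quick check, not part of any derivation):

```python
# The two suggested "bare charge" multipliers, in units of the observed charge e.
alpha = 1 / 137.036
penrose_factor = alpha ** -0.5   # 1/sqrt(alpha) ~ 11.7  (Penrose's speculation)
strong_factor = 1 / alpha        # 1/alpha       ~ 137   (the 137e suggestion)
print(penrose_factor, strong_factor)   # -> 11.70..., 137.036
```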

Renormalization is the failure of the mathematical solution to have the right asymptotic limits. The abstract QFT (including not just electron-positron polarization, but all the loops of other charges up to 92 GeV) suggests the electron charge is approximately

e + e[0.005 ln(A/E)],

where A and E are respectively upper and lower cutoffs, which for a 92 GeV electron-electron collision are A = 92,000 MeV, and E = 0.511 MeV. (Reference: http://electrogravity.blogspot.com/2006/06/relationship-between-charge-of-quarks.html and

The problem is the limits in this formula: the formula falsely predicts that the charge endlessly increases with collision energy (i.e., proximity to the electron core), which physically can't happen because you are going to have less and less polarization and eventually there won't be room for any pairs of charges to be polarized between you and the core.

It is also false because of the lower cutoff! Without the lower cutoff, the answer is infinity.

In theory, this is described as the problem that the entire vacuum of the universe should be polarized by a single electron.

Clearly you don't expect the lower limit of the cutoff to be zero in a logarithmic formula of this kind or you get infinity. So instead the electron rest-mass energy of 0.511 MeV is usually used as the lower cutoff.
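
As a purely illustrative sketch of how that formula behaves - the 0.005 coefficient and the 92 GeV / 0.511 MeV cutoffs are just the values quoted above, and the script is mine, not a textbook QFT calculation:

```python
import math

def effective_charge(upper_mev, lower_mev, k=0.005):
    """Effective electron charge in units of e, from the quoted running formula
    e * (1 + k*ln(A/E)), with upper cutoff A and lower cutoff E (both in MeV)."""
    return 1.0 + k * math.log(upper_mev / lower_mev)

# With the quoted cutoffs A = 92,000 MeV and E = 0.511 MeV:
print(effective_charge(92000.0, 0.511))   # ~1.06, i.e. about a 6% rise at 92 GeV

# Let the lower cutoff slide toward zero and the logarithm grows without bound -
# this is the infinity that the rest-mass cutoff is papering over.
for E in (0.511, 1e-6, 1e-20, 1e-60):
    print(E, effective_charge(92000.0, E))
```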

But this is unnatural, because there is no physical reason for it. Again, there is a large dose of engineering logic (common sense) missing from QFT.

It seems that there are two sources for the virtual charge creation-annihilation loops in the vacuum: the background energy density of the vacuum (gravitational gauge bosons for the spacetime fabric in general relativity, in a quantum gravity context), and the energy of the force field around a charge.

If the polarized vacuum around the electron core is mostly due to the background energy of the vacuum, you'd expect it to have an exponential shielding, so the charge at distance x from an electron core would be modelled by something like

e + (137 - 1)e.exp(-ax)

which has the correct limits of charge e at large distances and charge 137e at short distances.
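
A two-line check of those limits - the decay constant a is arbitrary here, since I haven't specified what sets the thickness of the polarization shell:

```python
import math

def shielded_charge(x, a=1.0):
    """The guessed exponential-shielding form, in units of e: e + (137 - 1)e*exp(-ax)."""
    return 1.0 + (137 - 1) * math.exp(-a * x)

print(shielded_charge(0.0))    # 137.0 -> the bare-core value at zero distance
print(shielded_charge(50.0))   # ~1.0  -> the observed charge e at large distance
```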

However, this semi-empirical Dirac sea model lacks a concrete mechanism for why the vacuum polarization doesn't shield 100% of the electron charge, leaving the electron neutral.

The alternative is that the charge pairs, which contribute to the polarised vacuum shielding of the electron core, are those produced by the field itself.

This would suggest a more complex relationship than the simple exponential, and the logarithmic result from abstract QFT is more acceptable.

You could then argue that the lower cutoff limit exists because there is a mechanism: beyond a certain distance, the electric field of the electron is too weak to produce and polarize pairs of charges in the vacuum.

I'm interested in the exact physical dynamics of this effect because I think it will explain force unification at high energy without SUSY speculation. It should also make checkable predictions, unlike SUSY.

For instance, since the quarks in a hadron are close together, they should share a mutual polarized vacuum. This should produce a greater shielding effect from the polarized vacuum (due to the higher energy density) than occurs around an electron if d, u quarks have electric charges of integer units, with the increased vacuum polarization shielding these down to effectively the fractional contributions in the Standard Model (-1/3, +2/3).

In this case, the fractional charge values of quarks have a natural physical explanation, and the charge reduction seen at large distances may be balanced by the energy of the nuclear binding forces (color and weak) at short distances.

If you think about force unification in terms of distance instead of interaction energy, you start to get into real, practical physics models. For instance, what happens to gauge boson energy when the charge varies with distance? It's pretty likely that the cause of unification at extremely high collision energies (near the middle of a particle) is conservation of energy for the different force gauge bosons.

The Yang-Mills interaction picture seems to completely neglect the physical dynamics of how the energy of exchange bosons is conserved. When a force strength (alpha) varies as a function of distance, the exchange energy passing through the surface must remain the same. Think about Green's theorem or hydrodynamics when contemplating exchange radiation. If you physically shield the charge (the gauge boson exchange dynamics), by vacuum polarization or whatever, the energy has to go either into heating the shield up or into another force.

This is just what is observed in high energy physics: weak and electromagnetic charges increase with interaction energy, and the strong force coupling strength falls. Therefore, you'd expect unification without SUSY from conservation of charge energy.

If the strong force falls because the electroweak force is rising, then eventually you get perfect unification, unlike the picture in the Standard Model (minus SUSY) where the forces cross over and go on increasing or falling indefinitely.

There is too much prejudice in favor of an abstract solution to the final theory. I think the guys responsible must be the real geeks who can integrate any function in their heads, but don't have the time to think about whether extra dimensional speculation is really relevant to the problem of modelling effects in a cloud of charges around a particle.

Neil' said...

Thanks for the reply and all the work you put into it, but... I thought that it was "infinities" we were trying to get rid of. I have heard the explanation you provided before. However, the masking by virtual particles should just reduce the amount of trouble, not eliminate it, if my rather sketchy middle-brow understanding is correct. You realize that the excess field energy that comes from integrating down below the "classical electron radius" is still going to be excess, just not by as much, as we approach the electron below this radius - and scattering experiments show the electron to be like a point down to much smaller than the CER.

BTW - Where can I see a nice chart of the measured and/or predicted field around an electron at distances comparable to the CER? It should be available from scattering experiments.

island said...

Talking past each other is common to every physics conversation that includes pet theories, but I'd have to point out to all that every point is moot until somebody proves that Einstein was wrong, because the information and links that I've provided indicate that he was not.

Neil' said...

Island: I don't think what we are arguing about is contingent on whether Einstein was wrong - in what sense would it? I don't know what he said about the electron etc. infinities anyway, but he was suspicious of QM.

island said...

Actually, Einstein was very instrumental in the early development of quantum theory; it was uncertainty and infinities that he was highly suspicious of, since they do not fall most naturally from extensions of general relativity to our universe.

As I have previously indicated, he had very good reason to be, when it is noted that he was unaware that matter generation from vacuum energy **drives expansion** in his finite model (via vacuum rarefaction), while this effect gets offset by the increase in positive gravitational curvature that you get when you also proportionally condense the matter density from the vacuum energy before Feynman takes over.

Anyway, doesn't it require infinite potential energy for renormalization to encounter a problem with infinities? A finite universe only has finite energy, so the field energy is finite, except that in this case expansion means that the energy is constantly increasing in proportion to gravity, when you increase both the matter density and the negative pressure in disproportionally equal, "see-saw" fashion.

Anonymous said...

EINSTEIN'S SIN

The experiment of Michelson-Morley should have led to two competing interpretations:

1. As far as the speed of light is concerned, Newton's particle model of light is correct. The speed of light is variable, c'=c+v, where c is the speed of photons relative to the light source and v is the relative speed of the light source and the observer. This interpretation is simple, even trivial: no miracles (time dilation, length contraction etc.) can be introduced.

2. The speed of light is constant, c'=c, independent of v, the relative speed of the light source and the observer. In this case miracles (time dilation, length contraction etc.) are obligatory - without them the falsehood of the principle of constancy of the speed of light would be obvious.

The first interpretation is true, the second wrong, and yet the second was adopted. That was the beginning of a wrong science of course but by no means a sin. The sin started when Einstein implicitly introduced the true c'=c+v interpretation, thereby obtaining correct results (e.g. the frequency shift factor), and conserved the false principle of constancy of the speed of light plus appended miracles, thereby destroying the rationality of generations of scientists.

In 1911 Einstein showed that in a gravitational field the speed of light is variable and advanced the formula

c' = c(1 + V/c^2)

where V is the gravitational potential. One can apply the equivalence principle as shown in

http://www.courses.fas.harvard.edu/~phys16/Textbook/ch13.pdf pp.2-4

Note that V=gh=cv. Substitute this in Einstein's formula and you obtain c'=c+v.
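
Written out, taking the stated relation V = gh = cv at face value:

c' = c(1 + V/c^2) = c(1 + cv/c^2) = c + v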

Pentcho Valev

Little Miss Anthropic said...

I stand corrected. I meant 'introductory' post. Thank you. You are quite the stickler, huh? Well, darling...then you may want to correct FINACIAL TIMES on your link. :)

island said...

I'm not real good with money... ;)