New models for old

January 21, 2023

About two years ago I posted a paper on the arXiv in which I argued that the correct foundation for particle physics is the real group algebra of the binary tetrahedral group, of order 24. The numerology is correct, as the standard model contains 12 elementary bosons, splitting 1+3+8 (the photon, the three weak bosons and the eight gluons), and 12 elementary fermions, that is 6 leptons and 6 quarks. The group algebra naturally splits into components which individually provide the gauge groups of the standard model, including the Lorentz group, but with one fatal flaw – the gauge group for the strong force appears in the split real form SL(3,R) instead of the compact real form SU(3). At the time, this was considered by my critics to be a deal-breaker.

But times have changed. The octions paper published last year also uses SL(3,R) for the strong force, and explains how it is possible to use a copy of U(1) to complexify the representations, but not the group itself, and argues that this is sufficient to do everything that the standard model requires. If so, then I can return to that original model, and forget all about the various other models I have worked on in the past two years. The new lamps may be bright and shiny, but they don’t contain the old magic. So let us summon the djinni and make a wish: I wish for a quantum theory of everything.

No sooner said than done. Unimaginable riches pour out of this poor little old lamp. In particular, the group that emerges is the direct product of U(1), SU(2), SL(3,R) and (U(1) x SL(2,C))/Z_2. The group that is used in the octions paper is exactly the same, except without the separate copy of U(1). So I can mix the two copies of U(1) together, and then follow the recipe in the octions paper, and the whole standard model drops out.
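For readers who want to check where this product comes from, it is just the Wedderburn decomposition of the real group algebra, which follows from the ordinary character theory of 2T = SL(2,3) – standard textbook material, not specific to my model:

```latex
% 2T = SL(2,3) has complex irreducible representations of dimensions
% 1, 1, 1, 2, 2, 2, 3: the trivial one is real, the other two linear
% ones form a complex-conjugate pair, the faithful 2 is quaternionic,
% the other two 2s form a complex-conjugate pair, and the 3 is real.
% Hence the real group algebra decomposes as
\[
  \mathbb{R}[2T] \;\cong\; \mathbb{R} \,\oplus\, \mathbb{C} \,\oplus\,
  \mathbb{H} \,\oplus\, M_2(\mathbb{C}) \,\oplus\, M_3(\mathbb{R}),
\]
% with real dimensions 1 + 2 + 4 + 8 + 9 = 24.  The last four summands
% contain the groups U(1), SU(2), (U(1) x SL(2,C))/Z_2 and SL(3,R)
% respectively -- the split form SL(3,R), not the compact SU(3).
```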

But why would I want to do that, and use all the ghastly complications of E_8 and its 248 dimensions, when everything I need is already in the 24-dimensional group algebra? Simplify, simplify, simplify. The fermionic part of the group algebra consists of two Dirac spinors acted on by SL(2,C), plus one weak isospinor acted on by SU(2). The Dirac spinors represent elementary particles (fermions) in the standard model, but the isospinor does not. Why not? As soon as you put any spinor or isospinor into space, you tensor with the 3-dimensional space representation, and you get the sum of a Dirac spinor and an isospinor. What is this isospinor doing? Why isn’t it a particle?
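The character theory of 2T makes this precise. Here is a sketch of the computation, writing 2 for the quaternionic isospinor representation and 2', 2'' for the complex-conjugate pair that together form a Dirac spinor:

```latex
% Tensoring any of the 2-dimensional representations of 2T with the
% 3-dimensional space representation gives all three of them back:
\[
  2 \otimes 3 \;\cong\; 2 \,\oplus\, 2' \,\oplus\, 2'' ,
\]
% and likewise with 2' or 2'' on the left.  So putting a spinor (or
% isospinor) into space always yields a Dirac spinor (2' + 2'') plus
% an isospinor (2), never a Dirac spinor on its own.
```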

Good question. The answer, I have discovered, is that it is a neutrino. The neutrinos are the fermionic manifestation of the weak force. They appear whenever the weak force indulges itself in destroying mass in radioactive frenzy. Therefore the fermionic representation associated to the weak gauge group SU(2) cannot be anything other than a neutrino. A neutrino, therefore, is not a Dirac particle. Nor is it a Majorana particle. Nor is it a left-handed Weyl particle. It is something else entirely.

Dirac particles participate in electromagnetic interactions. That is what the Dirac spinors, Dirac algebra and quantum field theory are for. The neutrinos do not participate in electromagnetic interactions. They do not belong in the Dirac spinors, they are not acted on by the Dirac algebra, they do not participate in quantum field theory. They are something else entirely. The standard model identifies the neutrino “isospinor” representation with the left-handed part of the Dirac spinor. This confuses two different representations of the binary tetrahedral group, and is WRONG.

Undo this ghastly error, and the whole beautiful, elegant and unbelievably simple theory of everything just pops out, pretty much as I explained it two years ago. Of course, I understand it much better now, so I’ll try to explain it again, without so many mistakes, and without so many irrelevant distractions. For example, the change from SU(3) to SL(3,R) is not a bug but a feature – the split form permits the introduction of five apparently independent masses for elementary particles. These five masses live in a spin 2 representation – but it is frankly absurd to interpret this representation as a spin 2 graviton, or even as a set of five gluons. Again, the separation of the real isospinor from the complex spinor entails a separation of a real scalar from a complex scalar, which implies the separation of a real (gravitational) mass from a complex (electromagnetic) mass.

What we lose is special (and general) relativity. Extraordinarily we find that there is no mixing of space with time in the quantum world. Absolutely none. Space is space and time is time and ne’er the twain shall meet. What we gain is a quantum gravity that (unlike general relativity) is consistent with the broad features of galactic and cosmological dynamics. Are you prepared to give up special relativity for this prize? You should be. You are allowed to keep the observational consequences of special relativity, including time dilation, length contraction, and mass increases. But you are not allowed to keep the group SO(3,1). You must instead use the group GL(3,R) for these purposes. This is because SO(3,1), like SU(3), is NOT a subgroup of the group of units of the group algebra.

You must therefore give up all the theories of fundamental physics developed during the 20th century. ALL of them. It is not that they are especially “wrong” in practice – they describe the universe pretty well. But they are based on fundamentally wrong-headed physical principles. The first fundamental principle (of special relativity) is that time and space are different aspects of the same thing. The second fundamental principle (of quantum mechanics) is that neutrinos and matter are different aspects of the same thing. The third fundamental principle (of general relativity) is that inertia and gravity are different aspects of the same thing.

All these fundamental principles are wrong. Time is not space, and the two do not mix. Neutrinos are not matter, and the two do not mix. Inertia is not gravity, and the two do not mix.

The kilogram

December 27, 2022

In 2019 the official definition of the kilogram changed from the mass of a particular “standard” lump of platinum/iridium alloy, to a defined value of Planck’s constant (namely 6.62607015 x 10^-34) in kilogram-metres-squared-per-second. This marked the end of a long process of transformation from a gravitational definition of mass to an inertial definition. It is the same process that happened earlier to the metre – once defined as the distance between two marks on a particular bar of platinum-iridium alloy, it long ago acquired a more accurate definition via a defined value of the speed of light in metres-per-second.

What is wrong with that, you might ask? If we need more precise definitions as time goes on, surely we are allowed to change the definition from time to time? Yes, up to a point. But you have to be sure that you haven’t at the same time changed the concept that you are measuring. This is a subtle but very important point that is never satisfactorily addressed by physicists. Philosophers sometimes concern themselves with this issue, but physicists don’t listen to philosophers.

Let us first consider the metre. The original definition allowed the Michelson-Morley experiment to determine that the speed of light in a vacuum was independent of the velocity of the observer, at least in the limited circumstances of the experiment itself. The new definition requires the speed of light in a vacuum to be independent of the observer under all circumstances. What if this isn’t actually true in the real universe? Well, in a sense this is a meaningless question. The question really is, could a more general version of the Michelson-Morley experiment detect subtle differences in the speed of light as measured by different observers, if these observers are accelerating with respect to each other?

The new definition of the metre pre-judges this issue, and assumes that this could not happen in principle. This is a very dangerous attitude to take. This is the King Canute syndrome – physicists presuming to tell the universe how to behave. For practical purposes, it doesn’t matter, because the speed of light is sufficiently constant on and around the Earth that the metre so defined is sufficiently accurate for all practical purposes. But what happens if you send a spacecraft to the outer edges of the Solar System? Can you be sure that the metre stays the same length on that scale? Can you be sure that your spacecraft remains the same size? The honest answer is no, you cannot be sure. But don’t expect a physicist to give you this answer.

If you insist on defining the speed of light to be constant in this way, then you may be forced to measure a changing metre as you travel through the universe – this leads to the “curvature of spacetime” that is the modern interpretation of General Relativity. Not Einstein’s interpretation, by the way. If, however, you regard this definition as unsafe, because it is based on hidden assumptions that are not adequately supported by experiment, then you are labelled a crackpot. In this sense, I am a crackpot – although I prefer the term “sceptic”.

Now let us consider the kilogram. Originally, masses of objects were compared to the standard mass(es) with weighing scales, which determine which of two masses is the greater. If the masses are sufficiently finely balanced that you cannot tell which is heavier, you say the masses are equal. So if you have two masses which balance each other, and which added together balance a kilogram, you can say they are half a kilogram each. And so on. But each time you halve the mass, you introduce extra uncertainty. So very small masses are very difficult to measure accurately. That is the main reason why the “standard kilogram” eventually became inadequate for modern purposes.
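A toy calculation makes the point. Here is a minimal sketch in Python, assuming (purely for illustration) a reference kilogram known to one part in 10^9 and a balance that adds about a microgram of error per comparison; the relative uncertainty roughly doubles with each halving:

```python
# Toy model of subdividing a standard mass by repeated halving.
# The absolute numbers are illustrative assumptions, not real
# metrological data; the point is how the *relative* error grows.
mass = 1.0            # kg, the standard
uncertainty = 1e-9    # kg, assumed uncertainty of the standard (1 ug)
balance_error = 1e-9  # kg, assumed error added by each comparison

for step in range(1, 11):
    mass /= 2
    # halving splits the inherited uncertainty between the two pieces,
    # but each weighing adds its own comparison error on top
    uncertainty = uncertainty / 2 + balance_error
    print(f"1/{2**step} kg: relative uncertainty {uncertainty / mass:.1e}")
```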

Weighing scales of this general type, with cast iron “standard” weights, were in universal use in shops, markets and kitchens when I was a child. Such scales compare the (local) gravitational effects on the masses directly, and therefore measure the (local) gravitational mass. There was another type of weighing machine, based on a spring of some kind, which worked by balancing a gravitational force against a mechanical force. The mechanical force is ultimately an electromagnetic force between the atoms of the mechanism being employed, so that these weighing machines measure a “compromise” gravitational/electromagnetic mass.

Experiments, of course, support the idea that gravitational and electromagnetic masses are the same, to within certain tolerances, on or near the Earth. But that is not the same as saying that they are the same, exactly, universally throughout the cosmos. Modern physics assumes they are the same, universally. I say this is an unsafe assumption. It is not adequately supported by experiment.

If you define gravitational and electromagnetic mass to be equal in this way, then you may be forced to measure mass differently for different observers. If you refuse to do so, you may be forced to conclude that the universe is full of invisible “dark matter”, to account for the discrepancy between the mass that we measure locally on Earth, and the mass that is measured locally in a distant galaxy. If, however, you regard this definition of mass as unsafe, you are regarded as a crackpot. In this sense, I am a crackpot, because I don’t believe in “dark matter”, and I do believe that different types of weighing machines are measuring different types of mass. I prefer to say I am a sceptic, because I listen more to the experimental evidence than I do to my initial hunches and inherited assumptions and prejudices. To me, a belief in “dark matter”, without experimental evidence, is the mark of a crackpot.

Now to get back to the kilogram. In the century or so that the International Prototype Kilogram in Paris was used as the definition, many copies were kept around the world, and calibrated and re-calibrated from time to time; some curious changes in the relative masses have been observed. While many of these changes can be explained by differences in cleaning or lack of cleaning, it is not clear that all the changes can be explained in this way. Perhaps they can, but for the moment I remain, shall we say, sceptical. It is important to note that the calibrations are not done purely gravitationally, because such calibration is not sufficiently accurate. It is done, therefore, with a mixed gravitational/electromagnetic experiment. Perhaps the ratio of gravitational to electromagnetic mass actually does differ very slightly in different parts of the world? Perhaps it actually does change very slightly over time in any given place on Earth? Either of these effects might be enough to explain any observed discrepancies that cannot be attributed to more mundane causes.

Tensor-vector-scalar gravity

December 9, 2022

There seems to be some excitement in the MOND (“MOdified Newtonian Dynamics”) community about a new version of tensor-vector-scalar gravity that appears to have better consistency with observations than earlier versions. I am not competent to join this debate, but I can tell you roughly how tensor-vector-scalar (TeVeS) models of gravity work, why they are relativistic versions of MOND, why they are extensions of General Relativity, and how they are related to my icosahedral model. In particular, the latter can be construed as containing a quantised version of TeVeS. However, I should warn you that my model only contains the basic algebraic structure, and does not contain all the technical details of the Lagrangian and other things that are needed in order to make quantitative predictions.

TeVeS, as originally proposed by Bekenstein in 2004, contains the metric tensor, which forms the basis for Einstein’s general theory of relativity; plus a vector, which replaces the concept of dark matter in standard cosmology; plus two scalars, which replace the concept(s) of dark energy and the cosmological constant. Since this is a relativistic model, vectors have four components, and the metric tensor has ten. The latter breaks up as 1+9 under the action of the Lorentz group, but is irreducible under general covariance. Thus there are fields of dimensions 1+1+4+9 for gravity, in addition to the 3+3 for electromagnetism.

In my model, there are five types of fields, of dimensions 1+3+3+4+5, which I label 1, 3a, 3b, 4a and 5. The 4a fields are vector fields, and the rank 2 tensors are as in the standard theory, that is antisymmetric for electromagnetism and symmetric for gravity. The antisymmetric tensor breaks up as 3a+3b, which is related in a complicated way (which I’ll come back to) to the splitting of electromagnetism into electric and magnetic fields. The symmetric tensor breaks up as 1+4a+5, and for a complete quantum theory of gravity we need to understand these three components separately.
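In other words, using the labels just defined, the splitting can be written compactly as follows (this restates the decomposition above, nothing more):

```latex
% The square of the vector representation 4a splits into an
% antisymmetric part (electromagnetism) and a symmetric part (gravity):
\[
  4a \otimes 4a \;\cong\;
  \underbrace{3a \oplus 3b}_{\text{antisymmetric}} \;\oplus\;
  \underbrace{1 \oplus 4a \oplus 5}_{\text{symmetric}},
\]
% with dimensions 16 = (3 + 3) + (1 + 4 + 5).
```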

General relativity treats the symmetric tensor as a unified whole, but partly separates off the scalar (1) by introducing a cosmological constant. Observation indicates that the cosmological constant probably isn’t constant, so that TeVeS introduces a second (dynamical, i.e. not constant) scalar to deal with this. This scalar field can then be adjusted to reduce the symmetric tensor to 4a+5. Next we need a vector field (4a) to split the 4a component from the 5 component. After that we have all the fields that are necessary for a complete theory of gravity, according to my model.

The devil is then in the details of the dynamics of these fields. These details differ between different versions of tensor-vector-scalar gravity, and hence lead to different predictions, and different degrees of agreement with observation. The new version that people are getting excited about is called AeST (Aether-Scalar-Tensor), but “Aether” is just a name for a particular type of vector field.

Now let me tell you how such models can be derived from my icosahedral model. Let us start with the vector fields (4a). The basic one in classical physics is spacetime itself, together with its dual (in the Hamiltonian sense) energy-momentum. In quantum mechanics these become dual in the Heisenberg sense, and become (partly) “quantised” – which means that spacetime remains continuous, but momentum-energy changes become discrete. In my model there are other quantised vector fields, of which the most important is the “matter” field, which is quantised into four particles – the proton, the electron, the muon and the tau – that is, the proton and the three generations of electron. It is this vector field that replaces dark matter in standard cosmology, just like the vector field in TeVeS.

The fields that describe the interaction between matter particles are therefore the components of the rank 2 tensor, that is 4a x 4a = 1 + 3a + 3b + 4a + 5. In macroscopic terms, 3a+3b is electromagnetism and 1+4a+5 is gravity, as already explained. But at the quantum level we need the individual components. Neither classical electromagnetism nor quantum electrodynamics contains the full subtlety of the difference between 3a and 3b, so let us discuss this further. Any reflection in spacetime (4a) acts by swapping 3a with 3b. Therefore they are chiral pieces of the theory. This chirality is incorporated into the standard model by introducing a chiral “weak force” to describe how the electron and proton combine to form a neutron (plus a neutrino). This introduces a field of type 3a (say), separate from the 3a+3b of electromagnetism. Hence in the real world the (weak) 3a mixes with the other (electromagnetic) 3a.

Thus the standard model has 9 dimensions of electromagnetic-weak fields to deal with this. My model has only 6, and deals with electro-weak unification in a completely different way. Essentially, the splitting into 3a+3b is what happens at the quantum level, whereas the splitting into electricity and magnetism is what happens at the classical level. The two things are the same “real” 6-dimensional field, but interpreted differently in different contexts. The details are complicated, and I haven’t completely sorted them out, but the important point is that splitting into electric and magnetic fields depends on our choice of coordinates for spacetime, whereas the splitting into 3a+3b depends only on the “fundamental” particles – proton, electron, muon, tau.

In particular, the local characteristics of electromagnetism depend on the way that the momentum-energy vector relates to the matter vector. These local characteristics (which have no fundamental reality, but are just parameters of our chosen coordinate system) are essentially just the mass/charge ratios of the four particles. One can, of course, use these local parameters globally, so that electromagnetism is the same for all observers, but then one has to be very careful about how one interprets these parameters. In particular, one must not jump to conclusions about how these parameters would be interpreted by hypothetical intelligent observers on Mars, or on Uranus, or on Earth in the Jurassic Era. My critics unfortunately jump to false conclusions here, which is why they reject my model out of hand, without even thinking about it.

In any case, we now have to apply the same logic to gravity. TeVeS works in an analogous way to electro-weak unification, by adding extra scalar (1) and vector (4a) fields to the full tensor (1+4a+5), in order to split the latter into its components. We therefore have an analogous problem of gravi-weak unification to solve. TeVeS doesn’t do this, as it is an entirely classical field theory. My model attempts to do this, but the details are not worked out. Again we have to work out a relationship between the matter field (4a) and the spacetime field (another copy of 4a), but this time the relationship is independent of charge, and is a mass/energy relationship for the four particles. The difference is that “mass” now means “gravitational mass”, whereas before it meant “inertial mass”.

At this point we are in serious trouble if we fail to distinguish between these two different types of mass. They can only be equated locally, that is for a specific choice of spacetime coordinates. As we move through space and time, our choice of coordinates changes, and the equivalence of gravitational and inertial mass drifts. Local experiments cannot, even in principle, distinguish between gravitational and inertial mass. But non-local experiments can, do, and have done, repeatedly over the course of the past century or more.

Finally, note that the vector and scalar fields of TeVeS are not separate from the tensor field – they are part of it. Just as the weak force field is part of the (quantised) electromagnetic field. The scalar, incidentally, is not a cosmological constant, or dark energy. It is local neutral matter – quantised as the neutron. The most important part of gravi-weak mixing is therefore the “angle” between our local concept of neutral matter (the Earth) and the quantised version (the neutron). If we approximate the Earth by a sphere of iron, then we can approximate the neutron field as 26/56 of the total matter field. This proportion must be measured in the fermion field, not the boson field, so that while we get a splitting of the bosons given by sin^2 and cos^2 in the usual (Pythagorean) way, we must measure it with an angle whose sine is 26/56 – approximately 27.7 degrees.

Of course, the Earth is not a perfect sphere of pure iron, so this value is only an estimate. But it gives a pretty good estimate of the proportion of neutronian gravity to total gravity of sin^2(27.7) = 22%. What does this angle mean? It is an angle that describes a local relationship between classical gravity and quantum gravity. It is an angle that must appear in particle physics, if quantum gravity is compatible with, or emerges from, the particle physics we know. Weinberg angle, anybody?
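Anyone can check the arithmetic; a few lines of Python suffice (the 26/56 ratio is my model assumption; the comparison value of the weak mixing angle is the standard on-shell figure):

```python
import math

# Iron-56: atomic number 26, mass number 56.
ratio = 26 / 56
angle = math.degrees(math.asin(ratio))
print(f"arcsin(26/56) = {angle:.1f} degrees")   # ~27.7
print(f"sin^2 of that = {ratio**2:.3f}")        # ~0.216, i.e. ~22%

# For comparison, the measured on-shell weak mixing angle has
# sin^2(theta_W) ~ 0.223:
print(f"theta_W ~ {math.degrees(math.asin(math.sqrt(0.223))):.1f} degrees")
```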

The mysteries of spin

November 24, 2022

There’s a new post with this title over at Peter Woit’s blog, in which he “explains” spin in the traditional way using the group SU(2), as has been done consistently since the early 1920s. Well, when I say “consistently”, the problem is that it isn’t consistent. But the inconsistency doesn’t become apparent until you’ve moved a long way from these fundamentals, so that the cause of the inconsistency is no longer obvious. What appears to me to be the problem is that this theory tries to explain spin without a concept of time. It is impossible to describe classically rotating objects without time, and I believe it is equally impossible to describe quantum spin without time.

The question then is, what sort of time do you need for quantum mechanics? Woit makes a big thing in other places about using Euclidean spacetime for quantum mechanics, rather than Minkowski spacetime, which is appropriate for relativity. It makes a big difference to the mathematics which one you use. Philosophically, the question is, does an individual elementary particle have a concept of time? Philosophically, my answer is “no”. My reasoning is that in order to measure time, one needs a clock, and for a clock, one needs an atom.

For precise measurements of time on Earth, a caesium atom is ideal. It ticks so fast that by counting the ticks one can measure time extremely accurately. But a hydrogen atom is big enough – it is used to measure the age of stars, galaxies and the universe in general. However, once you pull apart the electron and the proton, and treat them in isolation, the clock is broken. It no longer ticks. At least, that is my view. Others believe that the electron still ticks even if nothing observes it. Others believe that the electron experiences a continuous time. But let’s not get involved in this philosophical argument. I shall show you that in either case, the standard mathematical description of spin is wrong. The correct descriptions are quite different in the two cases, but neither of them resembles the standard 1920s picture.

Let’s assume first of all that the electron ticks, but that it cannot remember how many times it has ticked. The group SO(3) does not tick. The group SO(3,1) cannot tick if time goes in one direction. The only orthogonal group in four dimensions that ticks is SO(4). It ticks because it is a commuting product of two copies of SU(2), and the electron ticks like a pendulum clock, between the left-handed state and the right-handed state. The group that you need to describe an electron in isolation is therefore SO(4). Not Spin(4). Not Spin(3,1). Not SO(3,1).

Now let’s assume instead that the electron has a continuous concept of time. If it ticks, it can count the ticks. If it doesn’t tick, it feels time in some other way. Either way, it requires a copy of the real numbers (or the integers) in its symmetry group. Since we cannot measure the direction of spin, it only requires a group SO(2,1), not SO(3,1). But it must have the double cover of SO(2,1), that is SL(2,R), because it has spin 1/2. Now the group theory is the same as for a classically rotating particle, such as the Earth, if you distinguish day from night. Of course, you cannot say that the Earth is in the “day” state or the “night” state, because that depends on the observer. But any particular observer or experiment can distinguish the two states very clearly.

Likewise, I am not saying that the electron “is” a classically rotating particle, but the underlying group theory is the same. Now the important bit is what happens when an electron and a proton interact: what bigger group do their two copies of SL(2,R) generate? Dirac (1928) assumed (and I emphasise this, assumed) that they generate SL(2,C). But is this correct? This is not how classically rotating particles interact, so why should we think that quantum spinning particles interact in this way?

Classically rotating particles, like the Earth and the Moon, generate tides, which are spin 2 effects, whereas SL(2,C) has only spin 1 force fields (electric and magnetic). The interaction group for classical rotating particles is SL(3,R), which consists of a spin 1 gravitational field and a spin 2 tidal field. So should we not rather assume that an electron and a proton behave in the same way, and that their interactions are described by SL(3,R) rather than SL(2,C)? Of course, no-one listens to this idea, since SL(2,C) has worked perfectly well since 1928, so why would anyone want to change it?

The reason we might want to change it is that it doesn’t work accurately enough for muons. Anyway, I’ve offered two alternatives, depending on your philosophy. Either SO(4) or SL(3,R). But not Spin(3,1)=SL(2,C) under any circumstances. My icosahedral model offers both at once, so that you can do quantum mechanics and relativity at the same time, in a consistent way.

Peer review

November 17, 2022

It is no secret that the system of peer review that is supposed to keep the progress of science on the straight and narrow is badly broken. This isn’t a new problem – it is almost as old as the history of science itself – although it certainly appears to be worse now than it was a few decades ago. It is in essence one of those “unconscious bias” problems. Unconscious biases against the non-male or the non-white are not the only ways in which peer review exerts a malign influence on the progress of science. Unconscious bias against the innovative is equally culpable.

Ask any individual peer-reviewer, and they will deny any such bias. But it is an established fact that is proved in the aggregate. Research Councils in the UK woke up to the fact many years ago, when they realised that they were funding only safe, predictable and boring research, and the exciting and innovative stuff was no longer happening. They reacted by forcing their reviewers to evaluate innovation and speculation as positive rather than negative. But it hasn’t had enough of an effect, and it hasn’t had any effect on universities, which stifle creative innovation coming from below, and impose stultifying “innovation” from above.

Another demonstrable unconscious bias of peer review is a bias against interdisciplinary research of all kinds. This is effectively a bias of peer reviewers against all disciplines other than their own. Research Councils and other research funders have to react by specifically diverting funds into interdisciplinary research. But these funds are often not taken up, because interdisciplinary scientists are squeezed out of their jobs by the insidious effects of peer review within universities. I know this from personal experience – by 2014 I had moved whole-heartedly into an interdisciplinary area between mathematics and physics – and within three years, my academic career was at an end. I am not alone – this is a generic problem for interdisciplinary research, caused by the mechanisms of peer review, and needs to be addressed at a fundamental level.

Truly innovative thinkers are always discriminated against in academia, which is heavily biased in favour of the status quo, and likes a comfortable existence. Anyone who threatens to rock the boat in any way is punished, often by being thrown out of the boat and left to drown. New ideas are only allowed to come from the top, never from the bottom.

The worst part of peer review, however, happens not in research funding bodies or universities, but in journals. Journals have ceased to be a vehicle for dissemination of ideas, as they once were, and have become a vehicle for profit and status. They still use what they call “peer review”, which is a form of slave labour in which academics provide their work for free to keep the journals making profits. Peer reviewers often take their revenge by writing shoddy, ill-informed reports in which their unconscious (or conscious) biases have free rein to do their worst, under cover of anonymity.

I know this, because I have just received such a report. The reviewer bases their recommendation on the assumption that the paragraphs labelled “Speculative remark” in my paper form the “main point”. Now the universal convention in the scientific literature is that paragraphs labelled “Remark” are peripheral to the main text, and can be omitted without damage to the main arguments. They never form the “main point” of anything. The journal was the Journal of Mathematical Physics, whose editors ignored the grounds on which I submitted an appeal, and simply repeated the insulting and untrue comments of the reviewer.

Unfortunately, taking time to read a scientific paper properly, to understand what it says and critique it fairly, is something that no academic these days can afford to do – there simply isn’t time, and there is no credit to be gained from it. So academics have largely given up doing it, and resort to quick short-cuts that make the unconscious bias problem much worse than it used to be. As a result, peer review no longer works. It is a system that is no longer fit for purpose, and must be abandoned if scientific progress is to resume.

Peer review creates large herds of scientists who are unwilling or unable to think for themselves. Real progress in science relies on the lone wolf who thinks outside the box. The wolf is extinct in the UK, and progress in science is going the same way.

Cooking

November 13, 2022

I don’t claim to be an expert on cooking, but it seems to me that there are essentially two schools of thought: there are those who look at the recipe first, and then get the ingredients; and there are those who look at the available ingredients, and then find the recipe. I belong firmly to the second school of thought. If I have to invent a new recipe that no-one has thought of before (unlikely, but still) then that is what I will do. Some culinary disasters undoubtedly arise from this strategy, but some remarkable successes can also arise.

Much the same applies to theories of physics. There is the theoretical school of thought, that looks at the textbooks, and tries to find the ingredients (e.g. curved spacetime, dark matter, spin 2 gravitons, supersymmetry, etc etc), with conspicuous lack of success. There is the practical school of thought, that looks at the evidence provided by experiment, and tries to find a way to cook up a theory that looks and tastes like the universe we observe.

Now in my cupboard I have a lot of ingredients that many people consider inedible. You do have to be careful with them, because they can be poisonous if not cooked properly. Acorns, beech nuts, mahonia berries, laurel berries, fuchsia berries – all grow in my garden, and can be eaten if you know how to treat them. But why would I need to, when we’ve had the best apple season for years?

Theoretical physics uses a number of ingredients that many people consider inedible. You do have to be careful with them, because they can lead you astray. Differential geometry, spin connections, chiral spinors, Clifford algebras, gauge groups – they all grow in my garden, and can be useful if you know how to treat them. But why would I need to, when my group algebras provide the tastiest apple pie theories you could ever hope for?

Of course, those physicists who are looking for ever more exotic ingredients that grow only in some as yet undiscovered (and probably mythical) spice islands are not interested in something as simple, straightforward and wholesome as apple pie. But, trust me, if you understand apples as well as I do, and as well as Newton did, then you have no need to look any further.

Weighing the Earth

November 1, 2022

The Cavendish experiment to weigh the Earth, conducted in 1798, is one of those iconic experiments that everyone should know. It was the definitive test of Newtonian gravity – it proved beyond reasonable doubt that not only are the apples attracted to the Earth, but they are also attracted to each other. I remember this experiment being done in physics lessons at school, and the sense of awe I felt that you could actually measure the gravitational pull of one stainless steel ball on another. I do remember that we were a little disappointed at the level of accuracy we could obtain, but I have since learnt that this is a feature of the Cavendish experiment that applies even to the best experts. We may have struggled to get 2 significant figures (which is what Cavendish got), but we certainly got 1, and nobody convincingly got more than 4 significant figures before the 21st century.

There are almost no other methods of directly measuring active gravitational mass, although some variations are now used, and accuracy is improving as a result. But as accuracy improves, the inconsistency between different experiments becomes more noticeable. It is already serious enough to suggest that there are errors in the experiments that are not understood. It is not yet at the level of actual disproof of the theory of gravity. But it may not be far off that.

So what can theory do to help us understand this situation? The real problem with gravity is that matter consists of electrons, protons and neutrons, and the proton and neutron masses differ by about 0.14%, and the electron mass is less than half of that difference. So until you get at least 4-figure accuracy, you cannot distinguish at all between the contributions to gravity from the electrons, protons and neutrons, and simply have to treat all matter particles the same. Until you get to 6-figure accuracy, you’re not going to get any significant evidence as to whether gravity treats electrons, protons and neutrons in the same way or not.
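A quick check of those numbers, using the standard particle masses in MeV (these values are textbook figures, not outputs of my model):

```python
# Standard masses in MeV/c^2
m_p = 938.272   # proton
m_n = 939.565   # neutron
m_e = 0.511     # electron

diff = m_n - m_p
print(f"neutron-proton difference: {diff:.3f} MeV "
      f"= {100 * diff / m_p:.2f}% of the proton mass")   # ~0.14%
print(f"electron mass {m_e} MeV vs half that difference "
      f"{diff / 2:.3f} MeV")                             # 0.511 < 0.646
```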

But one thing we do know is that there is nothing in quantum physics that treats electrons, protons and neutrons the same. So it is certainly not safe to assume that quantum gravity treats them the same. That means that atoms that have relatively few electrons for their mass (like lead, gold, platinum) may not behave in quite the same way as atoms with relatively many electrons for their mass (like iron, copper). Experiments are being done to try to detect differences between copper and gold – essentially testing whether the inertial mass ratio of copper to gold atoms is the same as the gravitational mass ratio.

My models make predictions for the magnitude of the differences that will be detected in such experiments. They are of the same order of magnitude as the reported anomalies. So why are the experimenters not directly testing my predictions, rather than looking for differences without having a theory to test? I suspect the reason is sociological – experimenters have given up on the theorists, who have produced so many crap predictions over so many years that the experimenters have stopped listening. I don’t blame them.

The McKay correspondence

October 29, 2022

I first met John McKay in Cambridge in 1980 (if I remember correctly), and last met him in Edinburgh in 2004. We occasionally corresponded by email over the years, but this correspondence has now come to an end, since he recently died. But this isn’t the McKay correspondence I want to talk about.

McKay was famous for his crazy ideas. He specialised in ideas that were so crazy, that they had to be true, because you just couldn’t make it up. The most famous is the idea that 196884 = 196883 + 1, an idea so outlandish that no-one took it seriously to begin with, but an idea that links two previously unconnected areas of mathematics, and has spawned a wealth of new mathematics. Yet it still goes by the name of “Monstrous Moonshine”, which gives you an idea of what a ridiculous and stupid crackpot idea it was originally considered to be.

Another of his crazy ideas was that if you take the extended E_8 Dynkin diagram, consisting of 8 nodes joined in a straight line plus another joined to the 6th in the row, and label the nodes 1,2,3,4,5,6,4,2 along the line, and 3 for the extra node, which is a reasonably natural thing to do, then these numbers are the degrees of the (irreducible complex) representations of the binary icosahedral group. They are also the orders of the products of two 6-transpositions in the Monster. Crackpot, or deeply significant?

Deeply significant, of course. The three objects here: the icosahedron, E_8 and the Monster, are the largest exceptional objects in their respective domains of mathematics, and the connections between them are a sign of the deep unity of mathematics, at a level we still cannot really comprehend. I’m not going to talk here about the Monster connections, which have led some physicists off into a wild goose chase, but about E_8 and the icosahedron.

This McKay correspondence enables me to translate backwards and forwards between a discrete fundamental theory of quantum physics, based on the icosahedron, and a continuous quantum field theory, based on E_8. As each theory tells us something that the other does not, between them they can solve all the fundamental problems. Let me first refine the notation so that we distinguish representations 1a, 2a, 3a, 4b, 5a, 6a, 4a, 2b along the line, and 3b for the extra node. Alternate nodes are bosonic (1a, 3a, 5a, 4a, 3b) and fermionic (2a, 4b, 6a, 2b). The first 6 nodes are the symmetric powers (0, 1, 2, 3, 4, 5) of 2a. The 6th symmetric power is 3b+4a, and the 7th is 2b+6a.
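The rule that generates the whole diagram is the McKay correspondence itself: tensoring with 2a moves you to the adjacent nodes. This is the standard statement of the correspondence, not something specific to my model:

```latex
% Tensoring any irreducible representation with the defining 2a gives
% the sum of the representations on the adjacent nodes of affine E8:
\[
  2a \otimes \rho_i \;\cong\; \bigoplus_{j \text{ adjacent to } i} \rho_j .
\]
% For example, $2a \otimes 2a = 1a \oplus 3a$ (dimensions 2x2 = 1 + 3),
% and $2a \otimes 6a = 5a \oplus 4a \oplus 3b$ (dimensions 2x6 = 5 + 4 + 3),
% since 6a is the trivalent node.
```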

Now every reasonable model of physics is obtained by deleting some nodes from this diagram. The Standard Model of Particle Physics, for example, appears if you delete 2a, 4b, 5a and 6a. Penrose twistors appear if you reinstate 2a. The Georgi-Glashow SU(5) theory appears if you reinstate 6a, and if you add in 5a as well you get the SO(10) theory. The paper I recently finished (see the previous post) describes what happens if you combine these theories by adding in either 2a, 5a and 6a to get a Georgi-Penrose model, or 2a, 4b and 6a to get a supersymmetric model.

At this point I can see what I did wrong. The correct way to look at the model is to add in 2a, 4b and 5a, and leave out 6a instead. Now we have a Grand Unified Theory with gauge algebra su(2,4) + su(2) + su(3), where su(2)+su(3) describes all the nuclear forces, and su(2,4) describes everything else, that is electromagnetism and gravity. If we restrict from su(2,4) to the Lorentz algebra so(1,3), then the gauge algebra acquires extra terms su(2) + u(1) + so(1,1). Here the second copy of su(2) is Woit’s chiral gravity gauge algebra, and u(1) is the gauge algebra for electromagnetism, while so(1,1) is the mass gauge algebra, that explains how mass works.

So now I should be able to explain how mass works, and why the proton/electron mass ratio is 1836.15, and not some other equally stupid number. First of all, it is now obvious that this mass ratio is not a universal constant, but depends on the choice of Lorentz gauge. It is now obvious that this choice was made in 1973, when the standard model was adopted. It is now obvious that the standard Lorentz gauge is stuck in a time warp back in 1973, and no longer corresponds exactly to the Lorentz gauge applicable to the Large Hadron Collider in 2022. It is now obvious that if we re-calibrated the whole of physics from scratch, with a new choice of Lorentz gauge applicable to the laboratory today, we would be forced to conclude that the proton/electron mass ratio is 1836.45, not 1836.15.

This idea is, of course, completely and utterly crazy. It is such a crackpot idea that it must be true. You just couldn’t make it up.

I would like to dedicate this post to the memory of John McKay, who taught me how to go beyond crackpottery to find the ultimate truth.

New paper on chirality and E8

October 13, 2022

My long-promised paper on chirality and E8 has today appeared on the arXiv, and you can find it at https://arxiv.org/abs/2210.06029. Of course, they downgraded it from hep-th (where I submitted it) to gen-ph, although it is clearly a sequel to https://arxiv.org/abs/2204.05310, which is in hep-ph (where I am not allowed to submit). But they did post it immediately, without keeping it on hold for a day or two, which I suppose is progress. Anyway, the paper shows that by making small but fundamental changes to the model in the earlier paper, it is possible to remove a number of technical objections to the model.

First, it is possible to have compact gauge groups: compactness is a basic assumption of all gauge theories of particle physics, and although one can argue that this assumption is not necessary, it is much easier to get a theory accepted if the gauge group is compact.

Second, the chirality of the model is explicit, and fundamental. It has long been considered that chirality is a major stumbling block for E8 models of fundamental physics, but I show that this is based on an incorrect mathematical definition of chirality. The correct mathematical definition has been known to physicists since 1937, but has been almost completely ignored. Unfortunately, the basic assumption of physicists, that they do not need mathematical rigour, is false.

Other features of the model are perhaps even more important:

Third, the model allows for non-inertial motion of the laboratory and/or the observer, by allowing for an 8-parameter family of copies of the Lorentz group. This opens the door to a uniform explanation of many different anomalies, including the muon g-2 anomaly, the W mass anomaly and various kaon and B-meson anomalies, that are not currently explained, as well as old anomalies such as neutrino and kaon oscillations (CP violation), that are generally considered to be adequately explained, but in my opinion are not. Four of the parameters are dimensionless, and correspond to the four fundamental dimensionless parameters of the non-inertial motion of the laboratory, that I have expounded many times and in many places (the number of days in a month, and in a year, and the angles of inclination of the Earth’s axis and the Moon’s orbit).

Fourth, the Dirac equation appears in two forms (a differential equation and a momentum-space equation) which are not mathematically equivalent (unlike in the Standard Model), and therefore the model permits a distinction between gravitational and inertial mass which I have argued extensively is both necessary and experimentally confirmed, if one cares to look at the evidence in an unbiased fashion.

Fifth, the infinitesimal version of the Dirac equation is the same for both particles and anti-particles, which implies that anti-particles have positive energy and positive mass (as is experimentally confirmed) rather than negative energy/mass (as Dirac’s original equation, still in use today, implies).

Sixth, there is an explicit restriction of this model to my discrete model, based on the binary icosahedral group, which permits a reduction of quantum field theory to a (potentially, but not necessarily, completely deterministic) model in which the ultraviolet catastrophe is avoided, and singularities such as black holes and the Big Bang do not occur. Now that the James Webb Space Telescope is producing strong evidence that the “early universe” does not look the way the Big Bang Theory predicts, the necessity for replacing the Big Bang by something else is becoming urgent.

Energy from friction

October 8, 2022

You are no doubt familiar with XKCD: the latest cartoon at xkcd.com/2682/ is a pretty typical example, and teaches a lot of physics by means of jokes. Look at the bottom right hand corner – why does rubbing a balloon on your hair make your hair stand on end? As it says, the solution should be “easy to look up”. But apparently the problem is “extremely hard, currently unsolved”. That’s my type of problem. Of course, I have no idea how to solve this problem, but if no-one else has any idea either, then we’re on a level playing field.

Obviously we need to work out why the electrons on the outside of the molecules migrate preferentially from one side to the other, not the other side to the one. Complicated molecules, of course. Complicated quantum mechanics. Think outside the box – might quantum gravity have something to do with it? Are the virtual photons sufficient to explain the phenomenon? Maybe not. How about the virtual neutrinos? Ah well, we have no idea what the virtual neutrinos might do, so maybe there is a way into the problem?

Quantum anti-gravity? What do you think? Well, you can’t rule out the possibility, can you? It is clear that in the ultimate unified theory, quantum electrodynamics and quantum gravity couple together, so that there are phenomena which require essential use of this coupling for their solution. Might this be one of those problems? I have no idea, but surely it is worth thinking about? The electrons, once dislodged from their atoms, migrate preferentially from the lighter body (the balloon) to the heavier one (your head) – or is it perhaps the other way around?

What about lightning? A very similar phenomenon, wouldn’t you say? If I remember what I learnt at school, the electrons migrate upwards from the ground to the clouds, don’t they? Anti-gravity perhaps? Or is it the fact that electrons actually have zero gravitational mass, compared to their very small inertial mass? Should we actually call a thunderstorm a gravitational storm, rather than an electrical storm? Is it the fact that the normal balance of gravity and electromagnetism has been disturbed, and gravity puts its foot down, and says, enough of this nonsense, get back to your rightful homes?

I have no idea, but I’ll add it to the already long list of unsolved problems that a proper theory of quantum gravity might have something useful to say about.