

Sergey Fedosin. The Physical Theories and Infinite Hierarchical Nesting of Matter, Volume 1. LAP LAMBERT Academic Publishing, 2014, 580 pages. ISBN-13: 978-3-659-57301-9.




At each stage of its development science faces intriguing and incomprehensible phenomena, unresolved questions, and facts that do not fit into the framework of old theories. As indicated in [143], the following questions still remain a mystery:

1. The nature of gravitation.

2. The nature of the medium, the "empty" space (ether, physical vacuum).

3. The nature of electromagnetic wave propagation.

4. The nature of electricity and magnetism. Magnetic monopoles, predicted theoretically, are still being searched for.

5. The nature of the limiting speed of light in the medium and in the substance.

6. The nature of quantization of the orbits of electrons in atoms.

7. The nature of wave-particle duality.

8. The nature of the structure of "elementary" particles.

9. The nature of nuclear forces.

10. The nature of the electric charge and the mass.

11. The integration of all interactions on some basis.


This list could be continued, proceeding to more specific issues in almost all areas of modern physics. For example, in [144] the committee on the physics of the Universe, in order to understand the assumed relation between quarks and the cosmos, presented a list of tasks for further research:

1. What is the dark matter like?

2. What is the nature of dark energy?

3. What was the origin of the Universe?

4. Is the Einstein theory of gravitation complete?

5. What are the masses of neutrinos and how have they influenced the evolution of the Universe?

6. What is the structure of "cosmic accelerators" like and which particles do they accelerate?

7. Are protons stable?

8. What are the new states of matter at very high densities and pressures?

9. Are there any additional spacetime dimensions?

10. How did the elements from iron to uranium appear?

11. Do we need a new theory to describe the behavior of substance and emission at high energies?


One of the main goals set by the author of this book was to apply syncretics (a new philosophical logic), the philosophy of carriers, the similarity theory and the theory of infinite hierarchical nesting of matter to solving the acute problems of modern physics. Classical and relativistic mechanics, the special and general theories of relativity, the theory of electromagnetic and gravitational fields, and the theory of weak and strong interactions have been analyzed from a new perspective. In many cases the result was models of phenomena that clearly show their structure and their mode of existence or interaction.

A typical example is the model describing the structure of bead lightning in § 1, which is based on the electron-ion model of ball lightning according to [2], [3], [4], [145], [146]. An interesting example of the similarity of atomic and stellar systems is the quantization of the specific orbital and spin angular momenta of the Solar system planets, found in [4] and [9]. In § 2 we show that the specific orbital angular momenta of the moons of Jupiter, Saturn and Uranus are quantized as well. To the set of dependences characterizing the Solar system we add one more: the average surface temperatures of the planets are inversely proportional to the square root of the distance between the Sun and the planet.
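The inverse-square-root temperature dependence just mentioned can be sketched numerically. In this sketch the 288 K reference value and the use of Earth at 1 AU as the normalization point are illustrative assumptions of ours, not values taken from the book.

```python
from math import sqrt

# Illustrative sketch of the relation T ∝ r**(-1/2): the mean surface
# temperature predicted for a planet at distance r, normalized so that
# Earth (1 AU) has a mean surface temperature of about 288 K.
T_REF, R_REF = 288.0, 1.0  # K, AU (assumed reference point)

def predicted_temperature(r_au):
    """Mean surface temperature predicted by T = T_REF * sqrt(R_REF / r)."""
    return T_REF * sqrt(R_REF / r_au)

for name, r in [("Venus", 0.723), ("Earth", 1.0), ("Mars", 1.524), ("Jupiter", 5.203)]:
    print(f"{name:8s} {predicted_temperature(r):6.1f} K")
```

Doubling the distance thus lowers the predicted temperature by a factor of the square root of two, which is the whole content of the dependence.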

The cosmic theme is continued in § 3, which analyzes the evolution of the Earth–Moon system. The results show that the Moon probably appeared at a distance of 29 Earth radii from the Earth at the same time the Earth itself was formed. The calculations are consistent with the energy release in the lunar tides and predict synchronization of the proper rotation of the Earth with the orbital rotation of the Moon in 2.6∙10¹⁰ years.

The analysis of the action function in § 5 shows that it is not only useful for deriving the equations of motion from the principle of least action, but actually influences the properties of bodies. This follows from the fact that the action function contains the gauge field function used for calibrating the potentials, as well as the energy function, which depends on the velocity of the substance. If these functions change due to changes of the potentials and the velocity, this leads to slowing down of processes and of time, and to a phase shift in the considered reference frame relative to the control reference frame. Besides, the action function also contains terms with the field energies, which depend on the field strengths. This means that not only the potentials but also the field strengths are involved in changing the properties of bodies.

Using the theory of infinite hierarchical nesting of matter [36], in § 6 the existence in cosmic space of the so-called "new" particles (nuons) is substantiated. These particles belong to neutral leptons, and their analogues among the stars are white dwarfs. Since the substance density of nuons averaged over space is fairly close to the average substance density of nucleons in the cosmos, nuons play the role of dark matter, affecting the motion of stars and galaxies. Due to their large number and their relatively large sizes in comparison with nucleons, the cross section of nuons is such that they are able to create the redshift of the emission from distant galaxies. Nuons not only reduce the energy of the electromagnetic quanta propagating in space, but also partially scatter them. This leads to weakening of the emission intensity with distance, which has recently been detected by comparing the amplitudes of the outbursts of supernovae of the same type at different redshifts, so that the solution of this problem does not require involving exotic dark energy. Besides, nuons are one of the sources of thermalization of the electromagnetic emission pervading the Universe, which is converted into the isotropic microwave background radiation with the effective temperature of about 2.725 ± 0.001 K. Interpretation of the redshift as the loss of emission energy on nuons allows us to explain the observed quantization of the redshift values of nearby galaxies. Since the losses of the emission energy depend on the distance traveled, the recurring periodicities of the redshift values reflect the almost equal average sizes of typical galaxies, the separations between neighboring and binary galaxies, as well as the average distances between the clusters of galaxies.
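The redshift mechanism described here, in which photons lose energy in proportion to the distance traveled, can be sketched as a simple attenuation law. The attenuation length L below is a free illustrative parameter of this sketch, not a value from the book.

```python
from math import exp

def observed_redshift(r, L):
    """Redshift of light that has lost energy over distance r, assuming a
    constant fractional energy loss per unit length:
    E(r) = E0 * exp(-r / L), hence z = E0 / E(r) - 1 = exp(r / L) - 1."""
    return exp(r / L) - 1.0

# For distances small compared to L the relation is linear, z ≈ r / L,
# which mimics a Hubble-like law without any recession of the galaxies.
L = 4.0e26  # m, hypothetical attenuation length chosen for illustration
for r in (1e24, 1e25, 1e26):
    print(f"r = {r:.0e} m  ->  z = {observed_redshift(r, L):.4f}")
```

On this picture, periodic structure in the distribution of distances r would translate directly into periodic structure in the observed values of z.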

It is well known that the Newton law and the general theory of relativity (GTR) only describe gravitation but do not give a specific explanation of it. In § 7, on the basis of [9] and [147], we develop the theory of gravitation in the concept of gravitons, proceeding from the representations of the Fatio–Le Sage kinetic theory. This leads to the derivation of the Newton law and to the definition of the basic characteristics of the fluxes of gravitons: the density of their energy in space and their penetrability in the substance. The gravitational force is responsible for the shape and the integrity of cosmic objects, while gravitons in the form of relativistic particles, photons and neutrinos are generated by the substance at all levels of matter. At the level of stars the emission of particles and photons is most active near neutron stars, and at the atomic matter level near nucleons. As we move deeper into the matter, the energy density and the concentration of particles in the fluxes of gravitons increase, but their free path in the substance decreases. This results in the complex structure of the gravitons existing in space, as well as in the impossibility of black holes as objects that absorb any substance and release nothing. If the latter statement were true, we should expect black holes at the deepest levels of matter. But then black holes would absorb all the minute substance, and at our level of matter there would be neither gravitons nor gravitation.

One of the most successful approaches in physics is the principle of least action. Its essence is that when the system passes from one state into another, among all the possible paths the one is realized for which the action function has a stationary (least) value. There are two variants of the principle: the Hamilton–Ostrogradsky form, with the additional condition that the possible paths are traversed during the same time; and the Maupertuis–Lagrange form, with the condition of conservation of the initial energy. We want to focus on why this principle holds at all. Let us take, for example, bodies interacting by means of the gravitational field. How do these bodies determine how they should move and along which trajectory? It is obvious that the uniqueness of the motion is achieved due to the action of a great number of particles. An example is the gas temperature, which is stable because it is the average characteristic of the kinetic energy of a large number of molecules. For the emergence of gravitation, as we assume in accordance with § 7, there are numerous fluxes of gravitons, which cause the attraction of bodies. It is this huge multiplicity of particles that creates the effect of the gravitational field and ensures the motion of bodies according to the principle of least action. Apparently, a similar situation exists with regard to the electromagnetic field. Both fields together are the fundamental long-range fields, and they generate the basic forces that we observe in nature. As consequences of the principle of least action we should consider other, less general principles that hold under additional conditions, such as the principle of minimum energy dissipation by N.N. Moiseyev [148], the principle of least entropy production by I.R. Prigogine, and the principle of minimum energy dissipation at the boundaries by L. Onsager [149].

In § 8 we establish the difference between the mass-energies of the gravitational field of a spherical body in two reference frames, in one of which the body is fixed, while in the other it moves at a constant velocity. The first mass-energy is calculated from the gravitational energy of the static field, divided by the square of the speed of light.

If we integrate the vector of the momentum density of the gravitational field of the moving body over the entire volume occupied by the field, the momentum of the field turns out to be proportional to the velocity of the body and to the mass-energy of the field. The inequality of the gravitational mass-energy and the inertial mass-energy of the field means a probable violation of the equivalence principle, as applied to the mass-energy of the gravitational field, which cannot be explained in GTR in the weak field approximation. From the point of view of the Lorentz-invariant theory of gravitation (LITG) [9], the difference between the mass-energies is associated with the existence of an isotropic reference frame, in which the fluxes of gravitons are isotropic [150]. Any motion of the body relative to this reference frame distorts the fluxes of gravitons acting on the body and gives rise to a wave disturbance in the fluxes of gravitons, moving at the same velocity as the body. The mass-energy of this disturbance can then be equated to the difference between the two mass-energies of the field. According to the extended special theory of relativity, we show that the Lorentz transformations are conditional and that other transformations, containing the absolute velocities of reference frames relative to the isotropic reference frame, are possible. One consequence is that an observer moving with the body relative to the isotropic reference frame and using the principle of relativity can calculate only the static field mass-energy and does not see the increased mass-energy of the field of the moving body. The difference between the two mass-energies can be understood as the additional energy of the gravitational field, arising from the work required to bring the body, initially fixed in some reference frame, into the state of motion.
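The two mass-energies discussed above can be illustrated with order-of-magnitude numbers. In the electromagnetic analogy the momentum of the field of a slowly moving sphere corresponds to 4/3 of the static field mass-energy (the classical "4/3 problem"); carrying that factor over to gravitation, as well as the sample body parameters below, are assumptions made purely for illustration.

```python
G = 6.674e-11   # m**3 kg**-1 s**-2, gravitational constant
C = 2.998e8     # m/s, speed of light

def field_mass_energy(m, a):
    """Mass-energy W/c**2 of the static gravitational field outside a
    uniform sphere of mass m and radius a, with |W| = G*m**2 / (2*a)."""
    return G * m**2 / (2 * a * C**2)

# Hypothetical Sun-like body moving slowly at speed v; in the
# electromagnetic analogy the field momentum is (4/3)*(W/c**2)*v,
# not (W/c**2)*v, which is the inequality discussed in the text.
m, a, v = 2.0e30, 7.0e8, 1.0e4
m_f = field_mass_energy(m, a)
p_field = (4.0 / 3.0) * m_f * v
print(f"static field mass-energy  ~ {m_f:.3e} kg")
print(f"field momentum when moving ~ {p_field:.3e} kg m/s")
```

The mismatch between the coefficient 1 in the static mass-energy and the coefficient 4/3 in the field momentum is exactly the kind of difference attributed in the text to the wave disturbance in the graviton fluxes.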

According to GTR, the gravitational field is very special: it is created by all possible sources of mass-energy, from the substance to the electromagnetic field, but it does not necessarily produce either substance or any other field. This follows from the geometrization of the gravitational field, its representation by the curvature of spacetime, and the equivalence principle. This approach is a forced measure, since in GTR there is neither a model of the gravitational field nor a mechanism of its action. Accordingly, the main task becomes the description of the effect of gravitation without going into the essence of the phenomenon.

The most obvious weakness of this approach is revealed by the fact that the gravitational energy density in GTR is not a real tensor but only a pseudotensor. This is natural: the energy density of a physical field is always a tensor and can be transformed into any reference frame, while the geometric analogue of energy cannot be directly transformed from one frame to another, since this requires knowing beforehand the geometry of the new reference frame. The problem with the energy actually means the problem of its localization: in different reference frames in GTR it is concentrated in space differently.

In contrast to this, in the covariant theory of gravitation (CTG), which is the continuation of LITG for the case of Riemannian space and arbitrary reference frames, gravitation is explained in the model of gravitons and is a real physical force. The metric in this case is not identified with the gravitational field, but characterizes the degree of influence of the matter and fields on the deviation of the results of spacetime measurements from their values in inertial reference frames. In § 9, based on the results of [151], we show the similarity of the equations of electromagnetism and gravitation and construct a unified electromagnetic and gravitational picture of the world, which conforms to the principle of relativity.

If in the macroworld the key role is played by ordinary gravitation, then in the microworld strong gravitation is responsible for the integrity of elementary particles. By introducing the strong gravitational constant

Γ = e² / (4π ε₀ m_p m_e) ≈ 1.514∙10²⁹ m³∙kg⁻¹∙s⁻²,

where e and m_e are the charge and the mass of the electron, ε₀ is the vacuum permittivity, and m_p is the mass of the proton, in § 10 we managed to describe the force and the energy of strong gravitation and to express the gravitational torsion field Ω produced by the motion of nucleons. This made it possible to specify in mathematical form the conditions of equilibrium of nucleons, to determine the structure of the simplest nuclei, and to present the nuclear forces as the combination of strong gravitation, the torsion field and electromagnetic forces. It is shown that in massive nuclei the saturation mode of the specific binding energy is realized, when adding new nucleons stops increasing the gravitational potential of the nucleus and the gravitational force, and the subsequent increase of the number of protons leads to a decrease of the specific binding energy of the nucleus due to the additional positive electrical energy.
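The numerical value of the strong gravitational constant follows directly from its definition; the short check below evaluates Γ = e² / (4π ε₀ m_p m_e) from CODATA values of the constants.

```python
from math import pi

# Numerical check of the strong gravitational constant defined in the text,
# built from the electron charge e, vacuum permittivity eps0, proton mass
# m_p and electron mass m_e (CODATA values).
e    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m
m_p  = 1.67262192e-27    # kg
m_e  = 9.1093837e-31     # kg

gamma = e**2 / (4 * pi * eps0 * m_p * m_e)
print(f"Gamma = {gamma:.3e} m^3 kg^-1 s^-2")
```

The result is about 1.514∙10²⁹ m³∙kg⁻¹∙s⁻², roughly 39 orders of magnitude larger than the ordinary gravitational constant, which is what lets strong gravitation bind particle-scale masses.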

In § 11, based on the similarity of the properties of nucleons and neutron stars, the model of the internal electromagnetic structure of the neutron and the proton is constructed, taking into account the configuration of their magnetic field, the composition and charge of the substance of nucleons, and the possibility of transformation of the particles into each other in the reactions of weak interaction. The analysis of reactions involving nucleons, pions, muons, electrons and neutrinos leads to the conclusion that the muon and electron neutrinos (antineutrinos) are composed of the corresponding beams of electron neutrinos (antineutrinos) belonging to a deeper level of matter. The weak interaction is the result of the natural transformation of the substance of elementary particles, and not some special force. The considered approach follows from the theory of infinite hierarchical nesting of matter [36] and allows us to present, on a unified basis, the evolution of matter in time and space. In particular, this means that neutron stars will eventually undergo transformation of their substance, similar to the reactions of weak interaction, for example turning into magnetars as charged and magnetized stars. The structure of nucleons is supplemented in § 13 by the picture of their electromagnetic and gravitational fields at the limiting rotation of the particles. This allows us to estimate the radius of the proton by comparing the spin with the angular momentum of the gravitational and electromagnetic fields of the proton.

Quarks and gluons, introduced by the theory of elementary particles and quantum chromodynamics, are considered the basic building material of mesons and baryons. The properties of quarks in the theory are chosen in such a way that their combinations correspond to the properties of elementary particles. As for leptons, they stand apart from hadrons and cannot be modeled with the help of quarks; what leptons are made of still remains unknown. The analysis of the concept of quarks is made in § 12, where it is shown that quarks can be presented as different sets of two phases of hadronic substance. These phases have different charges and magnetic moments and are the constituent parts of the substance of nucleons, located either in the nucleus or in the shell of the particles. Reducing the properties of quarks to typical phases of hadronic substance means that quarks are not independent particles but rather a special kind of quasiparticles. A similar conclusion follows with respect to the intermediate vector bosons of the electroweak interaction, the W and Z bosons, with their energies of the order of 80–90 GeV. These bosons are detected by the symmetrical tracks of high-energy leptons arising from collisions of counter-propagating beams of protons and antiprotons with energies of hundreds of GeV. However, at these energies the particles of the substance of the colliding nucleons can reach almost the maximum possible speed for this type of substance, which is found from the speed of light and the coefficient of similarity in velocities for degenerate objects with the help of the theory of similarity of matter levels [152]. In this case, instead of the appearance of vector bosons, we can simply state that the boundary is reached at which the substance of elementary particles starts interacting at velocities close to the maximum velocity for this type of substance.

As for the weak interaction, the possibility to model it in the framework of the electroweak quantum formalism, with vector bosons as the carriers of interaction, does not mean that the essential mechanism of the phenomenon has been constructed. Indeed, in § 11 the weak interaction is reduced not to forces but to the transformation of the substance of elementary particles, which occurs at a deeper level of matter. In the interaction of pions and nucleons the gravitational and electromagnetic forces are significant, leading to the appearance of nucleon resonances of different charge states, depending on the order of addition of the orbital angular momentum and the charge of the particles. And the analysis of the Regge hadron families shows that they can be explained taking into account the spin quantization and the state of the particles' substance, retained by the strong gravitational field.

In addition to nucleons, another basic constituent of the substance is electrons, whose properties in atoms determine the variety of chemical substances. However, quantum mechanics and the theory of elementary particles, due to their probabilistic and statistical methods, cannot give exact substantial models of the electron and the elementary particles. In § 14 we find a feature of the electron consisting in the absence of a proper radius of it as an independent particle. This follows from the weakness of the gravitational force, which is unable to keep the electron substance together against the repulsion of its own electrical charge. From the evolution of substance at the level of elementary particles we can understand that the electron appears as the necessary consequence of achieving electrical neutrality of the hydrogen atom (or another atom), when a negatively charged electron cloud is formed around the positively charged proton (the nucleus).

Since this electron cloud cannot have the proper static magnetic and mechanical moments which quantum mechanics assumes for the electron in order to explain the spin and the spin magnetic moment, we introduce a dynamic concept. This means that all phenomena attributed to the spin occur only at the moment of transition of the electron from one energy state to another, since in the stationary states there is no spin. The main reason for the emergence of the spin is the deflection of the center of mass of the electron cloud in the atom from the nucleus, which takes place, for example, after interaction of the electron with a photon. In this case the spin is the part of the total angular momentum of the electron cloud associated with the rotation of the center of mass of the cloud around the atomic nucleus. Different directions of rotation of the substance in the cloud, taking into account the orbital motion of the nucleus relative to the center of mass of the electron, result in the fine splitting of atomic energy levels. While the electron cloud as a whole is shifted relative to the nucleus and is rotating, electromagnetic emission takes place and the atom loses energy. Then the atom reaches the equilibrium state, with the simultaneous disappearance of the spin.

Based on this picture, we can get rid of the famous paradox of quantum mechanics, according to which in the ground state of the atom the electron cannot have the orbital angular momentum but has the spin angular momentum. If the spin is a dynamic phenomenon, it should be just the opposite: in the ground state and in the s-states there is no spin, but there is the orbital angular momentum, which ensures the magnetic moment previously considered to be the spin magnetic moment of the electron. The orbital angular momentum does not allow the electron cloud to fall onto the nucleus, and due to the axisymmetric configuration of the cloud there is no emission from it and the rotational energy is not lost. A number of other effects that are assumed to be associated with the electron spin obtain a new explanation. In particular, the fine splitting of the atomic energy levels and the multiplicity of atomic spectra are derived as the consequence of combining the contributions of the magnetic energies, in the field of the nucleus, from all the excited electrons of the open outer shell of the atom. Taking into account the energy of the magnetic moments of electrons in the proper magnetization field of the studied samples leads to the conclusion that in magnetomechanical phenomena, such as the Barnett and Einstein–de Haas effects with ferromagnetic samples, the Landé g-factor of electrons equals 1, as in the classical case, and not 2, as is expected for the spin.
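The two g-factor values contrasted above fix the magnetomechanical ratio γ = g·e/(2·m_e), the quantity actually measured in Barnett and Einstein–de Haas experiments. The short check below simply evaluates this ratio for both cases, using CODATA values of e and m_e.

```python
# Magnetomechanical ratio gamma = g * e / (2 * m_e): the ratio of the
# magnetic moment to the angular momentum transferred in Barnett and
# Einstein-de Haas experiments, for g = 1 (classical orbital motion)
# and g = 2 (spin), the two cases contrasted in the text.
E_CHARGE = 1.602176634e-19  # C, elementary charge
M_E = 9.1093837e-31         # kg, electron mass

def magnetomechanical_ratio(g):
    return g * E_CHARGE / (2 * M_E)

for g in (1, 2):
    print(f"g = {g}: gamma = {magnetomechanical_ratio(g):.4e} C/kg")
```

Measuring which of the two ratios a ferromagnetic sample exhibits is therefore a direct experimental test of the claim made in the text.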

Consistent consideration of the structure of the electron cloud in the atom allows us to determine the probable nature of the annihilation of electrons and positrons, and to take into account the contribution of the strong gravitational field to the balance of forces and energies of the electron in the process of photon emission. The analysis of the stationary states of the atom shows that there is a balance between the fluxes of the electromagnetic and gravitational field energies and the flux of the kinetic energy in the substance of the electron cloud. This leads to quantization of the energy and the angular momentum of the electron states and to the discreteness of atomic spectra. Representation of the structure of electron clouds in the form of discs allows us to calculate the parameters of the helium atom and to relate the emergence of the Pauli exclusion principle to the electromagnetic induction in neighboring electron clouds and to Lenz's law. Among other conclusions we can mention the attainability of wave-particle duality only at energies comparable to the rest energy of particles, and the explanation of high-energy cosmic rays as the consequence of the existence of the positive electrical charge in magnetars.

We shall say a few words about the uncertainty principle and the principle of complementarity in quantum mechanics. In our opinion, the uncertainty principle is, on the one hand, the consequence of the fact that physical quantities associated with elementary particles must be measured with the help of the same elementary particles, or rather of energetic field quanta. This implies, of course, that when measuring the position of the electron with the help of a photon we cannot determine the exact momentum of the electron, which changes while interacting with the photon. If from two physical quantities A and B we can make a product with the dimension of the quantum of action, then these quantities can be included in an uncertainty relation of the form ΔA ∙ ΔB ≥ ħ/2, where ΔA and ΔB are the mean square deviations of the physical quantities from their average values. Due to the uncertainty principle, the more exactly some quantities are determined, the less exactly the other quantities associated with them are known at that moment. Therefore, each time under these conditions our knowledge about the system is incomplete; some information is lost.
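A minimal numerical illustration of the relation ΔA ∙ ΔB ≥ ħ/2 for the position-momentum pair; the confinement length of 0.5 Å is a hypothetical value chosen here to match the atomic scale.

```python
HBAR = 1.054571817e-34  # J s, Dirac constant

def min_conjugate_spread(delta_x):
    """Smallest momentum spread dp compatible with dx * dp >= hbar / 2."""
    return HBAR / (2 * delta_x)

# Hypothetical example: an electron confined to a region of about 0.5 A
# (the scale of an atom); the minimal momentum spread then fixes a
# minimal spread of velocities.
m_e = 9.109e-31  # kg, electron mass
dx = 0.5e-10     # m, assumed confinement length
dp = min_conjugate_spread(dx)
print(f"dp >= {dp:.2e} kg m/s, velocity spread >= {dp / m_e:.2e} m/s")
```

The resulting velocity spread of about a million meters per second is comparable to actual orbital speeds of atomic electrons, which is why the relation matters at this scale.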

On the other hand, the wave functions of elementary particles, and the probabilities of events with them, strongly depend on the type and the energy of the ongoing, and sometimes unaccounted, interaction. Therefore, in quantum mechanics the initial conditions are considered and the probabilities of the final events are sought, avoiding the description of the intermediate processes. The impossibility of a continuous description of a phenomenon in terms of physical quantities, and the use of the wave function as the event probability amplitude, are also causes of the uncertainty principle, since between the initial and the final states of the system unaccounted deviations of physical quantities from the average or probabilistic values are admitted. In the probabilistic approach of quantum mechanics there are inevitably average values and deviations from them, and repeated measurements can give close but different results. Thus, the uncertainty principle reflects the degree of our unawareness of the course of the intermediate processes, and also takes into account the lack of measurement tools that would not distort the measurement results.

As a result of incomplete knowledge of the intermediate processes and physical quantities, the principle of complementarity appears. The point is that in the initial and final states we can prepare different sets of conditions to observe different physical variables and make the corresponding probabilistic predictions. The complete available knowledge of the system is achieved when we examine all the possible initial conditions for the considered sets of physical variables and conduct the corresponding experiments to test the theoretical predictions. The results will complement each other, giving the general picture. The principle of complementarity is also reflected in the fact that the mathematical description itself, and the formulas for finding the experimental results, depend on the sets of the considered physical variables. All this implies that determinism in quantum phenomena does not disappear, but the available theoretical and experimental tools of quantum mechanics are not able to show us this determinism in the usual form. We observe a special variant of determinism, partially accompanied by indeterminism, which is limited by the uncertainty principle and the principle of complementarity for each set of selected physical variables.

We shall note one more feature of the uncertainty principle. It is related to the fact that this principle works only within a certain level of matter, since each level of matter has its own characteristic angular momentum. For example, at the level of elementary particles the characteristic angular momentum is of the order of the Dirac constant ħ = 1.055∙10⁻³⁴ J∙s. At the stellar level of matter, objects such as neutron stars have the characteristic angular momentum ħ_s ≈ 5.5∙10⁴¹ J∙s according to (345), and a somewhat smaller value ħ_p holds for planetary stellar systems according to (10). For objects such as neutron stars the uncertainty relation is:


Δx ∙ Δp ≥ ħ_s / 2 ,                                               (557)

where ħ_s is the characteristic angular momentum of the stellar level of matter.


It means that, measuring the location of the star with the help of other stellar objects with the spread of their momenta up to the quantity Δp, we have the uncertainty Δx of the coordinate, showing the degree of inaccuracy of our knowledge about the location of the star. The uncertainty relation (557) must be satisfied in all interactions between stars, but we can show that it stops working for the internal components which could make up the star. Thus, according to the quark hypothesis, nucleons consist of three quarks (in fact, in § 12 we showed that all quarks, and hence all hadrons, can be presented as combinations of the substance from the nucleon nucleus, in the form of one phase of the substance, and of the substance from the shell of the nucleon, in the form of another phase of the substance).

We shall suppose further that, because of the similarity relations between the nucleon and the neutron star, the star consists of three objects similar to quarks. Substituting in (557) instead of Δx the diameter of the neutron star, of the order of 24 km, we find Δp of the order of 10³⁷ kg∙m/s. Dividing Δp by the mass of such an object, of the order of 10³⁰ kg, we obtain for the mean square deviation of the speed of this object inside the star a value of the order of 10⁷ m/s. This value is an order of magnitude less than the speed of light and seems acceptable. But in fact the neutron star is composed not of three objects but of a number of neutrons, reaching about 10⁵⁷. If we apply (557) to these neutrons, then, in view of their low mass, the spread of the speeds of the neutrons will be much greater than the speed of light. Therefore the relation (557), written for the objects of the stellar level of matter, cannot be directly applied to such objects as elementary particles, since beforehand in (557) we must substitute ħ_s by ħ.
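The order-of-magnitude estimates in this paragraph can be reproduced directly. The value ħ_s ≈ 5.5∙10⁴¹ J∙s for the stellar characteristic angular momentum and the star parameters used below are assumptions of this sketch.

```python
C = 2.998e8       # m/s, speed of light
HBAR_S = 5.5e41   # J s, stellar-level angular momentum (assumed value)
M_NS = 2.7e30     # kg, a typical neutron-star mass (~1.35 solar masses)
M_N  = 1.675e-27  # kg, neutron mass

dx = 24e3                    # m, diameter of the neutron star
dp = HBAR_S / (2 * dx)       # momentum spread from relation (557)

dv_third   = dp / (M_NS / 3)  # one of three hypothetical quark-like parts
dv_neutron = dp / M_N         # a single neutron, same relation applied

print(f"dp ~ {dp:.2e} kg m/s")
print(f"dv (one of three parts) ~ {dv_third:.2e} m/s  (< c: {dv_third < C})")
print(f"dv (single neutron)     ~ {dv_neutron:.2e} m/s (< c: {dv_neutron < C})")
```

The check shows the point of the argument: for three star-scale components the velocity spread stays below the speed of light, while for individual neutrons the same stellar-level relation gives an impossible, far superluminal spread.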

In this case we can assume that the approach of quantum chromodynamics, in which quarks and gluons are placed inside the hadrons and the quarks are attributed the spin equal to ħ/2, is incorrect. The situation becomes even more complicated when elementary particles are assumed to consist of a set of partons or preons with the same spin ħ/2. In our opinion this approach is incorrect, because the less the mass and the sizes of the object are, the less its characteristic angular momentum is. For each level of matter there is its proper characteristic angular momentum and its uncertainty relation, and the use of the same Dirac constant ħ for all objects, regardless of their belonging to the level of elementary particles, leads to an error.

The author hopes that this book will be a good introduction to the range of problems and to the solution of the fundamental questions arising in natural science nowadays, due to the theory of infinite hierarchical nesting of matter. We shall remind that some predictions of this theory have already been confirmed: the minimum mass of main sequence stars, equal to 0.056 solar masses, by the discovery of brown dwarfs, and the typical parameters of dwarf galaxies, with the mass of 4.4∙10⁶ solar masses and the radius up to 371 pc [153], [154]. From the possibility of locating all the known objects as points on the infinite scale ladder of matter, a new degree of freedom (the scale physical dimension) also follows in this theory, in addition to the three spatial dimensions and time [155].


Sergey G. Fedosin


Source: http://sergf.ru/con5en.htm
