
 

Problems of fundamental physics and possible ways of their solution

S.G. Fedosin

E-mail: intelli@list.ru

 

In the development of any subject one can always trace phases of reorganization, marked by a change of structure (the so-called revolutions), and more or less quiet evolutionary phases that pass without dramatic shifts. The development of the separate natural sciences, and of natural science as a whole, is no exception. In the latter case there is a periodic change not only of fundamental theories but also of the paradigms that predetermine the very possibility of reorganization in the separate sciences. At what stage is modern natural science now? Can its development be described as evolutionary, or are we on the verge of another scientific revolution? Let us try to answer these questions by analyzing the most fundamental theoretical constructions of physics; by pointing out the contradictions inherent in them we will be able to draw the appropriate conclusions. The majority of the new and alternative models presented here still require completion, but without them it is already impossible to imagine the further development of science.

The Special Theory of Relativity (STR) aims to describe events in moving coordinate systems on the basis of their course in motionless coordinate systems, in which the same experiments can be repeated many times under laboratory conditions. As soon as the observer starts moving relative to the objects of research, the observed picture of the phenomena deviates from the static case; for example, previously simultaneous events cease to be simultaneous. All effects of this kind are described very accurately by the STR formulae, derived on the basis of the following postulates:

1. Low-mass bodies are under consideration, so that the force of their mutual gravitational attraction can be neglected. External influences and fields should be either small or compensated.

2. The principle of relativity: if the observer and his experimental system are in a state of free uniform rectilinear motion relative to a previous state, which can conventionally be called the base state, then processes will proceed for the observer in the same manner as they did earlier in the base system.

3. Symmetries under shifts in time, shifts in space, and rotations are assumed to hold; i.e., time is uniform, and space is uniform and isotropic.

4. All basic spatiotemporal quantities are measured with the help of electromagnetic waves; the toolkit may include electronic clocks, light rulers, etc., which serve as standards for ordinary mechanical rulers and clocks of any type. One clock is synchronized with another by circulating electromagnetic signals and allowing for their delay over a given distance. In other words, a signal from the first clock must reach the second one and return; the observer at the first clock can then instruct the observer at the second clock to set his clock with a time shift equal to half the round-trip time of the signal. Direct measurement of length is possible only in a motionless coordinate system; when an object is moving, its length is determined indirectly through light signals sent simultaneously from the ends of the object to a motionless ruler.

5. The speed of light (an electromagnetic wave) in vacuum is taken to be the same in all inertial coordinate systems.
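The round-trip synchronization procedure of postulate 4 can be sketched numerically; the 3 km distance below is an illustrative choice, not a value from the text.

```python
# Round-trip clock synchronization (postulate 4): the remote clock is set
# ahead by half the measured round-trip time of the light signal.
C = 299_792_458.0  # speed of light in vacuum, m/s

def sync_offset(distance_m: float) -> float:
    """Time shift for the remote clock: half of the round-trip signal time."""
    round_trip_time = 2.0 * distance_m / C
    return round_trip_time / 2.0

# For a clock 3 km away the shift equals the one-way light travel time:
offset = sync_offset(3_000.0)
print(f"offset = {offset:.4e} s")  # about 1.0e-05 s
```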

Since STR relies on the principle of relativity, it is valid for inertial systems, defined as systems moving rectilinearly with constant velocity in the absence of external influences, and it serves as a first approximation to the results obtained for non-inertial systems.

A consequence of the above axioms is the independence of the speed of light from the direction and speed of motion of the light sources. Other well-known effects of STR are the relativity of simultaneity, time dilation, and the longitudinal contraction of moving bodies. This description would seem perfect if not for one thing: every STR formula contains the speed of light, yet we do not know how light propagates; nor do we know how an electromagnetic wave propagates in general! And must we really resort to the photon concept, i.e. pass from classical to quantum electrodynamics, only to describe how the high-frequency electromagnetic field acquires a new property, quantization? Merely to say that the electromagnetic field is a special kind of matter which enables the interaction of charges is to sweep the problem under the rug. The failure to explain the internal structure of the electromagnetic wave leaves only the formal, mathematical aspect of STR, preventing us from finding its limits and going beyond its framework. Many textbooks already state that the speed of light is the limiting speed of interaction transfer and that no carrying medium distinct from the electromagnetic wave itself is necessary to transfer electromagnetic interaction. But the latter implies the absolute autonomy and indestructibility of electromagnetic radiation, in the sense that one such wave cannot be neutralized by a counter-propagating opposite wave. And if that did happen, where would the energy of both waves disappear to, if the matter of waves differs fundamentally from the matter of substance? And how, then, is interaction between electromagnetic radiation and the charges inducing it possible at all?
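The time dilation and length contraction just mentioned both follow from the Lorentz factor; a minimal sketch, with an illustrative speed of 0.6c:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v: float) -> float:
    """Lorentz factor 1/sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time_s: float, v: float) -> float:
    """Interval of a moving clock as seen from the rest frame: gamma * tau."""
    return gamma(v) * proper_time_s

def contracted_length(rest_length_m: float, v: float) -> float:
    """Longitudinal size of a moving body: L0 / gamma."""
    return rest_length_m / gamma(v)

v = 0.6 * C
print(gamma(v))                    # ≈ 1.25
print(dilated_time(1.0, v))        # 1 s of proper time is seen as ≈ 1.25 s
print(contracted_length(1.0, v))   # a 1 m rod measures ≈ 0.8 m
```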

Thus it is much more natural to have a certain material medium (the ether) as the carrier of electromagnetic interaction. This allows us to survey at once the whole spectrum of possible structures of electromagnetic oscillations: from ordinary periodic waves moving through the medium and capturing substance at each point where short-term spatial motion of this substance occurs, up to solitary soliton-like structures. Besides, it also becomes possible to model the photon as an independently moving quantum containing a captured and organized medium or substance inside [1].

If we speak of the ether as a medium without which electromagnetic oscillations are inconceivable, can this idea be combined with the conclusions of STR? It turns out that it can. In [2] it is shown that it is sufficient to accept the existence of an initial isotropic coordinate system, in which the speed of electromagnetic waves is the same in all directions, in order to deduce all the formulae of STR from the principle of relativity. The constancy of the wave speed in all inertial systems and the independence of the speed of light from the speed of the light sources then follow as consequences. The initial postulate of classical STR turns out to be deduced from other assumptions! At the same time we get rid of the restrictions imposed by the formal scheme of classical STR. First, it is now possible to examine seriously various models of the ether as the carrier of the electromagnetic field, choosing those models that do not contradict the essence of the new concept of STR. It is easy to imagine such an isotropic coordinate system in the ether, in which the speed of an electromagnetic wave is the same in all directions, or, the other way round, to acknowledge that the isotropy is provided by the ether as a medium which is on the whole motionless. Second, we can now understand the existence of a quite precisely fixed value of the speed of light in vacuum as a necessary consequence of the properties of the particles of the ether. Third, the observed independence of the wave speed from the speed of the sources of electromagnetic radiation can be explained by the action of two factors: the influence of the procedure of spatiotemporal measurement that we use in different inertial coordinate systems, and the action of the ether as the carrying medium in the isotropic coordinate system. Fourth, the existence of an isotropic coordinate system sets it apart from all other inertial coordinate systems.
This means that we avoid ascribing a completely absolute character to relativity, ridding ourselves of metaphysics from the philosophical point of view. And fifth, the new concept of STR coincides completely with the classical variant in two limiting cases: in vacuum experiments, when the ether is, as it were, not dragged by moving bodies at all, on the one hand, and in the case of conditional full dragging of the ether, on the other. Since in classical STR the ether is rejected as superfluous, it is not expected to be discovered. In STR with ether, however, it may in principle be detected in the intermediate case of incomplete dragging, since the external ether wind should somehow influence the propagation of light inside material bodies when these bodies move relative to the isotropic coordinate system. This conclusion could help us assess the properties of the ether through experiments such as the well-known Fizeau experiment on the passage of light through water. It should at least hold when the material bodies, or the water in the Fizeau experiment, move with acceleration.
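For reference, the standard relativistic reading of the Fizeau experiment follows from velocity addition; to first order in v/c it reproduces Fresnel's partial-drag coefficient (1 - 1/n^2). The water speed of 5 m/s below is an illustrative value.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def light_speed_in_moving_medium(n: float, v: float) -> float:
    """Exact relativistic velocity addition for light in a medium moving at v."""
    u = C / n  # phase speed in the medium at rest
    return (u + v) / (1.0 + u * v / C ** 2)

def fresnel_drag(n: float, v: float) -> float:
    """Fresnel's first-order formula: c/n + v * (1 - 1/n^2)."""
    return C / n + v * (1.0 - 1.0 / n ** 2)

n_water, v_flow = 1.333, 5.0  # refractive index of water; flow speed, m/s
exact = light_speed_in_moving_medium(n_water, v_flow)
approx = fresnel_drag(n_water, v_flow)
print(exact - approx)  # difference is second order in v/c: negligible here
```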

Let us turn now to the General Theory of Relativity (GTR). Its scheme was aimed, on the one hand, at developing Newton's law of universal gravitation and, on the other, at applying the STR methodology to non-inertial coordinate systems. When the sources of gravitational or electromagnetic fields are sufficiently large, or when the mass-energy of moving particles must be taken into account in GTR, it is no longer possible to neglect their influence on the observed course of processes, which is expressed in the deviation of the metric tensor from the simple form accepted in STR. Accordingly, the spatiotemporal relations between events, and the motion of bodies affected by the field of massive sources, vary. Everything looks as if substance influenced the properties of space-time, and these properties in turn influenced the motion of bodies and physical processes.

The logic of GTR is as follows. The principle of proportionality of inertial mass, responsible for the resistance of bodies to acting forces, and gravitational mass, governing the attraction of bodies (Galileo's principle), results in identical acceleration of different bodies near a massive source under identical initial conditions. From this follows the principle of equivalence: a gravitational field can be replaced, at least locally, by an accelerated coordinate system; gravitational forces are thereby replaced by forces of inertia, and the overall picture of the phenomena under such replacement remains the same. The presence of acceleration means a transition to non-inertial systems, which changes the components of the metric tensor needed to describe an interval. The interval, both in STR and in GTR, is the distance between infinitesimally close events in four-dimensional space-time. From the geometrical point of view, the change of the metric tensor components in GTR is equivalent to space-time being distorted at each point, becoming non-Euclidean. Since the presence of field sources, and of energy sources of various kinds in general, changes the geometry, the simplest form of the GTR equations is a linear relation between the space-time curvature tensor and the energy-momentum density tensor of matter. In geometrical terms, free bodies in STR move by inertia along straight lines (gravitation of bodies being absent), while in GTR the analogous line, called a geodesic, is bent by gravitation and differs from a straight line. In both cases, under identical conditions, bodies travel along the same geodesics, appropriate either to STR or to GTR. In essence, in GTR the acceleration of bodies does not depend on mass (the principle of equivalence) but on the choice of geodesic, i.e. on geometry.

The well-known effects of GTR are the dilation of time near massive bodies and the reduction of bodies' sizes in the direction of the gradient (the greatest change) of the gravitational field. All results of GTR proceed from the assumption that the search for the gravitational field is replaced by finding the metric tensor components, which are then used to calculate the motion of bodies along geodesics. The metric tensor thus appears to be the basic characteristic of the gravitational field. As a result geometry absorbs physics: of the real gravitational field only the metric remains, and the force of gravitation is reduced to a force of inertia and explained kinematically. The most obvious weakness of this approach is that the energy of gravitation in GTR is only a pseudotensor, not a real tensor. This is quite natural: the energy of a physical field is always a tensor and can be transformed to any coordinate system, whereas the geometrical analogue of energy cannot be transformed from one coordinate system to another directly, since that requires prior knowledge of the geometry of the new coordinate system. The problem with energy is in fact the problem of its localizability: in different coordinate systems of GTR the energy is distributed in space in different ways.
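The slowdown of time near a massive body can be illustrated with the standard Schwarzschild factor sqrt(1 - 2GM/(r c^2)); the solar mass and radius below are rough reference values used only for illustration.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0  # speed of light, m/s

def time_rate_factor(mass_kg: float, r_m: float) -> float:
    """dtau/dt for a clock at rest at radius r outside a spherical mass."""
    return math.sqrt(1.0 - 2.0 * G * mass_kg / (r_m * C ** 2))

M_sun, R_sun = 1.989e30, 6.957e8  # kg, m
f = time_rate_factor(M_sun, R_sun)
print(1.0 - f)  # ≈ 2e-6: a clock on the Sun's surface lags a distant one
```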

Are there ways to restore the status of the gravitational field as a real physical field rather than a geometrical one? One attempt is made in [3–4] on the basis of changing the tensor equations of the gravitational field used for space-time in STR. Another approach is offered in [1–2], proceeding from the following reasoning. The gravitational field is considered similar to the electromagnetic one, so equations like Maxwell's equations are formed for it, with corresponding scalar and vector potentials. If we apply the standard Einstein equations to find the metric inside a homogeneous massive sphere containing arbitrarily moving incompressible liquid, we find that in the weak-field approximation all non-diagonal components of the metric are indeed proportional to the vector gravitational potential, and the diagonal components are functions of the scalar potential! Thus the approach in which the gravitational field is described directly in terms of STR, and not only in GTR as previously, proves correct. Moreover, in STR the gravitational field receives not only an energy formula but also a momentum formula, and becomes a genuinely Lorentz-invariant field. In particular, just as the rotation of a charge generates a magnetic field, the rotation of a mass creates torsion as an independent component of the gravitational field. Torsion proves necessary because otherwise it would be impossible to describe completely the force of gravitational interaction between two masses. Indeed, at rest the Newtonian force of attraction operates between masses, but if we try to write down exactly the transformation of this force into a moving coordinate system, this is hardly possible without taking the vector potential into account. It is Lorentz invariance that makes it possible to transform the forces and potentials of a field from one coordinate system to another with the help of the standard Lorentz transformations.
Fruitful results of treating gravitation within STR are shown in article [5], which calculates the angular momentum and radius of the proton and proves an analogue of the virial theorem for the angular momentum of gravitational and electromagnetic fields. If, by virtue of the virial theorem, the gravitational energy of a large space body is twice as large in absolute value as the kinetic energy of motion of the particles of its substance, it turns out that the angular momentum of the gravitational field outside the body is also twice as large as the angular momentum of the gravitational field inside it.

What, then, should be done with the Einstein tensor equations of GTR if we consider the gravitational field real in STR as well? How will the content of these equations change? Here we should follow the same route that is widely used to include the electromagnetic field in GTR: all tensor quantities are written in the required covariant form, and only then are they substituted into the tensor equations for the calculation of the metric. Having done this for the gravitational field as well, we can find the metric as varied under the joint action of the electromagnetic and gravitational fields. It is no longer possible to set the energy-momentum equal to zero outside a lone massive body, as is done in traditional GTR, because around a body there is always its own gravitational field and the fields of other sources. Gravitation can then be treated as the joint effect of the action of real physical fields, electromagnetic and gravitational. The validity of the offered approach is confirmed in a different way in [2]: an inertial observer at infinity, using the principle of equivalence, finds in STR precisely the same time dilation that the real gravitational field yields within GTR after its covariant form has been included in the equations. As a result it is not the metric but the real gravitational field that acquires the property of gravitational radiation, the source of which is moving masses. While in classical GTR only quadrupole gravitational radiation, considered a consequence of fluctuations of the metric, is possible, in the new version radiation may have a dipole character, similar to dipole electromagnetic radiation. Owing to the separation of the metric and the gravitational field, their changes may be non-synchronous, since the metric may be influenced by other sources of energy-momentum.
It should also be noted that the propagation speeds of these changes do not necessarily have the same values, though in order of magnitude they are close to the speed of light. Recent experiments measuring the delay of the deflection of light from a quasar in the moving gravitational field of Jupiter indicate this [6].

If classical STR declines to solve the problem of the internal structure of the electromagnetic field, then neither the long-range theory of Newtonian gravitation nor classical GTR, with its concept of short-range gravitational interaction, is capable of throwing light on the nature of the gravitational field. The formalism of these theories is adapted only to describing consequences, such as the arising forces and the expected trajectories of motion, but cannot explain the real causes. We have to restore the true picture of the underlying phenomena from the light ripples on the surface of events. One of the clues to the mystery of the gravitational field is the concept of gravitons presented in [1]. Any two bodies immersed in a cloud of innumerable small particles, gravitons, penetrating them in all directions, will be drawn toward each other owing to the effect of mutual shielding. Calculations show that in this case the gravitational force looks like the law of Newtonian gravitation: gravitons push bodies toward each other in proportion to their masses and in inverse proportion to the square of the distance.

Let us now assume that the densest objects gravitation can create are neutron stars, so that the maximal pressure in them corresponds to the same energy density that gravitation has. Then it becomes possible to estimate the absorption factor of gravitons in substance and to connect it with the gravitational constant, finding the mean free path of gravitons as a function of the density of substance, and the flux of their energy through a unit area per unit time. The cross-section of graviton interaction with substance turns out to be so small that only neutrino-type particles with energy of about 1 keV can serve as gravitons. Supposing that this is really the case, we find for the concentration of gravitons the value 10^49 m^-3. At last we have an opportunity to understand the law of inertia, i.e. the absence of braking for bodies moving with uniform velocity through the graviton flow. The point is that, on the one hand, by virtue of the Doppler effect the motion of a body quadratically changes the total momentum received per unit time from the gravitons coming from the opposite direction, both through the increased frequency of impacts with gravitons and through the increase of their energy. On the other hand, the greater the energy of the gravitons, the smaller the cross-section of their interaction with substance and the smaller the braking force. As a result, at any uniform velocity of motion no braking force arises and the law of inertia holds. Each transition from one uniform velocity to another, from one steady state to another, demands an expenditure of energy, so during such a transition we feel a resistance proportional to the mass of the body. Since gravitons are responsible both for the attraction of bodies and for the effect of bodies' inertia, this accounts for the proportionality of gravitational and inertial masses observed in experiment.
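The inverse-square behavior of mutual shielding can be illustrated with a toy calculation; the flux and absorption constants below are arbitrary placeholders, not the estimates from [1], and the small-angle shadow geometry is a deliberate simplification.

```python
# Toy LeSage-style shielding: each body absorbs a tiny fraction of an
# isotropic graviton flux; body 2 shadows part of the flux reaching body 1,
# so a net push of body 1 toward body 2 remains.
PHI = 1.0      # graviton momentum flux per unit solid angle (arbitrary units)
KAPPA = 1e-3   # absorption cross-section per unit mass (arbitrary units)

def shielding_force(m1: float, m2: float, r: float) -> float:
    """Net push on m1 toward m2 from the solid angle that m2 shadows."""
    sigma2 = KAPPA * m2      # effective cross-section of body 2
    omega = sigma2 / r ** 2  # shadowed solid angle (small-angle limit)
    return PHI * KAPPA * m1 * omega

# F * r^2 stays constant and F scales with each mass: a Newton-like law.
f_near = shielding_force(2.0, 3.0, 10.0)
f_far = shielding_force(2.0, 3.0, 20.0)
print(f_near * 10.0 ** 2, f_far * 20.0 ** 2)  # both ≈ 6e-06
```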
In the graviton concept the kinetic energy of a moving body may be calculated as the work necessary to change the speed of motion of this body relative to an equilibrium state of the graviton flows. Besides kinetic energy, each body and the particles composing it are characterized by the so-called rest energy, whose value equals the energy liberated upon hypothetical complete disintegration of the particles of this body in the given coordinate system. The total energy of a body consists of the rest energy and the kinetic energy, and in practice it is calculated with the help of the body's mass and momentum.

We may go further and connect the gravitational and electromagnetic fields more closely. First, the equations of the gravitational field constructed in [1] are similar to Maxwell's equations for the electromagnetic field. Second, calculations show that for a wide range of objects the ratio of their binding energy (gravitational energy) to their own electromagnetic energy is approximately equal to one and the same value, namely the ratio of the mass of the proton to the mass of the electron. This applies to degenerate objects such as nucleons and neutron stars; to the energy of nuclear gravitation in relation to the energy of zero-point fluctuations of the electromagnetic field in a black cavity with a nucleon envelope; to the rest energy of the substance of the Metagalaxy in relation to the energy of the background radiation; and to the ratio of the power of the proton's dipole gravitational nuclear radiation to that of its corresponding electromagnetic radiation as a charge. All this leads us to the notion that electromagnetic radiation amounts to peculiar fluctuations of the graviton flows carrying it. In this case the role of the ether is played by the medium consisting of fast-moving, all-penetrating gravitons. Besides electromagnetic radiation there are also stationary electromagnetic fields, which need to be explained as well. It is clear that the stationary gravitational field around a massive body depends on the invariance of the interaction with gravitons of the particles composing this body. Similarly, a stationary electromagnetic field arises under the condition of invariance of the motion of the charged particles creating the field, and due to the special interaction of the charged particles with gravitons. In particular, we easily observe the influence of one charge on another, changing in kind according to the types of the charges.
A possible explanation is that the charge of a body not only essentially increases or reduces the overall graviton absorption factor, but also changes the configuration of the graviton distribution in the surrounding space, which at a large density of graviton energy results in an additional electromagnetic force that is significant in magnitude. The direction of this force may depend both on the direction of graviton polarization in the field of the interacting charges and on the spatial distribution of gravitons near charges of different types, on their concentration or, on the contrary, on a divergence arising from the properties of the charges.

Under such an approach, the concept of a fundamental force implies its symmetry with respect to the interacting bodies as a consequence of the mode of interaction. If a lone body is in an isotropic coordinate system, where the graviton streams are balanced in all directions, such a body will necessarily be at rest or travel without acceleration by inertia. Owing to the high penetrating ability of gravitons, the absence of acceleration can be established inside the coordinate system connected with the body, with no reference to other coordinate systems. If the body is accelerated relative to the isotropic coordinate system, the coordinate system of the body is non-inertial and forces of inertia necessarily appear in it. During accelerated rectilinear motion the body, under the action of the compelling force and the force of inertia, changes its form, gets flattened, and may even remain in this state after the removal of the constraining force. When the acceleration is rotary, under the action of the moment of forces and the opposing inertial moment the body also changes its form (a sphere becomes an ellipsoid). In both cases, after the removal of the force or the moment of forces, the body moves by inertia, either rectilinearly or rotating with constant angular velocity. However, even rotation with constant angular velocity still implies centripetal acceleration, so the system remains non-inertial. The non-inertiality of coordinate systems with gravitational fields follows from the fact that in them there is always a gravitational acceleration playing the role of the acceleration for the corresponding force of inertia (for example, a body at rest relative to the Earth exhibits its weight). In exactly the same way, a coordinate system connected with charges is in fact non-inertial, since between charges, even at rest, there are acceleration and force.
Nevertheless, electrodynamics within the framework of STR describes all phenomena with charges perfectly. This fact convinces us that gravitational forces, too, can be described by the appropriate equations directly in STR. In that case the basic role of GTR is limited to taking into account the dependence of the processes and of the propagation speed of an electromagnetic wave on the presence of sources of energy-momentum of any kind, refining the results of spatiotemporal measurements and thereby describing the phenomena more correctly.

Treating the gravitational field as a real physical field proved very fruitful in thermodynamics, which traditionally uses the energy approach. Here it became possible to deduce from first principles the expression for the increment of heat and entropy, and also an analytical expression for the return of the system to force equilibrium in the Le Chatelier-Braun principle [1]. Recall that according to the Le Chatelier-Braun principle of displacement of equilibrium, a system under external influence resists the transition to a new equilibrium state, as though returning to its previous state. Entropy as a function of state turns out to be not merely the measure of irreversible dissipation of energy or of the probability of realization of a certain macroscopic state; it may be expressed through the energy gradients of the electromagnetic and gravitational fields and the energy of substance, thus characterizing the structure of the system from the point of view of the volumetric distribution of energy and serving as a measure of the linkage and interaction of the system's particles. If we assume for simplicity that heat enters the system as electromagnetic quanta, whose effective temperature by Wien's law is proportional to the frequency of radiation, then the entropy decrement will be proportional to the number of absorbed quanta bringing a certain order into the system, namely the directed motion of excited particles. In systems isolated from external streams of substance it is still possible to consider the increment of internal entropy arising from the transition of the system from a non-equilibrium state to the state of rest, in which all forces are balanced.

In the monograph [11] the following law was formulated: "The change of the organization of a system is proportional to the change of the internal and external streams of energy, motion, and ordering, which together constitute the stream of the system's existence." From this law it is evident that besides the laws of conservation of energy, momentum, and angular momentum, the law of conservation of entropy must also be taken into account. Indeed, the increase of entropy in one system means an equal decrease of entropy in another system, which is equivalent to the transfer of orderliness across space-time. A characteristic example is the process by which planets receive sunlight and emit thermal radiation from their surfaces. In this process the incoming and outgoing energies are in balance, but since the temperatures of the radiations differ, there is a difference in entropy. This means that planets receive negative entropy, or negentropy, which makes possible the increase of internal entropy during relaxation processes and the various motions on the planets in which internal work is performed on bodies. Besides the electromagnetic radiation, the full balance of the entropy stream must include the influence of the gravitational field and the structural entropy of the planet's substance. In the stationary state chemical transformations occur, the circulation of substance in nature takes place, life is supported, and the internal work performed on the planet by the stream of solar energy is constantly compensated by the work of gravitational forces. The law of conservation of entropy, as a measure of orderliness for a complete system, can be written as follows:

 

                    S = Sm + Sf = const,   or   dS = 0,

 

where Sm is the entropy of substance, including the contribution of motion,

      Sf is the entropy of fields, including static and stationary components, and also the entropy components of fields traveling in space (for example, the entropy of a stream of radiation).

The discovery of the law of conservation of entropy became possible precisely because the gravitational field was perceived as a real physical field, which together with the electromagnetic field contributes to the ordering of systems.
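The negentropy argument above can be put into numbers; the power and temperatures below are rough reference values for the Earth, used only for illustration, and the entropy of each radiation stream is approximated simply as Q/T.

```python
# Steady-state radiation balance of a planet: energy in equals energy out,
# but the incoming solar radiation is far 'hotter' than the outgoing thermal
# radiation, so the entropy carried away exceeds the entropy brought in.
Q = 1.0e17     # absorbed and re-emitted power, W (order of magnitude)
T_IN = 5778.0  # effective temperature of solar radiation, K
T_OUT = 255.0  # effective emission temperature of the Earth, K

def entropy_flux(power_w: float, temperature_k: float) -> float:
    """Entropy carried by thermal radiation per second, approximated as Q/T."""
    return power_w / temperature_k

s_in = entropy_flux(Q, T_IN)
s_out = entropy_flux(Q, T_OUT)
print(s_out - s_in)  # positive: the planet exports entropy (gains negentropy)
```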

Let us move on to cosmology and to the picture of the origin of the universe presented to us by the prevailing Big Bang theory and its versions, the inflationary and chaotic models of the universe. The fundamental idea here is the existence at the very "beginning" of the superdense and hot mass-energy, not so much substance, of the whole universe in a very small volume, as a field clot of particles such as photons, quarks, gluons, and neutrinos. Then, under the action of some instability, the singular state collapses and an explosive expansion, slowing down in time, begins; the elementary particles cool and subsequently bind into nucleons and then into atoms of substance. After that comes the time of formation, by self-gravitation, of gas clouds, the first stars, their clusters, and galaxies... Let us not linger over the philosophical analysis of the problem (the point being that substituting one mystery, a singularity derived from nowhere, for another, the origin of the universe, is a fruitless tautology, and even a periodic recurrence of the process from singularity to maximal expansion and subsequent contraction back into singularity is nothing but metaphysics). Our basic purpose will be criticism of the physical preconditions of the Big Bang theory and the presentation of an alternative theory.

Let's start with the theoretical basis of modern cosmology, which traditionally includes the transfer of the non-Euclidean geometry of GTR space-time to the whole universe. It is supposed that the laws established on the Earth and in its vicinity can be applied without restriction to much more extensive areas. Obviously such a statement is relative and should be checked in each particular case. If we accept the validity of GTR at smaller distances, the non-stationarity of the universe, which should be either contracting or expanding, follows from its equations. The theory predicts a connection between space-time curvature and the density of mass-energy and describes the time evolution of the universe in several allowable models. At the same time hardly anyone pays attention to the fact that in all calculations the invariance of the gravitational constant is implicitly assumed. This is no wonder, since classical GTR places the greatest emphasis not on physics but on geometry, not on interaction but on kinematics. But if we speak about the evolution of the universe from the point of its formation, we are obliged to take into account also the evolution of its gravitational force, since it is not the same now as it was in the very beginning. As soon as we pass from the idealized "geometrical" gravitation to a real mechanism of the graviton type, new questions immediately arise: How does the gravitational constant evolve in time? If gravitons are responsible for gravitation, when and by what process were they generated? Such questions are extremely important, because the estimated energy density of the graviton stream is about 1.5·10³³ J/m³, whereas the average energy density of substance in the universe is now only about 10⁻¹⁰ J/m³. Yet it is the latter energy density that theorists operate with in cosmological models, considering its evolution and paying no attention to a much larger possible quantity.
It is very debatable whether we can trust the cosmological conclusions of GTR under such conditions.

It is supposed that the theory of the Big Bang is confirmed by the discovery of redshift – the further from us galaxies are located, the greater the shift of their spectra in relation to the spectra of laboratory light sources. It would seem to be direct proof of the galaxies' recession, as a direct explanation of redshift may be derived from the Doppler effect for the radiation of receding sources. Then we should believe that close galaxies are moving away from us at a small speed, while the farthest galaxies have speeds close to the speed of light. And again we should try to penetrate into the essence of things before applying the results of laboratory experiments to great objects. It is easy to notice that the explanation of redshift through galaxies' recession from the point of view of classical STR stipulates a rather surprising phenomenon – eternal, immutable photons flying through the boundless open spaces of the cosmos and carrying to us information about the far past of the galaxies and stars that emitted them. But are objects that lose no energy in motion possible in reality? Most unlikely, and photons can be no exception here. Otherwise we should consider cosmological space an active medium, all the time fueling photons of various wavelengths with energy, which is equivalent to a new hypothesis demanding proof. If photons permanently lose energy in motion, an exponential law for the reduction of their energy and the increase of their wavelength is quite natural: E = E₀·exp(–r/s). The distance at which the energy of a photon decreases by a factor of e = 2.718 (e is the base of the natural logarithm) equals s = c/H = (3 – 6) Gpc, where c is the speed of light and H = (50 – 100) km/(s·Mpc) is the Hubble constant [1]. Apparently, the distance s is already close to the size of the Metagalaxy, that is, the object which alone we may now observe as the representative of the whole universe.
The idea that photons lose energy while traveling through cosmological space casts doubt on the reality of the galaxies' recession. This is also supported by the periodicity of redshift in different clusters, which may be explained by the periodicity of the spatial arrangement of galaxies in these clusters; by the observed exponential increase of redshift at large distances; and also by the small peculiar velocities of the remote clusters of galaxies relative to the background radiation [7].
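The e-folding distance s = c/H quoted above is easy to evaluate numerically. The following sketch (function names are my own) computes s for the two ends of the quoted range of the Hubble constant and the redshift implied by the exponential energy-loss law:

```python
import math

C_KM_S = 2.998e5          # speed of light, km/s

def attenuation_distance(H):
    """Distance s = c/H at which a photon's energy drops by a factor of e,
    under the exponential law E = E0 * exp(-r/s).  H in km/(s*Mpc),
    so the result is in Mpc."""
    return C_KM_S / H

def redshift(r_mpc, H):
    """Redshift 1 + z = lambda/lambda0 = exp(r/s) implied by the same law."""
    return math.exp(r_mpc / attenuation_distance(H)) - 1.0

# For the quoted range H = 50..100 km/(s*Mpc) this reproduces s = 3..6 Gpc:
for H in (50.0, 100.0):
    s_gpc = attenuation_distance(H) / 1000.0
    print(f"H = {H:5.1f} km/(s*Mpc)  ->  s = {s_gpc:.2f} Gpc")
```

At distances small compared with s the law reduces to the ordinary linear Hubble relation z ≈ r/s, so the two pictures are indistinguishable nearby.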

One more discovery – that of the isotropic microwave background radiation – is used in the theory of the Big Bang to substantiate the hot past of the universe. If the universe is expanding, the average temperature of its radiation is falling. Background radiation is then a relic preserved from the moment when radiation separated from the substance, which was heated to a temperature of about 4000 K. Careful measurements of the inhomogeneity of the background radiation might in this case throw light on many large-scale processes in the universe. In any case, the high degree of isotropy of the background radiation and the great density of its energy testify to the spatial universality of this kind of radiation and its important role in cosmogony.

In [1] an alternative cosmological theory is presented, which in contrast to the Big Bang does not demand an initial singular state. It is supposed that the Metagalaxy is not rapidly expanding together with its galaxies, but is rather in a state close to stationary, or with a slow change of its volume. It may even be that it is simply contracting under the action of self-gravitation, like many other objects known to us. Still, the Metagalaxy is only a small part of the universe. We are of the opinion that both photons and gravitons lose energy at large distances, so stretching GTR and its conclusions over the whole universe becomes wrong.

As was shown above, redshift can no longer serve as an unequivocal proof of the remote galaxies' recession and the expansion of the Metagalaxy. And what can be said then about the relic background radiation? Here it is necessary to consider the global evolution of the objects of the universe. Characteristic and mutually supplementing processes are: 1) Formation of particles and material bodies in the opposite processes of flocking together and crushing. 2) Formation of static fields attached to substance, and of traveling fields as radiation from particles and bodies. Substance and field are inseparably connected with each other, since all forces, including forces of inertia, arise under the action of fields. Fields are necessary for the flocking of substance into bodies, and the stability of bodies is ensured by the balance of gravitational and electromagnetic forces when the gradients of the fields have equal values. In turn, field particles – photons, gravitons, neutrinos – not only interact with substance, losing mass-energy in the process, but are also actively produced in numerous explosive processes and in the disintegration of excited states of particles.

So the evolution of the universe is a continuous circuit of formation of objects ranging from weightless gas clouds up to super-dense bodies with degenerate structure and quantum characteristics, and simultaneously a similar conveyor for the particles of fields, reproduced at all spatial levels. The fields create conditions for the occurrence of particles of substance and bodies and, on the contrary, multiple interactions of particles generate fields. Hence, to each level of substance structure there corresponds its own set of particle sizes and masses and of effectively acting fields. On the scale ladder of objects it is possible to single out levels where the quantities of fields reach maximum values. An example is the level of elementary particles with the very stable, almost eternal proton. Another example is the level of stars, where there are neutron stars as the most dense and consequently indestructible space bodies. In both examples degenerate objects with quantum properties exist, and the extremeness of the states of substance is accompanied by the extremeness of the corresponding fields, which allows comparing objects of different dimensions on the basis of the theory of similarity.

It would be quite logical to assume that gravitons are formed in great numbers in processes at a spatial level lower than the level of elementary particles, but that in spite of this they actively influence our much larger-scale world. At the time when nucleons were being formed everywhere in the Metagalaxy under the action of gravitation, background radiation corresponding to this process might have arisen. Here it is possible to give at least two possible mechanisms, described in [1]. According to the first of them, each nucleon at least once underwent the initial beta decay from neutron to proton with the emission of an antineutrino, having an approximately blackbody spectrum of energy (the same spectrum is characteristic of the relic background radiation). If we consider that relic radiation arose from these very antineutrinos, the following formula for the average density of substance of the Metagalaxy is obtained: ρ = E·Mu / ε = 9·10⁻²⁸ kg/m³, where E = 4.2·10⁻¹⁴ J/m³ is the energy density of the background radiation with its average temperature of 2.726 K, Mu = 1.66·10⁻²⁷ kg is the atomic mass unit, and ε = 480.89 keV = 7.7·10⁻¹⁴ J is the average energy of the antineutrino in the beta decay of a free neutron. The density of the substance of the Metagalaxy found from observation really appears to be very close to the value ρ. In the other mechanism the process of nucleon formation is considered similar to the formation of a neutron star, with the emission of enormous electromagnetic energy. The share of this energy in relation to the rest energy of the star is the same as the share of the energy of the background radiation per nucleon in space on average. From both mechanisms there follows a close connection between the formation of nucleons and the relic radiation, which does not demand an initial singularity.
Instead, the natural formation of nucleons from the substance of the surrounding medium is proposed, similar to the evolution of neutron stars from the stage of accumulation of gas clouds into big stars, through the burning out of these stars and the subsequent supernova explosion.
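The quoted density estimate is easy to verify arithmetically; a minimal sketch using only the values given in the text:

```python
# Consistency check of the estimate rho = E * Mu / eps quoted above,
# using the values given in the text (all in SI units).
E   = 4.2e-14    # J/m^3, energy density of background radiation at 2.726 K
Mu  = 1.66e-27   # kg, atomic mass unit
eps = 7.7e-14    # J, average antineutrino energy in free-neutron beta decay

rho = E * Mu / eps
print(f"rho = {rho:.2e} kg/m^3")   # close to the quoted 9e-28 kg/m^3
```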

In the theory of the Big Bang there are plenty of still unsettled problems, which are absent or can be easily explained in our model of evolution. For example, why is the distribution of substance approximately homogeneous and isotropic even at great distances? Indeed, very remote areas of space might not have been causally connected with each other throughout the supposed evolution because of the limited speed of interaction transfer. Another question concerns the "flat" character of space. Why is the average density of substance in the Metagalaxy so close to the so-called critical density that, in accordance with GTR, space differs little in its properties from flat Euclidean space? To make this possible, an unbelievably exact coincidence of the two is required even at the very beginning of the expansion from singularity. If the expansion really is taking place, the initial fluctuations and the rotation of the substance of future galaxies should decrease in its course. Backward extrapolation in time then leads to the problem of unusually great fluctuations and huge initial whirlwinds of vague nature. Finally, if the universe was once heated to very high temperatures and then cooled during the expansion with the formation of the first elementary particles, where have the antiparticles gone? In the case of annihilation of particles and antiparticles, substance could not have been formed at all. So again there is a problem, the offered solutions of which are as exotic as the theory of the Big Bang itself.

There are already many observational facts which are obviously not in line with the predictions of the theory of the Big Bang or contradict it. For example, the angular size of the most extended extragalactic radiation sources decreases with the growth of redshift precisely as should be expected in the case of a Euclidean universe [8]. The age of the oldest stars, globular clusters and galaxies is approximately identical and ranges from 17 to 26 billion years, which exceeds the time usually allotted for evolution in the theory of the Big Bang – up to 13 billion years. On the other hand, if the Metagalaxy simply collapsed under the action of self-gravitation, the time of free fall t = √(3π/(32γρ)) would be equal to 85 billion years (here γ is the gravitational constant and ρ is the average density of substance of the Metagalaxy). Apparently, this time will be quite sufficient for the formation of elementary particles from the environment (because of the increased speed of nuclear processes, time in the microcosm, understood as the stream of equivalent events, flows quickly), and also for the formation of stars and galaxies from the emerging hydrogen gas.
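The free-fall estimate can be sketched with the standard collapse-time formula t = √(3π/(32γρ)). The text does not state which density it used; the value of roughly 6·10⁻²⁸ kg/m³ that reproduces the quoted 85 billion years is my assumption, shown alongside the 9·10⁻²⁸ kg/m³ estimated earlier:

```python
import math

G    = 6.674e-11      # gravitational constant, m^3/(kg*s^2)
YEAR = 3.156e7        # seconds in a year

def free_fall_time(rho):
    """Standard gravitational free-fall (collapse) time, in seconds,
    for a medium of average density rho (kg/m^3)."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho))

# rho ~ 6e-28 kg/m^3 reproduces the quoted 85 billion years (my assumption);
# rho = 9e-28 kg/m^3 is the density estimated from the background radiation.
for rho in (6e-28, 9e-28):
    t_gyr = free_fall_time(rho) / YEAR / 1e9
    print(f"rho = {rho:.0e} kg/m^3  ->  t_ff = {t_gyr:.1f} billion years")
```

Either density gives a collapse time of many tens of billions of years, far longer than the ages quoted for the oldest objects.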

The high degree of isotropy of the background radiation testifies that in the past the Metagalaxy was much more homogeneous than now, so earlier it might have had a larger size and smaller inhomogeneity and clumping of substance. Thus the problem of the initial fluctuations required for the creation of galaxies is automatically solved. In contrast to expansion, in a collapse the initial rotation of substance is always smaller by virtue of the law of conservation of angular momentum, which is physically clear and does not demand special explanations.

The problem of the observed ratio of hydrogen, helium and heavy elements in space finds its solution too. Helium and the nuclei of heavier chemical elements might appear without any participation of the Big Bang. They may just as well be the result of the processing of substance in the initial neutron stars of the galaxies, formed from stars with masses of about 10 – 16 solar masses [9]. In the recent article [10], through the analysis of the distribution of supernovae with large redshift, not only the formula for the loss of energy by photons but also the picture of a flat universe is confirmed.

By virtue of all stated above we have every right to say that the theory of the Big Bang claims the status of the biggest myth in the history of physics. This theory has inspired so many problems and blind alleys of theoretical thought that the only cardinal way to overcome them is to get rid of the theory altogether.

Let's now consider quantum mechanics. Since in studying the phenomena of the microcosm we always deal with sets of particles and the corresponding interactions, in quantum mechanics it is accepted to describe not the real motion of a concrete particle, but the probabilities for this or that particle to be in a certain state of motion or to have a given energy. Probability amplitudes are called wave functions in quantum mechanics, by analogy with complex wave amplitudes in ordinary mechanics: if the square of the wave function is proportional to the probability of an event, the square of the complex amplitude gives the intensity of the resulting wave. Like complex amplitudes, wave functions obey the principle of superposition. The probabilistic approach in quantum mechanics is exhibited in the Heisenberg uncertainty relation for the uncertainties of canonically conjugate measurable physical variables, and in the Schrödinger equation for the wave function. In particular, it is believed that a particle may be found in any place of space where its wave function is different from zero.

Due to the universal probabilistic-wave approach quantum mechanics has achieved remarkable success – many properties and structures of gases and solid bodies were explained (thermal capacity, ferromagnetism, superfluidity, superconductivity, some features of metals, dielectrics and semiconductors, laws of radiation), as well as the structure of atoms and nuclides, the properties of nuclear particles, and nuclear reactions. At the same time quantum mechanics has proved unable to bring us to an understanding of its basic concepts, for example, the origin of the quantum of action in the form of the Planck constant, or the essence of the spin and charge of particles. Owing to the vague internal structure of the electron the stability of atoms remains unclear until now; in fact it is simply postulated on the basis of the Pauli exclusion principle for electrons, the uncertainty principle and the discreteness of atomic energy levels. The essence of the dualism of light – the simultaneous overlapping of its wave and corpuscular properties, the quantization of light not only at the point of its absorption or emission by microparticles but also in its propagation in space – demands further study. A weak point of quantum mechanics is also its principal refusal to solve problems connected with the description of particular motions inside a separate quantized action – its competence is limited only to operations with the variables describing the initial and final states of a system. A characteristic consequence of this is the principle of indistinguishability of identical particles before and after their interaction.

In quantum mechanics it is accepted that to each particle in a certain state a de Broglie wave may be assigned as a corresponding wave function. Diffraction experiments with photons, electrons, nucleons, atoms and molecules confirm the presence of wave properties in particles; however, this leaves the mechanism of the phenomenon in the shadow. One of the possible solutions of the problem of the wave-corpuscle dualism of particles is given in [1]. If we consider that in each microparticle internal electromagnetic oscillations caused by the action of external excitation are possible, their simple recalculation by means of the Lorentz transformation into the laboratory system results in the de Broglie wavelength emerging as the spatial separation between the peaks of maximum amplitude of these oscillations. The wavelength appears to be proportional to the Planck constant and inversely proportional to the speed of motion of the particle and the energy of excitation. In diffraction on crystals, liquids and gases the energy of excitation of the incident particles reaches its maximum due to electromagnetic interaction with the molecules of the substance, and the de Broglie wavelength becomes inversely proportional to the momentum of the particles.

The description of phenomena in terms of probabilities has allowed quantum mechanics to find the possible levels of energy in the atom and to establish their discreteness as a consequence of the spatial quantization of the states of electrons (the orbital quantum number l, connected with the modulus of the angular momentum; the magnetic quantum number m, reflecting the possible projections of the angular momentum onto a chosen direction; the principal quantum number n as the basic number specifying the levels of energy), and also as a consequence of the internal properties of particles (the quantum number J, connected with the spin of particles). The formalism of the theory is entirely directed towards the quantitative description of experiments and does not provide any opportunity for deep penetration into the living essence of atomic phenomena – practically no attempts are ever made at modeling such processes as the historical development of the components of the atom, its nucleus and electrons, and their actual interaction. An effort to attribute this problem to the competence of the model of the Big Bang now looks as fruitless as the model of the Big Bang itself.

Narrowing the problem of the stability of the atom to electromagnetic and centrifugal forces does not thoroughly clear up the situation, since the origin of electric charges themselves remains unclear. The same can be said about the mechanisms of the chemical bond between atoms. The listed questions are raised in [1], where a formula for the origin of the electric charge of microparticles due to the rotation of their own magnetic field is deduced. Under the assumption that the evolution of microparticles is similar to the evolution of planet-star systems, there is an analogy for the proton and the electron. The proton here is compared to the last stage of evolution of a massive star – a neutron star, whereas the remnants of its planetary system, which eventually lose their orbital moment and break up in the powerful gravitational field of the star, are the analogue of the electron. Not all the substance of the remaining planets falls onto the neutron star, as electromagnetic forces and orbital rotation prevent the fall of iron ferromagnetic particles from the nuclei of the planets. So the model of the formation of the neutron star, and of the development of the remnants of the planetary nuclei into a magnetized cloud around the star, helps to explain by analogy the observed electro-neutrality of substance, in which there is a proton for every electron. It is interesting that if we calculate, with the help of the theory of similarity, the distance at which planets should break up in the gravitational field of a neutron star (known as the Roche limit) and carry it over to the hydrogen atom, this distance will turn out to correspond precisely to the orbit with the first Bohr radius, where the electron is in the ground state! It is easy to check this by comparing the ratio of the Roche limit to the radius of a neutron star with the ratio of the Bohr radius to the radius of the proton.
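The claimed coincidence of the two ratios can be sketched numerically. The neutron-star parameters and the planet density below are my assumptions (a 1.4-solar-mass star of 12 km radius and a rocky-planet density of 3·10³ kg/m³), and the classical rigid-body Roche limit d = R·(2ρ_star/ρ_planet)^(1/3) is used; under these assumptions the two ratios indeed come out close:

```python
import math

# Hypothetical parameters (my assumptions, not stated in the text):
M_NS   = 1.4 * 1.989e30   # neutron star mass, kg
R_NS   = 1.2e4            # neutron star radius, m
RHO_PL = 3.0e3            # density of a rocky planet, kg/m^3

# Classical rigid-body Roche limit: d = R * (2 * rho_star / rho_planet)^(1/3),
# so the ratio d / R_NS depends only on the two densities.
rho_ns = M_NS / (4.0 / 3.0 * math.pi * R_NS**3)
roche_ratio = (2.0 * rho_ns / RHO_PL) ** (1.0 / 3.0)

# Atomic counterpart: first Bohr radius over the proton charge radius.
A_BOHR   = 5.292e-11      # m
R_PROTON = 8.41e-16       # m, proton charge radius ~0.84 fm
bohr_ratio = A_BOHR / R_PROTON

print(f"Roche limit / NS radius = {roche_ratio:.2e}")
print(f"Bohr radius / p radius  = {bohr_ratio:.2e}")
```

Both ratios come out on the order of 6·10⁴ for these parameter choices; with other plausible densities the agreement is only order-of-magnitude.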

Besides the proton and electron, there is still a great number of elementary, so-called subnuclear particles and antiparticles. All of them may be placed in appropriate classes: hadrons (mesons and baryons), leptons (the electron, muon, tau-lepton and their corresponding neutrinos), photons, intermediate vector bosons, and gravitons. It is supposed that hadrons participate in all kinds of interactions – strong, electromagnetic, weak and gravitational – whereas leptons do not participate in the strong interaction. Hadrons have the greatest variety of particles and may be stable, like the proton; quasi-stable, decaying rather slowly through electromagnetic and weak interactions; or unstable, with a small lifetime and decay through the strong interaction.

Quantization of properties and the discreteness of the states of particles with respect to their spin, charge, mass and internal structure make it possible to use the principles of symmetry and to unite particles in isotopic and unitary multiplets, and also in families along Regge trajectories. In view of this, the quark model is applied for the explanation of the whole variety of hadrons, according to which mesons consist of two quarks, and baryons of three quarks. Characteristic of quarks as a special sort of particles in the given model is that they can exist only inside hadrons, cannot be found beyond their limits in a free state, and have fractional electric and baryon charges. Besides, they have color charges of three types and interact with each other by means of gluons of eight types. Though the quark model has proved convenient for the classification of hadrons, it cannot answer a number of important questions. If we cannot discover free quarks and gluons, how did they appear inside hadrons at the point of their formation as particles? The point is that each particle must have appeared at some time and theoretically may at some time be destroyed. The universal property of elementary particles to be born and destroyed when interacting with other particles does not help in this situation – we cannot consider quarks, and the hadronic substance existing on their basis, eternal and only passing from one particle to another as separate complexes. The assumption that leptons cannot participate in the strong interaction is also strange, and it contradicts the experiments in which high-energy leptons collided and both hadrons and the strong interaction appropriate to them somehow arose. All things considered, the quark model seems as artificial a construction as the theory of the Big Bang.

Let's say again that the strong interaction is responsible for the stability of nuclei and the course of nuclear reactions, and also for the integrity and interaction of hadrons, and is short-range, with a characteristic distance of 10⁻¹⁵ – 10⁻¹⁴ m. It is possible to notice that the strong interaction for the same particle, for example a proton, has in the usual treatment a rather exotic form – inside the proton there should be the interaction of quarks and gluons, having the property of confinement, while outside, when the given proton is included in the structure of a nucleus, there is interaction with other nucleons, but now by means of the exchange of mesons, basically pions. Such an abrupt change of the type of strong interaction on crossing the border of the surface of a proton seems extremely surprising and implausible. And why then are stable cluster structures in nuclei, and the pair correlations of nucleons resulting in spectra of collective excitations, quite explicit?

The weak interaction, as is known, is responsible for all reactions with the participation of leptons, and is considered to have an even smaller radius of action than the strong interaction. Such measures of the intensity of the weak interaction as the reaction rate and the interaction cross-section increase quickly with the growth of the energy of the reaction. In the standard model of the electroweak interaction, the weak interaction is carried out by means of intermediate vector bosons, which should have a certain rest mass so that the interaction has short-range character. It is supposed that the bosons' gaining of mass is a consequence of "spontaneous symmetry breaking", whereas photons remain massless. The source of the symmetry breaking is found in a hypothetical isodoublet scalar field, specially designed by theorists, with self-action and a nonzero vacuum value. The quark structure of hadrons is accounted for in the weak interaction by the inclusion in the weak currents of interaction terms responsible for the transitions of quarks into each other.

Practically all modern theories trying to explain the internal structure of fields and elementary particles are formulated in terms of quantum theory and are essentially quantum. Thus they pay almost no attention to the limits of applicability of the Planck constant as the quantum of action, and do not ask whether essentially probabilistic and quantum approaches can give a full picture of the phenomena. At the same time, from the theory of similarity it follows that at each level of matter there is its own constant – a quantum of action of the basic carriers of mass – so, looking more closely into elementary particles, we should find there a qualitatively different substance in a degenerate state with a reduced constant of action. The task of the quark and electroweak models includes the description of the distinctions of elementary particles from each other and the prediction of the results of their interactions; however, these models cannot give a true presentation of the structure of particles and their real interaction, or of the interrelation between various types of interactions (let us recall how strongly the formal theories of Newtonian gravitation and GTR differ from the intrinsic concept of gravitons). The probabilistic approach of quantum field theory and quantum chromodynamics appears incomplete – justified in quantum electrodynamics, this method encountered the insoluble problem of divergences in the perturbation theory practiced in the analysis of the strong interaction.

Are there other ways of penetrating into the structure of the subnuclear particles, different from the quantum approach and not giving such formal results? As shown in [1], we have the right to apply the methods of macroscopic physics even to such degenerate objects with quantum properties as elementary particles. Proceeding from the analogy with a neutron star, the gravitational energy of which is calculated precisely enough with the help of GTR, the concept of nuclear gravitation is introduced for the proton and other elementary particles, responsible for their integrity. The difference between nuclear gravitation and the usual one consists only in the replacement of the value of the gravitational constant. As a result it becomes possible to find the relation between the radius of the proton and its gravitational energy (when the binding energy is equal to the rest energy). The force of nuclear gravitation also appears to be equal to the force of Coulomb attraction between a proton and an electron in the hydrogen atom. A new result is the model of the deuteron, in which the stability of two interacting nucleons is provided by the action of two oppositely directed forces – the force of nuclear gravitation, pulling the nucleons together, and the force of magnetic repulsion, arising because of the presence of magnetic moments in nucleons, their rotation, and the prospective superconducting state of the substance of nucleons, similar to that of neutron stars. When nucleons approach each other, the magnetic forces of repulsion in the deuteron quickly grow, as follows from experiments on the scattering of particles on nucleons. In the given model the strong interaction between elementary particles is not a special kind of interaction, but a result of the action between the particles of nuclear-gravitational and electromagnetic forces and the forces of inertia from rotation.
At the same distance the total force depends both on the masses of the particles, and on the mutual orientation and magnitudes of the magnetic moments, orbital moments and spins of the particles. The association of nucleons in a nucleus results both in pair correlations, and in the observable collective excitation of the nucleons most closely linked with each other.
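The stated equality of the nuclear-gravitational and Coulomb forces in the hydrogen atom fixes the replaced gravitational constant, since both forces scale as 1/r² and the radius cancels. A sketch with standard CODATA-level constants (the numerical result is my computation, not quoted in the text):

```python
import math

# If Gamma * m_p * m_e / r^2 = e^2 / (4*pi*eps0*r^2), the implied
# "nuclear" gravitational constant Gamma is independent of r:
E_CHARGE = 1.602e-19     # C, elementary charge
EPS0     = 8.854e-12     # F/m, vacuum permittivity
M_P      = 1.6726e-27    # kg, proton mass
M_E      = 9.109e-31     # kg, electron mass

gamma_nuclear = E_CHARGE**2 / (4.0 * math.pi * EPS0 * M_P * M_E)
print(f"nuclear gravitational constant ~ {gamma_nuclear:.2e} m^3/(kg*s^2)")
```

The result is roughly 10³⁹ times the ordinary gravitational constant, which illustrates the scale of the replacement the model proposes.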

We see that the use of the concept of nuclear gravitation enables us to describe several phenomena at once – not only the integrity of elementary particles, but also the stability of atoms and the bonds between interacting elementary particles. In the given model the annihilation of particles and antiparticles can be presented as follows: owing to the opposite direction of the magnetic moments of particles and antiparticles, the magnetic force obviously cannot compensate the force of nuclear gravitation. This results in the collision of the interacting particles and their subsequent destruction, while the rotation energy of the particles and part of the binding energy transforms into electromagnetic quanta. For example, the annihilation of a proton and an antiproton may produce gamma-ray quanta, while the rest of the substance usually transforms into pions.

If the electromagnetic field is the first historical example of a physical Lorentz-invariant field, the gravitational field should be considered (as we have shown above in the description of GTR) the second physical field of this kind. In the concept of nuclear gravitation the strong interaction turns out to be a result of the action of electromagnetism and nuclear gravitation. And what can be said about the weak interaction? The theory of similarity establishes the following analogies between objects: hadrons correspond to neutron stars of different masses, having respective spins and magnetic moments, with the stars in various energy states; muons correspond in mass to white dwarfs, and electrons to degenerate magnetized objects such as planets breaking into clouds near neutron stars. Photons and neutrinos in this picture may correspond to flash-explosive emanations of large portions of electromagnetic energy and to directed streams of accelerated substance from space objects. The processes well known in nuclear physics of the disintegration of the pion into a muon and a muonic neutrino, and the further disintegration of the muon into an electron and neutrinos (electronic and muonic), have parallels in the world of stars: a neutron star of 0.2 solar masses (the analogue of the pion) is unstable and first breaks into a white dwarf of 0.16 solar masses, and then into an even less massive and dense object. The heaviest hadrons – resonances of the Υ type – correspond to very hot massive neutron stars of 14 – 15 solar masses with a very short lifetime and subsequent disintegration. They might be formed in catastrophic collisions of neutron stars. Proceeding from the analogy with space objects, the weak interaction in the disintegration of elementary particles is equivalent to the occurrence of instability of the substance which elementary particles contain, and to an objective transition to a new equilibrium state.
In particular, the processing of substance inside main-sequence stars by thermonuclear reactions naturally results in the formation of white dwarfs, so the similar process in the world of elementary particles would correspond to reactions of weak interaction. Another example is the growth of a white dwarf's mass beyond the limiting value and its transformation into a neutron star. Such transformations of substance may last a long time, which accounts for the slowness of weak-interaction processes. It appears then that reactions of weak interaction take place not by virtue of some special force, but result from the same electromagnetic and gravitational forces acting simultaneously, both at the scale level of elementary particles and at the deeper scale level of the substance making up these particles. We see that such an approach suggests a way to connect all four known types of "fundamental" interactions, and this solution differs fundamentally from other programs of "grand unification", which as a rule rely on gauge symmetry. The principle of energy-momentum equivalence of interactions of any type and origin, as potential sources of the change of the metric of space-time in GTR, also becomes clearer and acquires a theoretical basis.

Summarizing all of the above, we can draw the following conclusion:

The existing paradigm of physical knowledge is obsolete and faces inevitable replacement through the transition to substantive theoretical models of a deeper level.

Irreplaceable in this process of understanding is the theory of similarity, based on succession and the philosophical law of double negation. The theory of similarity allows comparing the phenomena and laws of the micro- and macrocosm with the help of SPФ-symmetry transformations, where the operation S designates the transformation of speeds, P corresponds to the transformation of sizes, and Ф to the transformation of masses [1]. As it turns out, in the transition from one scale level of matter to another the corresponding SPФ-symmetry transformation leaves the equations of motion of bodies invariant.
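A toy check of the kind of invariance asserted here (the coefficient values below are arbitrary illustrations, not Fedosin's actual similarity coefficients): Newton's circular-orbit relation v² = GM/r keeps its form when masses are rescaled by Ф and sizes by P, provided the speed coefficient satisfies S = √(Ф/P).

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def orbital_speed(mass, radius):
    """Circular-orbit speed from v^2 = G*M/r."""
    return math.sqrt(G * mass / radius)

# A macro-level system (roughly the Sun mass and 1 AU).
M, r = 1.989e30, 1.496e11
v = orbital_speed(M, r)

# Rescale masses by Ф and sizes by P; for the equation of motion to keep
# its form, the speed transformation is forced to be S = sqrt(Ф/P).
PHI, P = 1e-57, 1e-19        # arbitrary demonstration coefficients
S = math.sqrt(PHI / P)

v_scaled = orbital_speed(PHI * M, P * r)
assert math.isclose(v_scaled, S * v)  # form-invariance holds
print("scaled orbit obeys the same equation of motion")
```

The assertion passes for any positive Ф and P, showing that this simple equation of motion is form-invariant under the combined rescaling, which is the structural point the SPФ-symmetry makes at the level of full similarity theory.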

On the whole it may be noticed that quite a few modern theories only describe phenomena without addressing the problem of modeling the working mechanisms behind them; moreover, they do not consider the genesis of these mechanisms in their development. Thus we cannot proceed to enriching our knowledge by explaining the essence and the content, not just the form and the phenomenon. The advanced branches of fundamental science should be oriented in this direction. Instead, we observe obviously insufficient and isolated attempts by individual scientists. Moreover, many of them are not given due respect; they are considered odd fellows who refuse to be satisfied with the existing state of affairs. The editorial boards of scientific journals are frequently not prepared for the rather difficult work with authors of problematic articles on deep fundamental questions, and consequently prefer to duplicate already approved works by well-known authorities, which present no special novelty. As a result, some outdated theories and uncritically applied principles of science continue to reproduce new adherents, deliberately or not sending them around the same circles. So what awaits us tomorrow: a repetition of the past or a breakthrough into the unknown?

 

References

1. Fedosin S.G. FIZIKA I FILOSOFIYA PODOBIYA OT PREONOV DO METAGALAKTIK. (Physics and Philosophy of Similarity from Preons up to Metagalaxies). // Perm, Style-Mg, 1999. 544 p. (In Russian).

2. Fedosin S.G. SOVREMENNYE PROBLEMY FIZIKI. V POISKAKH NOVYKH PRINTSIPOV. (Contemporary Issues of Physics. In Search for the New Principles). // M.: Editorial URSS, 2002. 192 p. (In Russian).

3. Logunov A.A., Mestvirishvili M.A. Bases of the Relativistic Theory of Gravitation. – M.:  Moscow State University, 1986. (In Russian).

4. Logunov A.A. Lectures on the Theory of Relativity and Gravitation: Contemporary Analysis of Problems. – M.: Nauka, 1987. (In Russian).

5. Fedosin S.G. and Kim A.S. The Moment of Momentum and the Proton Radius // Russ. Phys. J. (Izvestiya Vysshikh Uchebnykh Zavedenii, Fizika), Vol. 45, 2002, pp. 534–538.

6. Kopeikin S.M. and Fomalont E.B. Aberration and the Speed of Gravity in the Jovian Deflection Experiment. – arXiv: astro-ph/0311063 v1, 4 Nov. 2003.

7. Zel'dovich J.B., Sjunjaev R.A. Metagalactic Gas in Congestions of Galaxies, Microwave Background Radiation and Cosmology. – In “Astrophysics and Space Physics”. – M.: Nauka, 1982. (In Russian).

8. Miley G.K. // Monthly Notices of the Royal Astron. Society, 1971, V. 152, P. 477.

9. Rivs G. – in “Protostars and Planets”. Part 2. – M.: Mir, 1982. (In Russian).

10. Khaidarov K. VECHNAYA VSELENNAYA. (The Eternal Universe). // On the site www.n-t.org . (In Russian).

11. Fedosin S.G. OSNOVY SINKRETIKI. FILOSOFIYA NOSITELEJ. (Fundamentals of Syncretics. Philosophy of Carriers). M.: Editorial URSS, 2003. 464 p. (In Russian).

 

Source: http://sergf.ru/psfen.htm
