An analysis of the $H_{0}$ tension problem in a universe with a viscous dark fluid

In this paper, two inhomogeneous single-fluid models for the Universe, which are able to naturally solve the $H_{0}$ tension problem, are discussed. The analysis is based on a Bayesian Machine Learning approach that uses a generative process. The adopted method allows one to constrain the free parameters of each model by using the model itself only. The observable is taken to be the Hubble parameter, obtained from the generative process. Taking full advantage of our method, the models are constrained for two redshift ranges. Namely, this is first done with mock $H(z)$ data over $z\in [0,2.5]$, thus covering known $H(z)$ observational data, which are most helpful to validate the fit results. Then, aiming at the redshift ranges to be covered by the most recent ongoing and future planned missions, the models are also constrained for the range $z\in[0,5]$. Full validation of the results for this extended redshift range will have to wait for the near future, when higher-redshift $H(z)$ data become available. This makes our models fully falsifiable. Finally, our second model is able to explain the value of $H(z)$ at $z=2.34$ reported by BOSS.


I. INTRODUCTION
The discovery of the accelerated expansion of our Universe is one of the greatest events of our epoch [1] - [10]. It requires the existence of some kind of dark energy, to be added to our working models of the Universe. However, over twenty years have elapsed and the nature of dark energy is still a mystery, although we hope that this could change at any point, owing to the high-quality observational data expected from future collaborations and the corresponding space missions [11] - [14] (to mention a few). On the other hand, the available observational data already provide some hints, partially shedding light on this dark energy problem. Nowadays, from the observational data alone (in a model-independent way), it is possible to decide what are the directions to look for, and the possible candidates for dark energy to be introduced, in order to solve the problems of the accelerating Universe (for dark energy models with fluids, see [15] - [29] and references therein). The most interesting aspect of all these tools to be mentioned here is that different approaches give almost the same result, and it is practically impossible to take a final decision discriminating among them. There are different reasons why the dark energy problem cannot yet be solved, one of the main ones being the presently existing tension between different observational data sets. This casts reasonable doubts, leading one to believe that the models at hand have real problems, and that these problems can only be fixed with new observational data covering extended redshift ranges. Indeed, it seems correct to say that only the data from higher-redshift observations can solve the problems at hand, because we already have qualitatively good data covering the low-redshift behavior of our Universe.
The mentioned tension problem seems to come from these low-redshift data; a significant part of the research seems to have neglected this fact and, at the same time, assumed that the error distribution is Gaussian. Removing one or more of these assumptions may lead to different outcomes. However, the existing tools commonly used in model analysis are either too insensitive, or simply cannot be used, when one of the assumptions is changed or removed. On the other hand, at this moment it is even impossible to predict whether the new data will resolve the tension between the data sets, or whether the existing tension will be masked while new, more complicated tasks to be solved appear. Anyway, let us for a while forget about the tension issue and briefly summarize what we know about dark energy and the accelerated expansion problems.
We know that if General Relativity is the correct theory of gravity to describe the background dynamics of our late-time Universe, then we need some energy source able to accelerate its expansion. On the other hand, we know that it should be an energy source behaving in such a way that it will not destroy the observable Universe, full of well-studied, different objects. Moreover, if we take the cosmological constant model as the dark energy model, then it soon becomes clear that we must be ready to perform modifications, because it suffers from very serious problems. Soon after the discovery of the cosmological constant problems, it was natural to assume that dark energy has a dynamical nature, and various interacting dynamical dark energy models were introduced and developed. Nowadays, intensive work in this direction is taking place, and there is hope that the new problems may eventually be solved by some interacting dynamical dark energy model. However, we should mention another interesting problem, related to the late-time Universe, which can be an efficient way of revealing the nature of dark energy. In the recent literature there is an intensive discussion about it; it is known as the $H_0$ tension issue. It arose from the Planck and Hubble Space Telescope result reports, when the community noticed that, on top of the other problems of modern cosmology, there is also a tension between the calculated values of $H_0$ obtained from the two observational data sets. The problem is due to a reported huge difference between the observed $H_0$ values coming from the two mentioned projects. This could have two reasons, indicating either new physics or a problem with the measurements. For the moment, it is very hard to estimate what to expect; however, one thing seems clear: if these results indicate new physics, then we have a very good starting point to clarify, constrain and select good extensions and modifications of General Relativity.
A recent discussion on the problem drew up the borders separating good models from bad ones. However, it is too early to reach a final conclusion, since the measurement-related issues and their strong impact on the H 0 tension problem have not been excluded completely, as of now. It has been shown that some of the interacting dynamical models existing on the market can be used to solve the issue. On the other hand, a modification of General Relativity can also propose solutions [30] - [44] (to mention some references). In this paper we discuss another approach to address this issue.
Specifically, the main goal of our study is to address the $H_0$ tension problem as a good chance to construct new cosmological models with viscous/inhomogeneous fluids [45] - [55]. Indeed, there is nowadays a huge amount of work in the scientific literature indicating ways in which the $H_0$ tension problem can be solved, or at least alleviated. However, to our knowledge, there is no work connecting the solution of the $H_0$ tension problem with inhomogeneous single-fluid Universe models. It should be mentioned that a previous study shows that cosmic fluid viscosity/inhomogeneity could be very useful not only for modeling the accelerated expanding Universe, but also for modeling cosmic inflation. Therefore, it is natural to expect that viscosity and inhomogeneity of the cosmic fluid can be useful in this case, too. However, an early study with existing models has shown that we needed to construct a new one, since the old models could not solve the $H_0$ tension issue. In this paper we will prove that, by using Bayesian Learning and probabilistic programming tools, we are able to construct new models of an inhomogeneous single-fluid Universe where the $H_0$ tension problem disappears. In particular, we will present two new models able to explain both the accelerated expansion of the late-time Universe and the transition to this phase. As we have mentioned, our study is based on a Bayesian Machine Learning approach, which actually does not require real observational data for the analysis. The method employs a model based generative process, allowing one to constrain the free parameters of the model. In our case, the observable used in the Bayesian Learning process is taken to be the Hubble parameter and, using the advantages of the method, we manage to constrain each of the models for two redshift ranges. Namely, first we constrain them with generative-process based $H(z)$ data over $z \in [0, 2.5]$, thus covering well-known $H(z)$ observations.
Those will be helpful to validate our fit results. On the other hand, taking into account the redshift ranges to be covered by ongoing and future planned missions, we constrain the models for z ∈ [0, 5]. The validation of our results for the second redshift range will be done in the future with to-be-measured higher redshift H(z) data.
Before ending this section, we recall the standard notation of FLRW cosmology (with $8\pi G = c = 1$). In particular, the metric in this case has the form
$$ds^{2} = -dt^{2} + a^{2}(t)\left( dx^{2} + dy^{2} + dz^{2} \right), \qquad (1)$$
and the corresponding Friedmann equation reads
$$\rho = 3H^{2}. \qquad (2)$$
On the other hand, the energy conservation law can be expressed as
$$\dot{\rho} + 3H\left( \rho + P \right) = 0. \qquad (3)$$
In the above three equations, $H$, $P$ and $\rho$ are the Hubble parameter, the pressure and the energy density of the inhomogeneous fluid, respectively. The paper is organized as follows. The philosophy behind the method used here is discussed in Sect. II. In Sect. III, we present the models and discuss the constraints obtained, followed by an analysis of the consequences, interesting for the $H_0$ tension and for the accelerated expanding Universe issues. The model based generative process employed in the Bayesian Learning approach is built from Eqs. (2) and (3), by assuming new forms of $P = P(\rho)$ to describe the inhomogeneous single-fluid Universe. The model based generative process and the analysis leading to our final results have been performed by using PyMC3. The final conclusions of our analysis are given in Sect. IV. Additionally, there is an Appendix with some details of the method.
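Since the generative process works with the Hubble parameter as a function of redshift, it is useful to record explicitly the standard change of variable implied by $a(t) = 1/(1+z)$, which is used repeatedly below:

```latex
\frac{d}{dt} \;=\; \frac{dz}{dt}\,\frac{d}{dz} \;=\; -(1+z)H\,\frac{d}{dz},
\qquad\text{so that}\qquad
\dot{\rho} \;=\; -(1+z)\,H\,\rho' ,
```

and, together with Eq. (2), one also has $\rho' = 6HH'$, where the prime denotes the derivative with respect to $z$.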

II. BASIC IDEAS BEHIND BAYESIAN LEARNING
An important question nowadays is to understand how efficiently one can deal with the data of unprecedented scale and resolution expected from future missions and collaborations. However, the scale and resolution of the expected data should first be specified, in order to be sure that a new algorithm is sufficient for the task. But since this is a generic computing and data science problem, we leave any further discussion on this point to the specialists in this field, and will just address the issues which, according to some estimations, fall into the cosmology and astrophysics domains. It is well known that mathematical models are very useful in understanding nature, and that conveniently adjusting their free parameters is very important and should be done in a very efficient way. In particular, this should be done also in cases where we have only a few observational data points, or where the quality of the observations is not good, causing huge errors. In other words, the mathematical model at hand should first of all be made consistent with the observations by adjusting the free parameters, and this should be done efficiently. At the same time, in practice this must often be done for observational data sets without relying too much on the quality of the data. Several years ago, this was a really challenging task for computer science and, consequently, imposed severe restrictions on different fields, among them cosmology and astrophysics [56] - [62]. However, nowadays, thanks to significant developments in different scientific fields, the mentioned difficulties have been alleviated to some extent. We already mentioned that the adjustment of the model free parameters is an important step in any study. On the other hand, a comprehensive analysis of the model also includes the estimation of the errors of these parameters, for which the mathematical model still shows behavior consistent with the observational data.
Moreover, if we have a well-studied model, it can be useful and extremely informative for designing next-generation experiments and observations. The increasing amount of research in this direction seems to indicate that machine learning algorithms can indeed be used efficiently to perform the model fitting and to overcome observational data issues, such as the ones mentioned above. Before describing the basic ideas behind the approach used in this paper to study the cosmological models, let us briefly present the typical machine learning procedures used to study cosmology and astrophysics, as discussed in the recent literature. In this discussion, we will not describe any specific procedure used to extract observational data, nor introduce a specific neural network used in the model analysis; we wish to keep the level as simple as possible. We would just like to mention that, in general, one follows these three steps:

• First of all, we should define the model. Usually, in this step we get or define a set of equations/rules controlling the model behavior. In other words, the equations link the model parameters together, according to some rule allowing one to study the behavior of the model in different regimes.
• Usually, after this, we need to choose a set of data to find the free parameters. In physics, in nearly all cases the data are obtained either from some experiment or observation, i.e., we use data related to real physical processes. Another interesting situation to be mentioned is the case when the data are simulated but, at the moment, cannot be validated, because the related experiment/observation setup is not yet operating or is still under construction; such cases, connected with future missions, are intensively discussed in the recent literature.
• Finally, when the first two steps have been completed, we run a learning algorithm; in other words, we use the data to determine the values of the unknown model parameters. What does it mean to run a learning algorithm? This is the most interesting step to be clarified: we present the data in terms of input and output pairs and then run some optimization algorithm to get a final set of weights; i.e., we train the network, which allows us to determine the free parameters.
Now, having defined the basic steps behind the machine learning studies intensively used in the literature, let us see what is behind the approach used in this paper. Our approach is known as Bayesian Learning and has been implemented using PyMC3 probabilistic programming, a python-based framework which, in practice, has proven to be very fast, useful, and easy to integrate with other python-based frameworks [63]. Indeed, a lot of effort has been put into the PyMC3 project before, so that we can mostly concentrate our attention here on the physics behind the problem under study. In order to better understand what is behind Bayesian Learning, we need to modify and adapt the three steps above to this specific situation; as a consequence, we now have:

• We need to define the model to be used to provide a so-called generative process. It is needed to generate the data, and this means that, by defining the model, we define a sequence of steps describing how the data were created. It is clear that the generative process includes the unknown model parameters and uses explicit values of these parameters. Therefore, the crucial aspect here is the incorporation of our prior beliefs about the unknown parameters. Eventually, we need only prior information to get the posterior and to constrain the parameters; since we use learning algorithms, we do not need to evaluate the likelihood anymore, as in the case of the $\chi^{2}$ analysis widely used up to now to constrain cosmological models.
• The crucial step happens at this stage: we take as data the data obtained from the generative process.
• Eventually, after running the learning algorithm, we update our belief about the parameters and get a brand new distribution over these parameters.
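The three steps above can be illustrated with a minimal, self-contained sketch of a generative process followed by a Bayesian belief update. Every number here (prior width, noise level, the "true" value of $H_0$) is a hypothetical toy choice, not the setup actually used in this paper, which relies on PyMC3 and the $H(z)$ observable:

```python
import random
import statistics

random.seed(1)

# Prior belief about H0 (values purely illustrative, not the paper's fit):
mu0, tau0 = 70.0, 5.0        # prior mean and prior standard deviation
sigma = 2.0                  # assumed known noise of each mock measurement

# Generative process: draw mock H0 "observations" around a chosen true value.
true_H0 = 73.0
data = [random.gauss(true_H0, sigma) for _ in range(50)]

# Conjugate normal-normal update: posterior belief about H0 given the mock data.
n = len(data)
xbar = statistics.fmean(data)
post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
post_mu = post_var * (mu0 / tau0**2 + n * xbar / sigma**2)

print(f"posterior mean = {post_mu:.2f}, posterior sd = {post_var**0.5:.3f}")
```

The posterior is far tighter than the prior and is pulled toward the sample mean of the generated data, which is exactly the "brand new distribution over the parameters" referred to in the last step.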
It should be mentioned that Likelihood-Free inference methods allow us to perform Bayesian Inference using forward simulations only, with no reference to a likelihood function. This is particularly appealing for cosmological data analysis problems, where complex physical processes and instrumental effects can often be simulated, but incorporating them into a likelihood function and solving the inverse inference problem is much harder. Likelihood-Free methods generically require large data sets to be compressed down to a small number of summary statistics, in order to be scalable. On the other hand, Approximate Bayesian Computation (ABC) approaches to Likelihood-Free inference draw parameters from the prior and forward simulate mock data, accepting points whenever the simulated data fall inside some small ball around the observed data. Together, massive data compression and density-estimation Likelihood-Free Inference provide a framework for performing scalable Likelihood-Free inference from large cosmological data sets, even when forward simulations are computationally expensive. This opens the door to a new paradigm for principled, simulation-based Bayesian inference in cosmology and astrophysics, among other relevant research fields.
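A toy rejection-ABC sampler, in the spirit just described, might look as follows. The Gaussian toy model, the single summary statistic, and the tolerance ε are all illustrative assumptions and not the procedure actually employed in this work:

```python
import random

random.seed(0)

# Toy ABC rejection sampling: infer the mean of a Gaussian without ever
# writing down a likelihood.  All numbers are illustrative.
def simulate(theta, n=40, noise=2.0):
    """Forward simulation: mock data generated from parameter theta."""
    return [random.gauss(theta, noise) for _ in range(n)]

def summary(data):
    """Compress the data set down to a single summary statistic (the mean)."""
    return sum(data) / len(data)

observed = simulate(71.0)          # pretend these are the real observations
s_obs = summary(observed)

accepted = []
epsilon = 0.5                      # size of the "small ball" around the data
while len(accepted) < 200:
    theta = random.uniform(60.0, 80.0)                 # draw from a flat prior
    if abs(summary(simulate(theta)) - s_obs) < epsilon:  # accept inside the ball
        accepted.append(theta)

post_mean = sum(accepted) / len(accepted)
print(f"ABC posterior mean = {post_mean:.1f}")
```

The accepted draws approximate the posterior; shrinking ε tightens the approximation at the cost of more forward simulations, which is precisely why data compression matters for expensive cosmological simulators.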
PyMC3 is one of the python-based frameworks providing a comprehensive set of pre-defined statistical distributions that can be used as model building blocks [63]. It uses Theano, a deep learning python-based library, to construct probability distributions and then access the gradient, in order to implement cutting edge inference algorithms. Of course, there are other useful frameworks as well, but we shall here concentrate our attention on PyMC3, because it allows one to write down models using an intuitive syntax to describe the data generating process. It allows one to fit the model using gradient-based MCMC algorithms for fast approximate inference, or to use Gaussian processes in order to build Bayesian nonparametric models. Indeed, PyMC3 provides all the necessary tools for the analysis, allowing us to concentrate our attention on the real problem only.
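The kind of posterior sampling that PyMC3 automates can be illustrated with a bare-bones, gradient-free Metropolis sampler; PyMC3 itself relies on more efficient gradient-based algorithms such as NUTS, and the data values and noise level below are made up purely for illustration:

```python
import math
import random

random.seed(2)

# Minimal Metropolis sampler for the posterior of a single parameter h,
# as a stand-in for what probabilistic-programming frameworks automate.
data = [71.2, 73.5, 72.1, 70.8, 72.9, 73.2, 71.7]   # mock H0 measurements
sigma = 1.5                                          # assumed noise level

def log_post(h):
    # Flat prior on h in (50, 90) plus a Gaussian log-likelihood.
    if not 50.0 < h < 90.0:
        return -math.inf
    return -sum((x - h) ** 2 for x in data) / (2 * sigma**2)

h, samples = 70.0, []
for step in range(20000):
    prop = h + random.gauss(0.0, 0.8)               # random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(h):
        h = prop                                    # accept the move
    samples.append(h)

burned = samples[5000:]                              # discard burn-in
print(f"posterior mean = {sum(burned) / len(burned):.1f}")
```

The chain concentrates around the sample mean of the mock data; in PyMC3 the same logic is expressed declaratively (priors plus a likelihood node) and the sampling loop is handled by the framework.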
The purpose of this section was to provide the general philosophy behind the method used in this paper to study the $H_0$ tension problem, within the scheme of an inhomogeneous single-fluid Universe. We have omitted any specific discussion of the mathematical framework behind deep learning algorithms and Bayesian Learning, since these can all be found in the standard PyMC3 tutorials, which are endowed with numerous excellent examples demonstrating how the above ideas can actually be implemented within a practical problem.

[Table I. Best-fit results for the models of Eqs. (5) and (7), for $z \in [0, 2.5]$ and $z \in [0, 5]$, respectively. The results have been obtained from a Bayesian Learning approach, where the generative based process has been organized using Eq. (2) and Eq. (3), assuming new forms of $P = P(\rho)$ to describe the inhomogeneous single-fluid Universe.]

III. TWO SINGLE-FLUID MODELS AND THEIR CORRESPONDING RESULTS
In this section we present our models of an inhomogeneous, single-fluid Universe, and discuss the results obtained. In order to simplify the presentation, we have organized it into two subsections. To summarize the fit results and make them easily readable, for both models we collect the corresponding constraints in Table I, for $z \in [0, 2.5]$ and for $z \in [0, 5]$, respectively.

A. Model 1
The first example of the inhomogeneous single-fluid Universe we have studied is described by the following EoS,
$$P = \left( \omega_{0} + \frac{\omega_{1}}{1+z} \right)\rho - AH^{n}, \qquad (5)$$
where $\omega_{0}$, $\omega_{1}$ and $n$ are the free parameters of the model, while $P$ and $\rho$ are the pressure and the energy density of the inhomogeneous fluid, respectively. In order to organize the generative process based Bayesian Learning, we need to combine Eqs. (2) and (3) with Eq. (5), and take into account that $\dot{\rho} = -(1+z)H\rho'$. In particular, from Eq. (2) we have that $\rho = 3H^{2}$, therefore $\dot{\rho} = 6H\dot{H}$, which together with Eq. (3) and after some algebra will eventually yield the following differential equation:
$$2(1+z)HH' = 3H^{2}\left( 1 + \omega_{0} + \frac{\omega_{1}}{1+z} \right) - AH^{n}, \qquad (6)$$
where the prime denotes the redshift derivative of the function. The last equation describes the observable associated with our model, and it is precisely what has been used in the generative process. The performed study puts constraints on the model free parameters and yields the results described below. In particular, first of all, we observe that the Bayesian Learning approach puts very tight constraints on the parameters, as can be seen from Table I. Moreover, in order to validate the results obtained from the Bayesian Learning approach, we compare the behavior of the Hubble parameter $H(z)$ obtained from Eq. (6) with the available observational $H(z)$ data. The result of this comparison for the best-fit values of the model parameters is given in Fig. (2) (left-hand side plot). From this, we see that our model can indeed explain the low-redshift $H(z)$ observations very well but, most likely, some tension could arise with the high-redshift observations; in fact, this concerns only the data corresponding to $z \in (2, 2.4]$. The graphical behavior of the deceleration parameter and of the equation of state parameter of the fluid, Eq. (5), can be found in Fig. (3) (top panel). In all cases, only the best-fit values of the model parameters have been taken into account.
Moreover, the purple curve represents the case when $z \in [0, 2.5]$, while the dashed red curve represents the case when $z \in [0, 5]$. It should be mentioned that the model can explain the accelerated expansion and the transition to this phase. Moreover, we can see that our fluid model, Eq. (5), during its evolution will naturally evolve from a fluid with $P > 0$ to one with $P < 0$; i.e., the dynamical nature of the cosmic fluid causes the emergence of dark energy, which is responsible for the late-time accelerated expansion of the Universe.
Before discussing how the future measurements of the expansion rate over $z \in [0, 5]$ will affect the fit results, let us see what can be concluded about the $H_0$ tension problem. Indeed, we observe that the model can efficiently solve this problem, and the estimations of the $P/\rho$ EoS parameter and of the deceleration parameter $q$ at $z = 0$ indicate that the suggested model is a viable cosmological model. On the other hand, the constraints obtained for $z \in [0, 5]$ indicate that the $H_0$ value will, most likely, not be affected; however, the means of the other parameters may indeed be affected, and significantly so. This, in turn, may crucially affect the values of $P/\rho$ and $q$ at $z = 0$ and, subsequently, the transition redshift.
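As an illustration of how Eq. (6) enters the generative process, the background equation can be integrated numerically for $H(z)$ with a standard RK4 stepper. The parameter values below are arbitrary placeholders, not the best-fit values of Table I, and the normalization of $A$ is purely illustrative:

```python
# Numerically integrating the Model 1 background equation, Eq. (6),
#   2(1+z) H H' = 3 H^2 (1 + w0 + w1/(1+z)) - A * H^n,
# with a simple fourth-order Runge-Kutta stepper.
w0, w1, n, A = -1.0, 0.1, 0.2, 1.0   # illustrative placeholder parameters
H0 = 70.0                            # km/s/Mpc, illustrative

def dH_dz(z, H):
    rhs = 3.0 * H**2 * (1.0 + w0 + w1 / (1.0 + z)) - A * H**n
    return rhs / (2.0 * (1.0 + z) * H)

def rk4(f, z, H, dz):
    k1 = f(z, H)
    k2 = f(z + dz / 2, H + dz * k1 / 2)
    k3 = f(z + dz / 2, H + dz * k2 / 2)
    k4 = f(z + dz, H + dz * k3)
    return H + dz * (k1 + 2 * k2 + 2 * k3 + k4) / 6

z, H, dz, track = 0.0, H0, 0.01, {}
while z < 2.5 - 1e-9:
    H = rk4(dH_dz, z, H, dz)
    z += dz
    for zq in (0.5, 1.0, 2.5):       # record H at a few reference redshifts
        if abs(z - zq) < dz / 2:
            track[zq] = H

print({zq: round(Hq, 1) for zq, Hq in sorted(track.items())})
```

For these placeholder parameters $H(z)$ grows monotonically with redshift, which is the qualitative behavior compared against the observational $H(z)$ points in Fig. (2).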
[Fig. 2. Comparison of the theoretical $H(z)$ with the known observational $H(z)$ data (red dots, the same as in Ref. [43]), for $z \in [0, 2.5]$ and $z \in [0, 5]$. The left-hand side plot corresponds to the model given by Eq. (5), while the right-hand side plot corresponds to the model given by Eq. (7). In both cases, only the best-fit values for the model free parameters obtained by the Bayesian Learning approach have been used.]

To end this subsection, we would like to mention, again, that we have considered an inhomogeneous single-fluid model for the Universe, where the $H_0$ tension problem can indeed be solved. However, some tension can still arise with the available high-redshift $H(z)$ data. Anyway, in the next subsection we present another viable cosmological model
that can also solve this H 0 tension, so that finally there is no tension with high-redshift values. Moreover, most likely, even with the new, high-redshift expansion rate observations, the model may still remain viable.

B. Model 2
Our second model is a modification of Model 1, Eq. (5). As will be seen below, it shows that, in addition to the $H_0$ tension issue, the problem with the higher-redshift $H(z)$ data may be solved in an efficient way, too. To construct this second model, the first of its kind that we know of, we have used, in addition, the deceleration parameter, in order to parametrize the inhomogeneity of our fluid. In particular, this second model has the following form:
$$P = \left( \omega_{0} + \frac{\omega_{1}}{1+z} \right)\rho - AqH^{n}, \qquad (7)$$
where $\omega_{0}$, $\omega_{1}$ and $n$ are the model free parameters, while $q$, $P$ and $\rho$ are the deceleration parameter, the pressure and the energy density of the inhomogeneous fluid, respectively. Again, combining Eq. (2) with Eq. (3) and taking into account that $\dot{\rho} = -(1+z)H\rho'$, after some algebra we get the following equation,
$$2(1+z)HH' = 3H^{2}\left( 1 + \omega_{0} + \frac{\omega_{1}}{1+z} \right) - AqH^{n}, \qquad q = -1 + (1+z)\frac{H'}{H}, \qquad (8)$$
which describes our model in the generative process. The corresponding contour map, for $z \in [0, 2.5]$, is represented in Fig. (4), while the contour map for $z \in [0, 5]$ is depicted in Fig. (5). A detailed analysis of the model by means of the Bayesian Learning approach yields the constraints collected in the second half of Table I. Similarly to the previous case, we have checked, first of all, the fitting results from the Bayesian Learning approach against the available observational $H(z)$ data. Specifically, the best-fit values of the model parameters have been used in Eq. (8), to get the theoretical expansion rate needed to perform the comparison. The results of this comparison, for the best-fit values of the model parameters, are depicted in Fig. (2) (right-hand side plot). We clearly see that the model can explain the low-redshift $H(z)$ observations very well, and that there is no tension with the higher-redshift observations. In other words, the proposed model can explain the BOSS result, too. On the other hand, the graphical behavior of the deceleration parameter and of the equation of state parameter of the fluid, Eq. (7), can be seen in Fig. (3) (bottom panel). In all cases, only the best-fit values of the model parameters have been taken into account.
Moreover, the purple curve represents the case when $z \in [0, 2.5]$, while the dashed red curve represents the case when $z \in [0, 5]$. It should be mentioned that the model can explain the late-time accelerated expansion and the transition to this phase. Also, we can see that our fluid model, Eq. (7), during its evolution will naturally evolve from a fluid with $P > 0$ to one with $P < 0$; i.e., an evolving cosmic fluid may cause the emergence of the dark energy responsible for the late-time accelerated expansion of our Universe. Now, if we take a closer look at the second half of Table I, we realize that the $H_0$ tension has been solved efficiently. Moreover, we also see that future measurements of the expansion rate for the higher redshift values in $z \in [0, 5]$ will, most likely, significantly affect the parameters $H_0$, $\omega_0$, and $\omega_1$. In particular, we see that the mean and the $1\sigma$ errors of $H_0$ and $\omega_0$ could be affected considerably, while for $\omega_1$ only the mean might be seriously affected.
In order to continue our discussion, we decided to estimate the mean values of the $P/\rho$ EoS parameter and of the deceleration parameter $q$ at $z = 0$. Using the best-fit values reported above and summarized in Table I, we have found that $P/\rho = -1.057$ and $q = -1.085$ when $z \in [0, 2.5]$, while for the whole range $z \in [0, 5]$ we found $P/\rho = -0.992$ and $q = -0.989$. This means that the transition redshift has been affected, too. Therefore, another interesting difference to be mentioned between Model 1, Eq. (5), and Model 2, Eq. (7), which will most likely appear when higher-redshift $H(z)$ measurements are performed, will be encoded in $\omega(z) = P/\rho$. Indeed, Model 1, Eq. (5), predicts a significant deviation from the cosmological constant value $\omega_{\Lambda} = -1$, while Model 2, Eq. (7), will still most likely mimic the cosmological constant. However, for the $z \in [0, 2.5]$ redshift range, the two models still provide constraints on the dark energy which are in good agreement with the Planck 2018 results. As mentioned above, the results obtained for $z \in [0, 5]$ from the Bayesian Learning approach will have to wait for validation until new high-redshift $H(z)$ measurements become available.
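Since $q$ in Eq. (7) depends on $H'$, Eq. (8) is implicit in the derivative; solving it algebraically for $H'$ gives an explicit right-hand side that can be stepped forward just as for Model 1. The following sketch uses arbitrary placeholder parameters, not the fit values of Table I:

```python
# For Model 2, Eq. (8), substituting q = -1 + (1+z) H'/H and collecting the
# H' terms gives the explicit form
#   H' = (3 H^2 B + A H^n) / ((1+z) (2 H + A H^(n-1))),
# where B = 1 + w0 + w1/(1+z).  Parameter values are illustrative only.
w0, w1, n, A = -1.0, 0.1, 0.2, 1.0
H0 = 70.0

def dH_dz(z, H):
    B = 1.0 + w0 + w1 / (1.0 + z)
    return (3.0 * H**2 * B + A * H**n) / ((1.0 + z) * (2.0 * H + A * H**(n - 1)))

def rk4(f, z, H, dz):
    k1 = f(z, H)
    k2 = f(z + dz / 2, H + dz * k1 / 2)
    k3 = f(z + dz / 2, H + dz * k2 / 2)
    k4 = f(z + dz, H + dz * k3)
    return H + dz * (k1 + 2 * k2 + 2 * k3 + k4) / 6

z, H, dz = 0.0, H0, 0.01
while z < 2.34 - 1e-9:          # integrate up to the BOSS redshift
    H = rk4(dH_dz, z, H, dz)
    z += dz

q = -1.0 + (1.0 + z) * dH_dz(z, H) / H   # deceleration parameter at z = 2.34
print(f"H(2.34) = {H:.1f},  q(2.34) = {q:.2f}")
```

The same algebraic elimination of $H'$ is what makes Eq. (8) usable as a generative process: the observable $H(z)$ can be produced step by step from any proposed parameter set.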

IV. CONCLUSIONS
We have here studied two different inhomogeneous, single-fluid models of the Universe and shown that the $H_0$ tension problem can be effectively solved by using them. Specifically, we have considered the models with $P = \left(\omega_{0} + \frac{\omega_{1}}{1+z}\right)\rho - AH^{n}$ and $P = \left(\omega_{0} + \frac{\omega_{1}}{1+z}\right)\rho - AqH^{n}$, respectively. Actually, the second model is an extension of the first one, where we take into account that, during the cosmic evolution, the nature of the inhomogeneity can change and, in addition, we assume that this can be described by using the deceleration parameter. It is known that the deceleration parameter has changed its sign during the evolution of our Universe. Moreover, it is known that this is not just a simple sign changing process, but that it encodes a very important physical process, which gives birth to dark energy. Later on, we have seen that our approach also softens a hard problem related to the high-redshift behavior of Model 1. Our study is based on a Bayesian Machine Learning approach, which actually does not require real observational data to perform the analysis. The method uses a model based generative process, which allows one to constrain the parameters of the model. In our case, the observable is taken to be the Hubble parameter and, making use of the properties of the procedure, we constrain the models for two redshift ranges. Namely, first we have constrained the models for $z \in [0, 2.5]$, which covers the known $H(z)$ observations. This will be helpful to validate our results in this case. On the other hand, considering the extended redshift range to be covered by future collaborations, and using the generative process based Bayesian Learning, we have constrained the models for $z \in [0, 5]$.
The validation of our results for the second redshift range will have to wait a bit, until higher-redshift $H(z)$ data are available. Our study shows that inhomogeneous single-fluid Universe models can indeed solve the $H_0$ tension problem, and that the solution comes from the mean of the $H_0$ parameter values. Indeed, the Bayesian Learning approach puts very tight constraints on the model parameters, indicating in this way that the solution of the $H_0$ tension problem is only due to the mean value of $H_0$. In this regard, both models have proven to be quite good, as compared with previously considered inhomogeneous fluid models. Our two models can solve the $H_0$ tension problem; however, only the one with $P = \left(\omega_{0} + \frac{\omega_{1}}{1+z}\right)\rho - AqH^{n}$ is favoured. This is due to the fact that this model can also explain the BOSS result for the $H(z)$ value at $z = 2.34$ (for more discussion about this problem, see Ref. [20] and references therein). Another interesting difference between Model 1, Eq. (5), and Model 2, Eq. (7), is most likely to appear when high-redshift $H(z)$ measurements are considered, and it is encoded in $\omega(z) = P/\rho$. In particular, Model 1, Eq. (5), predicts a significant deviation from the cosmological constant case, $\omega_{\Lambda} = -1$, whereas Model 2, Eq. (7), will most likely still mimic the cosmological constant. For the $z \in [0, 2.5]$ redshift range, both models provide constraints on dark energy in good agreement with the Planck 2018 results [4]. Again, the validation of the results obtained for $z \in [0, 5]$ from the Bayesian Learning approach will have to wait until new high-redshift $H(z)$ measurements become available.
Initial attempts to explore a way of solving the $H_0$ tension problem with the classical forms of inhomogeneous fluids often discussed in the recent literature have not been very successful. However, the modifications presented above provide a very reasonable solution. To conclude, we mention that in this paper we have reported a new way to solve the $H_0$ tension problem and, at the same time, to make a prediction of how the status of the models could change in the near future, when new observational data for the expansion rate over $z \in [0, 5]$ become available. This has been done using Bayesian Learning and probabilistic programming, employing the PyMC3 python-based framework. Since the methods used in this study are rather new, we decided to include details on them in an Appendix below. Playing with the form of the fluids considered, we have achieved some remarkable results, which we plan to extend to more complicated cases. Progress in this direction will be reported in forthcoming papers.