http://sendshack.com/download/8qd05jh
HAROLD JEFFREYS SAYS OCKHAM'S RAZOR IS THE SIMPLEST MATHEMATICAL EXPLANATION AND IS NOT THE MOST CORRECT OR PROBABLE. DIRAC EQUATION OF ANNIHILATION OF MATTER
Thursday, 16 February 2012
ODD MAN OUT GAME BY RICHARD CEASER
http://sendshack.com/download/8qd05jh
EINSTEIN'S DICE: LIES, DAMN LIES AND STATS.
BAC LIM AND AMEY D'S CLOCK THEORY OF EVERYTHING.
SATURDAY, SEPTEMBER 3, 2011
NEWTON'S END-OF-WORLD PREDICTION: 2060
And 666 minutes is 111 years.
From the foundation of Israel on 14 May 1948 at 16:00, inputting 111 years into the clock gives 14 May 2059 at 16:00.
Rounded off, this is the same as Newton's prediction of 2060, which was based on the year 800 for the foundation of the Catholic church plus 1,260 years, for the destruction of the Catholic church.
And this holds to the very second: 39,960 seconds, which is 666 minutes (666 x 60 = 39,960).
The clock doesn't prove Newton right; Newton proves the clock right, and the Bible proves both Newton and the clock right.
Just because the clock reads 666 doesn't mean it is accurate, but the chance of it not being accurate is very small.
So the six minutes must be significant figures. From 14 May 1948, 111 years gives 14 May 2059 at 16:00, which is 4 o'clock, and an exact number of seconds. That gives a difference in time on my clock of 666 minutes. So it predicts the destruction of the Roman Catholic church by fire in the Bible, using Newton's prediction of 800 + 1260 = 2060, rounded off to the nearest year.
You have the clock, so just input 14 May 1948 and 111 years into the clock dial.
If we take 1,260 from 741,240 days we get 739,980 days, which is 2,055 years and 180 days (counting 360-day years: 2,055 x 360 + 180 = 739,980). That is the time of the appearance of the antichrist according to my clock; the time can be worked out to the exact second.
I never expected this to be the case. I thought my clock was just an accurate clock, and I liked it that way, but it became a theory of everything and a prophetic clock.
Enjoy the things that are God's handiwork. All that I have given you is God's handiwork.
Wednesday, 15 February 2012
Dirac's equation for particles.
Sunday, 12 February 2012
DEVIL'S GIFT - NOT
PUBLIC DOMAIN: NEWTON'S CLOCK SWF AND ANDROID FLASH
You have selected CC0 1.0 Universal.
You should include a reference to this on the web page that includes
the work in question.
Here is the suggested HTML:
    <p xmlns:dct="http://purl.org/dc/terms/" xmlns:vcard="http://www.w3.org/2001/vcard-rdf/3.0#">
      <a rel="license" href="http://creativecommons.org/publicdomain/zero/1.0/">
        <img src="http://i.creativecommons.org/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" />
      </a>
    </p>
To the extent possible under law,
STEVE A JEFFREY
has waived all copyright and related or neighboring rights to
NEWTONS CLOCK SWF FLA FILE.
This work is published from:
Australia.
Saturday, 11 February 2012
OCKHAM'S RAZOR.
PUBLIC DOMAIN
To the extent possible under law, STEVE A JEFFREY has waived all copyright and related or neighboring rights to HAROLD JEFFREYS SIMPLE EVOLUTIONARY THEORY OF EVERYTHING BLOG. This work is published from: Australia.
CREATIVE COMMONS FOR EINSTEIN'S DICE.
To the extent possible under law, STEVE A JEFFREY has waived all copyright and related or neighboring rights to EINSTEIN'S DICE. This work is published from: Australia.
Wednesday, 8 February 2012
EX NIHILO - MOST COMPLEX EXPLANATION
Tuesday, 7 February 2012
PEPSI FRIED CHICKEN
THE GRAND DESIGN
A semiclassical approach to the Dirac equation
Gravitation field in the Dirac equation
HAROLD JEFFREYS' THEORY OF PROBABILITY AND OCKHAM'S RAZOR
Saturday, 4 February 2012
Evidence for the Big Bang
Introduction
a) Purpose of this FAQ
According to the welcome page of this archive, the talk.origins newsgroup is intended for debate about "biological and physical origins", and the archive exists to provide "mainstream scientific responses to the many frequently asked questions (FAQs) that appear in the talk.origins newsgroup". Many current FAQs deal with questions about biological and geological origins here on Earth. This page will take a broader view, focusing on the universe itself.
Before beginning the examination of the evidence surrounding current cosmology, it is important to understand what Big Bang Theory (BBT) is and is not. Contrary to the common perception, BBT is not a theory about the origin of the universe. Rather, it describes the development of the universe over time. This process is often called "cosmic evolution" or "cosmological evolution"; while the terms are used by those both inside and outside the astronomical community, it is important to bear in mind that BBT is completely independent of biological evolution. Over the last several decades the basic picture of cosmology given by BBT has been generally accepted by astronomers, physicists and the wider scientific community. However, no similar consensus has been reached on ideas about the ultimate origin of the universe. This remains an area of active research and some of the current ideas are discussed below. That said, BBT is nevertheless about origins -- the origin of matter, the origin of the elements, the origin of large scale structure, the origin of the Cosmic Microwave Background Radiation, etc. All of this will be discussed in detail below.
In addition to being a theory about the origins of the basic building blocks for the world we see today, BBT is also paradoxically one of the best known theories in the general public and one of the most misunderstood (and, occasionally, misrepresented). Given the nature of the subject matter, it is also frequently discussed with heavy religious overtones. Young Earth Creationists dismiss it as an "atheistic theory", dreamt up by scientists looking to deny the divine creation account from Genesis. Conversely, Old Earth Creationists (as well as other Christians) have latched onto BBT as proof of Genesis, claiming that the theory demonstrates that the universe had an origin and did not exist at some point in the distant past. Finally, some atheists have argued that BBT rules out a creator for the universe.
Detailed discussion of these religious arguments can be found in a number of other places (e.g. the book by Craig and Smith in the references). This FAQ will focus solely on the science: what the theory says, why it was developed and what is the evidence.
b) General outline
Many explanations of BBT start by presenting various astronomical observations, arguing that they lead naturally to the idea of an expanding, cooling universe. Here, we take a different approach: We begin by describing what BBT is not and correcting some common misconceptions about the theory. Once that is done, we talk about what the theory is and what assumptions are made when describing a physical theory about how the universe operates. With that framework in place, we move to an examination of what BBT predicts for our universe and how that matches up against what we see when we look at the sky. The next step is to look at some of the most common objections to the theory as well as disagreements between the theory and observations, which leads naturally into an examination of some of the alternative cosmological models. We finish with two more speculative topics: current ideas about the very earliest stages of the universe and its ultimate origin and a discussion of what we might expect the next generation of cosmological experiments and surveys to tell us about BBT.
c) Further sources for information
As one might expect for a subject with a large public following, there is a huge body of literature on BBT in both printed media and the web. The range in level of this material is very large -- from advanced texts for graduate courses and beyond to popularizations for laymen. Likewise, the quality of explanation in these resources can vary considerably. In particular, some popularizations simplify the material to such an extent that it can be highly misleading. Finally, there are a number of diatribes against the standard cosmological model, filled with misunderstandings, misrepresentations and outright vitriol against BBT and cosmologists in general. We have tried to filter this huge array of information, highlighting those sources which accurately describe the theory and present it in the clearest manner possible. Apologies in advance to any valuable sources which were inadvertently overlooked and excluded.
For a serious, technical introduction to the subject, two books are particularly useful: Principles of Physical Cosmology by Peebles and The Early Universe by Kolb & Turner. These are written for advanced undergraduates and graduate students, so a fair knowledge of mathematics is assumed. For a less technical description of the early stages of the universe (with particular emphasis on nucleosynthesis and particle physics), the books by Fritzsch and Weinberg are very good and aimed at the general public.
While the aforementioned books are well-written, the material is somewhat dated, having been written before the observations and subsequent developments of the last few years (e.g. the accelerating expansion of the universe and inclusion of dark energy in the standard cosmological model). Newer texts like those written by Peacock, Kirshner and Livio include discussion of these topics. The first is at the level of Peebles and Kolb & Turner, while the second two are written for a general audience. Finally, a new book by Kippenhahn is highly recommended by this FAQ's author, with the caveat that it is only currently available in German.
On the web, the best known source of popularized information on the Big Bang is Ned Wright's cosmology tutorial. Dr. Wright is a professional cosmologist at the University of California, Los Angeles and his tutorial was used extensively in compiling this FAQ. He has also written his own Big Bang FAQ and updates his site regularly with the latest news in cosmology and addresses some of the most popular alternative models in cosmology.
The Wilkinson Microwave Anisotropy Probe pages at NASA have a very good description of the theoretical underpinnings of BBT aimed at a lay audience. Other well-written pages about BBT include the Wikipedia pages on the universe and the big bang. Finally, there is the short FAQ The Big Bang and the Expansion of the Universe at the Atlas of the Universe, which also corrects some of the most common misconceptions.
1) What is the Big Bang theory?
a) Common misconceptions about the Big Bang
In most popularized science sources, BBT is often described with something like "The universe came into being due to the explosion of a point in which all matter was concentrated." Not surprisingly, this is probably the standard impression which most people have of the theory. Occasionally, one even hears "In the beginning, there was nothing, which exploded."
There are several misconceptions hidden in these statements:
- The BBT is not about the origin of the universe. Rather, its primary focus is the development of the universe over time.
- BBT does not imply that the universe was ever point-like.
- The origin of the universe was not an explosion of matter into already existing space.
The famous cosmologist P. J. E. Peebles stated this succinctly in the January 2001 edition of Scientific American (the whole issue was about cosmology and is worth reading!): "That the universe is expanding and cooling is the essence of the big bang theory. You will notice I have said nothing about an 'explosion' - the big bang theory describes how our universe is evolving, not how it began." (p. 44). The March 2005 issue also contained an excellent article pointing out and correcting many of the usual misconceptions about BBT.
Another cosmologist, the German Rudolf Kippenhahn, wrote the following in his book "Kosmologie fuer die Westentasche" ("cosmology for the pocket"): "There is also the widespread mistaken belief that, according to Hubble's law, the Big Bang began at one certain point in space. For example: At one point, an explosion happened, and from that an explosion cloud travelled into empty space, like an explosion on earth, and the matter in it thins out into greater areas of space more and more. No, Hubble's law only says that matter was more dense everywhere at an earlier time, and that it thins out over time because everything flows away from each other." In a footnote, he added: "In popular science presentations, often early phases of the universe are mentioned as 'at the time when the universe was as big as an apple' or 'as a pea'. What is meant there is in general the epoch in which not the whole, but only the part of the universe which is observable today had these sizes." (pp. 46, 47; FAQ author's translation, all emphases in original)
Finally, the webpage describing the ekpyrotic universe (a model for the early universe involving concepts from string theory) contains a good recounting of the standard misconceptions. Read the first paragraph, "What is the Big Bang model?".
There are a number of reasons that these misconceptions persist in the public mind. First and foremost, the term "Big Bang" was originally coined in 1950 by Sir Fred Hoyle, a staunch opponent of the theory. He was a proponent of the competing "Steady State" model and had a very low opinion of the idea of an expanding universe. Another source of confusion is the oft-repeated expression "primeval atom". This was used by Lemaitre (one of the theory's early developers) in 1927 to explain the concept to a lay audience, albeit one that would not be familiar with the idea of nuclear bombs for a few decades to come. With these and other misleading descriptions endlessly propagated by otherwise well-meaning (and not so well-meaning) media figures, it is not surprising that many people have wildly distorted ideas about what BBT says. Likewise, the fact that many in the public think the theory is rather ridiculous is to be expected, given their inaccurate understanding of the theory and the data behind it.
b) What does the theory really say?
Giving an accurate description of BBT in common terms is extremely difficult. Like many modern scientific topics, every such attempt will be necessarily vague and unsatisfying as certain details are emphasized and others swept under the rug. To really understand any such theory, one needs to look at the equations that fully describe the theory, and this can be quite challenging. That said, the quotes by Peebles and Kippenhahn should give one an idea of what the theory actually says. In the following few paragraphs, we will elaborate on their basic description.
The simplest description of the theory would be something like: "In the distant past, the universe was very dense and hot; since then it has expanded, becoming less dense and cooler." The word "expanded" should not be taken to mean that matter flies apart -- rather, it refers to the idea that space itself is becoming larger. Common analogies used to describe this phenomenon are the surface of a balloon (with galaxies represented by dots or coins attached to the surface) or baking bread (with galaxies represented by raisins in the expanding dough). Like all analogies, the similarity between the theory and the example is imperfect. In both cases, the analogy suggests that the universe is expanding into some larger, pre-existing volume. In fact, the theory says nothing like that. Instead, the expansion of the universe is completely self-contained. This goes against our common notions of volume and geometry, but it follows from the equations. Further discussion of this question is found in the What is the Universe expanding into? section of Ned Wright's FAQ.
People often have difficulty with the idea that "space itself expands". An easier way to understand this concept is to think of it as the distance between any two points in the universe increasing (with some notable exceptions, as discussed below). For example, say we have two points (A and B) which are at fixed coordinate positions. In an expanding universe, we would find two remarkable things to be true. First, the distance between A and B is a function of time and second, the distance is always increasing.
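To put this in formulas, here is a minimal sketch using the standard comoving-coordinate convention (the symbols below are conventional textbook notation, not quantities defined elsewhere in this FAQ). If A and B sit at a fixed comoving separation chi, their physical separation at time t is

    d(t) = a(t) \, \chi

where a(t) is the scale factor that will be defined more carefully below. Differentiating gives \dot{d} = (\dot{a}/a) \, d = H(t) \, d, so the apparent recession speed is proportional to the distance, which is exactly the Hubble relation covered in the evidence section.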
To really understand what this means and how one would define "distance" in such a model, it is necessary to have some idea of what Einstein's theory of General Relativity (GR) is about -- another subject that does not easily lend itself to simple explanations. One of the most popular GR textbooks, by Misner, Thorne & Wheeler, summarizes it thusly: "Space tells matter how to move, matter tells space how to curve." Of course, this statement omits certain details of the theory, like how space also tells electromagnetic radiation how to move (demonstrated most beautifully by gravitational lensing -- the deflection of light around massive objects), how space also curves in response to energy, and how energy can cause space to do much more than simply curve. Perhaps a better (albeit longer) way of describing GR would be something like: "Energy determines the geometry and changes in the geometry of the universe, and, in turn, the geometry determines the movement of energy".
So, given this, how does one get BBT from GR? The basic equations for BBT come directly from Einstein's GR equation under two key assumptions: First, that the distribution of matter and energy in the universe is homogeneous and, second, that the distribution is isotropic. A simpler way to put this is that the universe looks the same everywhere and in every direction. The combination of these two assumptions is often termed the cosmological principle. Obviously, these assumptions do not describe the universe on all physical scales. Sitting in your chair, you have a density that is roughly 10^30 times the mean density of the universe. Likewise, the densities of things like stars, galaxies and galaxy clusters are well above the mean (although not nearly as much as you). Instead, we find that these assumptions only apply on extremely large scales, on the order of several hundred million light years. However, even though we have good evidence that the cosmological principle is valid on these scales, we are limited to only a single vantage point and a finite volume of the universe to examine, so these assumptions must remain exactly that.
If we adopt these seemingly simple assumptions, the implications for the geometry of the universe are quite profound. First, one can demonstrate mathematically that there are only three possible curvatures to the universe: positive, negative or zero curvature (these are also commonly called "closed", "open" and "flat" models). See these lectures on cosmology and GR and this discussion of the Friedmann-Robertson-Walker metric (sometimes called the Friedmann-Lemaitre-Robertson-Walker metric) for more detailed derivations. Further, the assumption of homogeneity tells us that the curvature must be the same everywhere. To visualize the three possibilities, two dimensional models of the actual three dimensional space can be helpful; the figure below from the NASA/WMAP Science Team gives an example. The most familiar model with positive curvature is the surface of a sphere. Not the full three dimensional object, just the surface (you can tell that the surface is two dimensional since you can specify any position with just two numbers, like longitude and latitude on the surface of the Earth). Zero curvature can be modeled as a simple flat plane; this is the classical Cartesian coordinate system that most people will remember from school. Finally, one can imagine negative curvature as the surface of a saddle, where parallel lines will diverge from each other as they are projected towards infinity (they remain parallel in a zero curvature space and converge in a positively curved space).
There are more complicated examples of these geometries, but we will skip discussing them here. Those interested in reading more on this point can look at this description of topology of the universe.
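For those who want the notation, all three cases can be written as a single line element. Here is a sketch of the Friedmann-Robertson-Walker metric mentioned above, in its standard textbook form:

    ds^2 = -c^2 dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2} + r^2 (d\theta^2 + \sin^2\theta \, d\phi^2) \right]

Here a(t) is the scale factor and the constant k takes one of exactly three values: k = +1 (closed), k = 0 (flat) or k = -1 (open). The entire geometric content of the cosmological principle is packed into that single constant.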
The second main conclusion that we can draw from the cosmological principle is that the universe has no boundary and has no center. Obviously, if either of these statements were true, then the idea that all points in the universe are indistinguishable (i.e. the universe is isotropic) would be false. This conclusion can be counter-intuitive, particularly when considering a universe with positive curvature like that of a spherical shell. This space is clearly finite, but, as is also clear after a moment's thought, it is also possible to travel an arbitrarily large distance around the sphere without leaving the surface. Hence, it has no boundary. For the flat and negatively curved surfaces, it is clear that these cases must extend to infinite size. Remarkably, given the vast differences that these cases present for the geometry and size of the universe, determining which of these three cases holds for our universe is actually still an open question in cosmology.
c) Contents of the universe
As we said above, GR tells us that the matter and energy content of the universe determines both the present and future geometry of space. Therefore, if we want to make any predictions about how the universe changes over time, we need to have an idea of what types of matter and energy are present in the universe. Once again, applying the cosmological principle simplifies matters considerably. In fact, if the distribution of matter and energy is uniform on very large scales, then all we need to know is the density and pressure of each component. Even better, for most of the cases that are relevant for cosmology, the pressure and density tend to be related by a so-called "equation of state". Thus, if we know the density of a given component, then we know its pressure via the equation of state and can calculate how it will affect the geometry of the universe now and at any time in the past or future.
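A minimal sketch of what "equation of state" means here, in the standard parameterization (the symbol w is conventional notation, not something introduced elsewhere in this FAQ):

    p = w \rho c^2  \quad\Rightarrow\quad  \rho(a) \propto a^{-3(1+w)}

with w = 0 for cold matter, w = 1/3 for radiation and w = -1 for a cosmological constant. The second relation follows from energy conservation in an expanding universe, and it is the source of the scaling arguments used below.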
After a great deal of theoretical and observational work, there are essentially three broad categories of matter and energy that we need to consider:
- Matter: In the normal course of life on Earth, we tend to think
of the relationship between pressure and density of matter as important, but
incomplete. From basic chemistry or physics classes, we learn that pressure
is also typically a function of temperature. Another way to think of
temperature is as a measure of the speed that matter is travelling, albeit
in an unordered, random manner (think of the air molecules in a balloon; they
move around rapidly inside the balloon, but the balloon itself remains
motionless). While these molecules may move quickly by our standards,
compared to the speed of light (which is what is relevant when we consider
GR) these particles are effectively motionless. To a very good
approximation, we can simply set the pressure for matter to zero; what we are
really saying is that the pressure is tiny compared to the energy density of
the matter.
In cosmological parlance, this class of matter is generically described as "cold matter", a term that would include stars, planets, asteroids, interstellar dust, and so on. Since we are limited to observing photons from the rest of the universe, the fact that much of this cold matter does not glow in any appreciable way means that we have to observe it indirectly, mainly by its gravitational effect on matter that we can see. This sort of dark matter (mainly planets, burned out stars and cold gas) is quite abundant in the universe.
In addition to this normal dark matter, there is also ample evidence that the universe contains a great deal of dark matter that is fundamentally different from the dark matter described above. While normal matter will glow if sufficiently heated, this dark matter is dark because it does not interact with light at all. This is contrary to our everyday experience, of course, but current quantum field theory predicts the existence of a number of particles that would fit this requirement (e.g. the "neutralino" predicted by supersymmetry or the "axion"; see below for more details).
Like in the case of the normal dark matter (which is generically called "baryonic dark matter" since it is mostly made of protons and neutrons, which belong to a particle group called "baryons"), we do not need to know the exact details of this dark matter in order to make cosmological predictions. All we do need to know is its equation of state. "Cold Dark Matter" would consist of massive, slow-moving particles, where "massive" is relative to the mass of particles like the proton and "slow" is relative to the speed of light. Like the cold baryonic matter, the pressure associated with these particles would be effectively zero. On the other hand, if the dark matter particles are very light, then they would tend to move very quickly and their associated pressure would no longer be negligible. This sort of dark matter is called "Hot Dark Matter". For completeness, one could also imagine a third, intermediate case ("Warm Dark Matter"). Finally, it is worth noting that, since it does not interact with light, the "temperature" of the dark matter is not going to have anything to do with the overall temperature of the universe; Hot Dark Matter remains hot no matter how cold the universe gets. As we will discuss later on, current observations indicate that the matter component of the universe is dominated by Cold Dark Matter, with small amounts of baryonic matter and little to no Warm or Hot Dark Matter.

- Radiation: Strictly speaking, this category only includes electromagnetic radiation. However, Hot Dark Matter often gets grouped together with radiation since, as the particles are moving very close to the speed of light, they have essentially the same equation of state. For radiation, the pressure is equal to one third of the energy density. From observations, we know that radiation is not a significant part of the energy density budget of the universe today. However, because of the equation of state, the energy density of radiation scales inversely as the fourth power of the size of the universe. For example, if we go back in time to the point where the observable universe was half the size it is today, we would find that the energy density was 16 times the current value, while the energy density of matter was only 8 times its value today. The clear implication here is that, no matter what their values today, if we go back far enough in time, radiation will be the dominant source of energy density in the universe. This has enormous implications for both the creation of the light elements in the very early stages of the universe (also known as primordial nucleosynthesis) and the formation of the Cosmic Microwave Background Radiation (CMBR).
- The third component of the standard picture of BBT is also the one we
know the least about. The generic term for this piece is dark energy,
although this term covers a very diverse array of possibilities. From
quantum field theory, we know that all of space should be filled with energy,
even if there is no matter or radiation present. This energy is known by
various names: "zero-point energy", "zero-point fluctuations",
"vacuum energy", "vacuum fluctuations", etc. As some of the names imply,
this energy does not persist in the way that normal matter or radiation
does; instead the particles carrying it pop in and out of existence, as
predicted by Heisenberg's uncertainty principle. This sort of energy cannot be detected directly, but
measurements of, e.g. the Casimir effect, demonstrate that it does
exist.
Taking this as an indicator that this sort of energy exists, we can explore what effect this might have from a cosmological standpoint. Regardless of the expansion of the universe, the zero-point energy density remains constant and positive. This leads to the rather curious (and non-intuitive) conclusion that the pressure associated with dark energy is negative. If one plugs a component like this into the standard BBT equations, the effect of the negative pressure is larger than that of the positive energy density. As a result, in a universe driven by dark energy, the effect of its gravity is to accelerate the expansion of the universe, instead of slowing it down (as one would expect for a universe with just matter in it).
One also often hears the term "cosmological constant" associated with dark energy. In order to understand the reason for this, one has to know a bit about the history of applying GR to the whole universe. When Einstein first tried to do that, he found that it predicted the universe should either expand or contract. But in Einstein's time, the universe was thought to be static. So he looked again at the assumptions which he made in deriving the equations of GR. One of them was that an empty universe, i.e., one which contains no matter or energy, should have zero curvature ("flat" as mentioned above). Einstein found that if he dropped that assumption, an additional free parameter appeared in the equations of GR. If that parameter is set to a particular value, the equations indeed yield the static universe expected back then! Accordingly, he called that additional parameter the "cosmological constant".
Obviously, this was a rather ad hoc solution to an only apparent problem (made especially unnecessary when evidence began to show that the universe was not static). According to Gamow, Einstein later called this trick "his greatest blunder". That said, we now also know that empty space, without "ordinary" (or even exotic) matter and energy, still has to contain the vacuum fluctuations predicted by quantum field theory. In other words, even "empty" space still contains energy and therefore does not have to be flat. This (sort of) justifies using the cosmological constant; in this interpretation, it would represent the "vacuum energy density" caused by quantum fluctuations, turning the cosmological constant into a particular type of dark energy. From this viewpoint, introducing the cosmological constant was not a blunder - more like accidentally discovering a necessary, even crucial additional parameter in the equations of GR and accordingly also the equations of the BBT.
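To make the differing behaviors of the three components concrete, here is a short Python sketch based on the scaling relation rho \propto a^{-3(1+w)} quoted above; the densities are normalized to their present values, so no measured numbers are assumed:

    # Scaling of energy densities with the scale factor a (a = 1 today),
    # using rho ~ a^(-3(1+w)) with the standard equation-of-state values.
    def density_ratio(a, w):
        """Energy density at scale factor a, relative to its value today."""
        return a ** (-3.0 * (1.0 + w))

    for a in (1.0, 0.5, 0.01):
        matter = density_ratio(a, w=0.0)         # cold matter: a^-3
        radiation = density_ratio(a, w=1.0 / 3)  # radiation: a^-4
        dark_energy = density_ratio(a, w=-1.0)   # cosmological constant: flat
        print(f"a = {a:5.2f}: matter x{matter:10.0f}, "
              f"radiation x{radiation:12.0f}, dark energy x{dark_energy:.0f}")

At a = 0.5 this reproduces the factors of 8 (matter) and 16 (radiation) quoted in the radiation discussion above, and it shows why radiation must dominate at sufficiently early times while dark energy, if present, dominates at late times.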
d) Summary: parameters of the Big Bang Theory
Like every physical theory, BBT needs parameters. Drawing from what we have established so far, we have:
- The curvature of space. As we discussed above, this is either positive (closed), negative (open) or zero (flat).
- The scale factor. One of the first things one notices when studying cosmology is that measuring the absolute value of any particular quantity can be extremely challenging. Rather, most of the quantities that cosmologists try to measure are actually ratios. The scale factor is the ratio between the current "size" of the universe and the size of the universe at some point in the past or future ("size" being defined as is appropriate for a given curvature). Obviously, this parameter is one today and less than one at any time in the past for an expanding universe.
- The Hubble Parameter. This is often confused with the "Hubble Constant". Partly, this is a relic from Hubble's original work showing the expansion of the universe, where it was just a fitting parameter to translate velocity into distance. In modern usage, that term only refers to the current value; in actuality this quantity varies over time. Formally, the Hubble parameter measures the rate of change of the scale factor at a given time (the derivative of the scale factor normalized by the current value). A simpler way to think about it is that the Hubble Parameter tells one how fast the universe is expanding at any particular moment.
- Deceleration Parameter. In a matter-only universe, the expansion of the universe would be slowed down by the self-gravitation of the matter, possibly even enough to cause the universe to collapse. This means that the expansion rate (the Hubble Parameter) would change, and the deceleration parameter quantifies that rate of change (the second derivative of the scale factor, for those keeping track). The first clue that Dark Energy was important to cosmology came from the discovery that the deceleration parameter was not positive (as expected for a decelerating expansion), but actually negative. Hence, instead of slowing down, the expansion was actually accelerating. Ironically, this has led cosmologists to mostly ignore this parameter in favor of the next set of parameters.
- Component densities. Very simple here; just how much radiation, matter (baryonic and dark) and dark energy is there in the universe? These densities are usually expressed as ratios between the density in a given component and the density it would take to make the curvature of the universe flat. If one knows the values of these densities and the Hubble parameter at a particular time, then one can determine the value of the deceleration parameter (see the sketch after this list); hence, the disappearance of that parameter from much of the cosmological literature in the last several years.
- Dark Energy Equation of State. As mentioned above, for radiation and matter the equations of state are determined by known physics. For dark energy, however, the data is still not up to the challenge of picking a preferred model. As such, most papers in the literature treat the dark energy equation of state as a free parameter (possibly varying with time, depending on the model) or explicitly choose a value as a prior constraint (see below).
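A sketch of how these parameters hang together, using the Friedmann equation written in terms of the density ratios just described (standard textbook form; Omega_k is the curvature term):

    H(a)^2 = H_0^2 \left[ \Omega_r a^{-4} + \Omega_m a^{-3} + \Omega_k a^{-2} + \Omega_\Lambda \right], \qquad \Omega_k = 1 - \Omega_r - \Omega_m - \Omega_\Lambda

For a cosmological-constant form of dark energy, the deceleration parameter then follows directly: q_0 = \Omega_r + \Omega_m/2 - \Omega_\Lambda. Fixing the densities and H_0 therefore fixes both the curvature and q_0, which is why the parameters are far from independent knobs.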
This seems like a long list of parameters -- so many that one might argue that any theory with this many knobs might be tuned to fit any set of observations. However, as mentioned above, they are not really independent. Choosing a value for the Hubble parameter immediately affects the expected values for the densities and the deceleration parameter. Likewise, a different mix of component densities will change the way that the Hubble parameter varies over time. In addition, there is a wide variety of cosmological observations to be made -- observations with wildly different methodologies, sensitivities and systematic biases. A consensus model has to match all of the available data and, over the last decade in cosmology, combining these experiments has resulted in what has been called the "concordance model".
This basic picture is built on the framework of the so-called "Lambda CDM" model. The Lambda indicates the inclusion of dark energy in the model (specifically the cosmological constant, which implies an equation of state where the pressure is equal to -1 times the energy density). "CDM" is short for "cold dark matter". Thus, the name of the model incorporates what are believed to be the two most important components of the universe: dark energy and dark matter. The respective abundances of these two components and the third important component, baryonic (or "ordinary") matter, are shown in the pie chart below (provided by the NASA/WMAP Science Team):
As mentioned above, these values come from simultaneously fitting the data from a large variety of cosmological observations, which is our next topic.
2) Evidence
Having established the basic ideas and language of BBT, we can now look at how the data compares to what we expect from the theory. As we mentioned at the end of the last section, there is no single experiment that is sensitive to all aspects of BBT. Rather, any given observation provides insight into some combination of parameters and aspects of the theory and we need to combine the results of several different lines of inquiry to get the clearest possible global picture. This sort of approach will be most apparent in the last two sections where we discuss the evidence for the two most exotic aspects of current BBT: dark matter and dark energy.
a) Large-scale homogeneity
Going back to our original discussion of BBT, one of the key assumptions made in deriving BBT from GR was that the universe is, at some scale, homogeneous. At small scales where we encounter planets, stars and galaxies, this assumption is obviously not true. As such, we would not expect that the equations governing BBT would be a very good description of how these systems behave. However, as one increases the scale of interest to truly huge scales -- hundreds of millions of light-years -- this becomes a better and better approximation of reality.
As an example, consider the plot below showing galaxies from the Las Campanas Redshift Survey (provided by Ned Wright). Each dot represents a galaxy (about 20,000 in the total survey) where they have measured both the position on the sky and the redshift and translated that into a location in the universe. Imagine putting down many circles of a fixed size on that plot and counting how many galaxies are inside each circle. If you used a small aperture (where "small" is anything less than tens of millions of light years), then the number of galaxies in any given circle is going to fluctuate a lot relative to the mean number of galaxies in all the circles: some circles will be completely empty, while others could have more than a dozen. On the other hand, if you use large circles (and stay within the boundaries!), the variation from circle to circle ends up being quite small compared to the average number of galaxies in each circle. This is what cosmologists mean when they say that the universe is homogeneous. An even stronger case for homogeneity can be made with the CMBR, which we will discuss below.
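The circle-counting argument is easy to simulate. Here is a toy Python sketch (uniformly random points standing in for galaxies, not Las Campanas data) showing that the circle-to-circle scatter, relative to the mean count, shrinks as the aperture grows:

    import random

    random.seed(1)
    # Toy "survey": 20,000 galaxies scattered uniformly in a unit square.
    galaxies = [(random.random(), random.random()) for _ in range(20_000)]

    def counts_in_circles(radius, n_circles=200):
        """Count galaxies inside randomly placed circles of a given radius."""
        counts = []
        for _ in range(n_circles):
            # Keep circle centers away from the survey boundary.
            cx = random.uniform(radius, 1 - radius)
            cy = random.uniform(radius, 1 - radius)
            n = sum((x - cx) ** 2 + (y - cy) ** 2 < radius ** 2
                    for x, y in galaxies)
            counts.append(n)
        mean = sum(counts) / len(counts)
        var = sum((c - mean) ** 2 for c in counts) / len(counts)
        return mean, var ** 0.5

    for r in (0.01, 0.05, 0.2):
        mean, sigma = counts_in_circles(r)
        print(f"radius {r}: mean {mean:7.1f} galaxies, scatter/mean {sigma / mean:.1%}")

For a truly uniform distribution, the fractional scatter falls off as one over the square root of the mean count. Real galaxies cluster, so the small-scale scatter is much larger than this, but it still dies away at large enough scales, which is exactly the homogeneity being described.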
b) Hubble Diagram
The basic idea of an expanding universe is the notion that the distance between any two points increases over time. One of the consequences of this effect is that, as light travels through this expanding space, its wavelength is stretched as well. In the optical part of the electromagnetic spectrum, red light has a longer wavelength than blue light, so cosmologists refer to this process as redshifting. The longer light travels through expanding space, the more redshifting it experiences. Therefore, since light travels at a fixed speed, BBT tells us that the redshift we observe for light from a distant object should be related to the distance to that object. This rather elegant conclusion is made a bit more complicated by the question of what exactly one means by "distance" in an expanding universe (see Ned Wright's Many Distances section in his cosmology tutorial for a rundown of what "distance" can mean in BBT), but the basic idea remains the same.
Cosmological redshift is often misleadingly conflated with the phenomenon known as the Doppler Effect. This is the change in wavelength (either for sound or light) that one observes due to relative motion between the observer and the sound/light source. The most common example cited for this effect is the change in pitch as a train approaches and then passes the observer; as the train draws near, the pitch increases, followed by a rapid decrease as the train gets farther away. Since the expansion of the universe seems like some sort of relative motion and we know from the discussion above that we should see redshifted photons, it is tempting to cast the cosmological redshift as just another manifestation of the Doppler Effect. Indeed, when Edwin Hubble first made his measurements of the expansion of the universe, his initial interpretation was in terms of a real, physical motion for the galaxies; hence, the units on Hubble's Constant: kilometers per second per megaparsec.
In reality, however, the "motion" of distant galaxies is not genuine movement like stars orbiting the center of our galaxy, Earth orbiting the Sun or even someone walking across the room. Rather, space is expanding and taking the galaxies along for the ride. This can be seen from the formula for calculating the redshift of a given source. Redshift (z) is related to the ratio of the observed wavelength (W_O) and the emitted wavelength of light (W_E) as follows: 1 + z = W_O/W_E. The wavelength of light is expanded at the same rate as the universe, so we also know that: 1 + z = a_O/a_E, where a_O is the current value of the scale factor (usually set to 1) and a_E is the value of the scale factor when the light was emitted. As one can see, velocity is nowhere to be found in these equations, verifying our earlier claim. More detail on this point can be found at The Cosmological Redshift Reconsidered. If one insists (and is very careful about what exactly one means by "distance" and "velocity"), understanding the cosmological redshift as a Doppler shift is possible, but (for reasons that we will cover next) this is not the usual interpretation.
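A trivial Python sketch of the relation just quoted, to make it concrete (the redshift of the CMBR used in the last line is a measured value, quoted here only for illustration):

    def scale_factor_at_emission(z, a_now=1.0):
        """Scale factor when light now seen at redshift z was emitted.

        Rearranging 1 + z = a_O / a_E gives a_E = a_O / (1 + z).
        """
        return a_now / (1.0 + z)

    for z in (0.1, 1.0, 3.0, 1089.0):  # z ~ 1089 corresponds to the CMBR
        a_e = scale_factor_at_emission(z)
        print(f"z = {z:6.1f}: distances were {a_e:.4f} of today's values")

Light observed at z = 1 was emitted when cosmic distances were half their present values. Note that no velocity appears anywhere in the calculation.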
As we mentioned previously, even after Einstein developed GR, the consensus belief in astronomy was that the universe was static and had existed forever. In 1929, however, Edwin Hubble made a series of measurements at Mount Wilson Observatory near Pasadena, California. Using Cepheid variable stars in a number of galaxies, Hubble found that the redshift (which he interpreted as a velocity, as mentioned above) was roughly proportional to the distance. This relationship became known as Hubble's Law and sparked a series of theoretical papers that eventually developed into modern BBT.
At first glance, assembling a Hubble diagram and determining the value of Hubble's Constant seems quite easy. In practice, however, this is not the case. Measuring the distance to galaxies (and other astronomical objects) is never simple. As mentioned above, the only data that we have from the universe is light; imagine the difficulty of accurately estimating the distance to a person walking down the street without knowing how tall they are or being able to move your head. However, using a combination of geometry, physics and statistics, astronomers have managed to come up with a series of interlocking methods, known as the distance ladder, which are reasonably reliable. The talk.origins FAQ on determining astronomical distances provides a thorough run-down of these methods, their applicability and their limitations.
Conversely, the other side of the equation, the redshift, is relatively easy to measure given today's astronomical hardware. Unfortunately, when one measures the redshift of a galaxy, that value contains more than just the cosmological redshift. Like stars and planets, galaxies have real motions in response to their local gravitational environment: other galaxies, galaxy clusters and so on. This motion is called peculiar velocity in cosmological parlance and it generates an associated redshift (or blueshift!) via the Doppler Effect. For relatively nearby galaxies, the amplitude of this effect can easily dwarf the cosmological redshift. The most striking example of this is the Andromeda galaxy, within our own Local Group. Despite being around 2 million light years away, it is on a collision course with the Milky Way and the light from Andromeda is consequently shifted towards the blue end of the spectrum, rather than the red. The upshot of this complication is that, if we want to measure the Hubble parameter, we need to look at galaxies that are far enough away that the cosmological redshift is larger than the effects of peculiar velocities. This sets a lower limit of roughly 30 million light years and even once we get beyond this mark, we need to have a large number of objects to make sure that the effects of peculiar velocities will cancel each other.
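The quoted lower limit can be estimated with one line of arithmetic. A sketch assuming round illustrative numbers (H_0 near 70 km/s/Mpc and peculiar velocities up to about 600 km/s; neither value comes from this FAQ):

    H0 = 70.0           # Hubble's constant, km/s per megaparsec (illustrative)
    v_peculiar = 600.0  # a large peculiar velocity, km/s (illustrative)

    # Distance at which the cosmological velocity (v = H0 * d) matches
    # the peculiar-velocity contribution.
    d_mpc = v_peculiar / H0
    d_mly = d_mpc * 3.26   # 1 megaparsec is about 3.26 million light years
    print(f"Hubble flow overtakes peculiar velocities beyond ~{d_mly:.0f} Mly")

This lands at roughly 30 million light years, matching the limit quoted above; beyond it, averaging over many galaxies beats down the residual peculiar-velocity noise.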
The combination of these two complications explains (in part) why it has taken several decades for the best measurements of Hubble's Constant to converge on a consensus value. With current data sets, the nearly linear nature of the Hubble relationship is quite clear, as shown in the figure below (based on data from Riess (1996); provided by Ned Wright).
As mentioned previously, the standard version of BBT assumed that the dominant source of energy density for the last several billion years was cold, dark matter. Feeding this assumption into the equations governing the expansion of the universe, cosmologists expected to see that the expansion would slow down with the passage of time. However, in 1998, measurements of the Hubble relationship with distant supernovae seemed to indicate that the opposite was true. Rather than slowing down, the past few billion years have apparently seen the expansion of the universe accelerate (Riess 1998; newer measurements: Wang 2003, Tonry 2003). In effect, what was observed is that the light of the observed supernovae was dimmer than expected from calculating their distance using Hubble's law.
Within standard BBT, there are a number of possibilities to explain this sort of observation. The simplest possibility is that the geometry of the universe is open (negative curvature). In this sort of universe, the matter density is below the critical value and the expansion will continue until the effective energy density of the universe is zero. The second possibility is that the distant supernovae were artificially dimmed as the light passed from their host galaxies to observers here on Earth. This sort of absorption by interstellar dust is a common problem with observations where one has to look through our own galaxy's disk, so one could easily imagine something similar happening. This absorption is usually wavelength dependent, however, and the two teams investigating the distant supernovae saw no such effect. For the sake of argument, however, one could postulate a "gray dust" that dimmed objects equally at all wavelengths. The final possibility is that the universe contains some form of dark energy (see sections 1c and 2n). This would accelerate the expansion, but could keep the geometry flat.
At redshifts below unity (z < 1), these possibilities are all roughly indistinguishable, given the precision available in the measurements. However, for a universe with a mix of dark matter and dark energy, there is a transition point from the domination of the former to the latter (just like the transition between the radiation- and matter-dominated expansion prior to the formation of the CMBR). Before that time, dark matter was dominant, so the expansion should have been decelerating, only beginning to accelerate when the dark energy density surpassed that of the matter. This so-called cosmic jerk implies that supernovae before this point should be noticeably brighter than one would expect from an open universe (constant deceleration) or a universe with gray dust (constant dimming). New measurements at redshifts well above unity have shown that this "jerk" is indeed what we see -- about 8 billion years ago our universe shifted from slowly decelerating to an accelerated expansion, exactly as dark energy models predicted (Riess 2004).
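Where the switch from deceleration to acceleration should occur can be estimated from the scaling relations given earlier. A sketch assuming a flat universe with the roughly standard mix \Omega_m \approx 0.3 and \Omega_\Lambda \approx 0.7 (illustrative values): the acceleration \ddot{a} changes sign when the matter density drops to twice the dark energy density, so

    \Omega_m a^{-3} = 2 \Omega_\Lambda \quad\Rightarrow\quad 1 + z_t = \left( \frac{2 \Omega_\Lambda}{\Omega_m} \right)^{1/3} \approx 1.7

giving a transition redshift z_t of roughly 0.7. Supernovae at redshifts well beyond this should therefore sample the earlier, decelerating phase, which is precisely the "jerk" signature described above.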
c) Abundances of light elements
As we mentioned previously, standard BBT does not include the beginning of our universe. Rather, it merely tracks the universe back to a point when it was extremely hot and extremely dense. Exactly how hot and how dense it could be and still be reasonably described by GR is an area of active research but we can safely go back to temperatures and densities well above what one would find in the core of the sun.
In this limit, we have temperatures and densities high enough that protons and neutrons existed as free particles, not bound up in atomic nuclei. This was the era of primordial nucleosynthesis, lasting for most of the first three minutes of our universe's existence (hence the title of Weinberg's famous book "The First Three Minutes"). A detailed description of Big Bang Nucleosynthesis (BBN) can be found at Ned Wright's website, including the relevant nuclear reactions, plots and references. For our purposes a brief introduction will suffice.
Like in the core of our Sun, the free protons and neutrons in the early universe underwent nuclear fusion, producing mainly helium nuclei (He-3 and He-4), with a dash of deuterium (a form of hydrogen with a proton-neutron nucleus), lithium and beryllium. Unlike those in the Sun, the reactions only lasted for a brief time thanks to the fact that the universe's temperature and density were dropping rapidly as it expanded. This means that heavier nuclei did not have a chance to form during this time. Instead, those nuclei formed later in stars. Elements with atomic numbers up to iron are formed by fusion in stellar cores, while heavier elements are produced during supernovae. Further information on stellar nucleosynthesis can be found at the Wikipedia pages and in section 2g below.
Armed with standard BBT (easier this time since we know the expansion at that time was dominated by the radiation) and some nuclear physics, cosmologists can make very precise predictions about the relative abundance of the light elements from BBN. As with the Hubble diagram, however, matching the prediction to the observation is easier said than done. Elemental abundances can be measured in a variety of ways, but the most common method is by looking at the relative strength of spectral features in stars and galaxies. Once the abundance is measured, however, we have a similar problem to the peculiar velocities from the previous section: how much of the element was produced during BBN and how much was generated later on during stellar nucleosynthesis?
To get around this problem, cosmologists use two approaches:
- Deuterium: Of the elements produced during BBN, deuterium has by far the lowest binding energy. As a result, deuterium that is produced in stars is very quickly consumed in other reactions and any deuterium we observe in the universe is very likely to be primordial. The downside of this approach is that primordial deuterium can also be destroyed in the outer layers of stars giving us an underestimate of the total abundance, but there are other methods (like looking in the Lyman alpha forest region of distant quasars) which avoid these problems.
- Look Deep: One can try to look at stars and gas clouds which are very far away. Thanks to the finite speed of light, the larger the distance between the object and observers here on Earth, the more ancient the image. Hence, by looking at stars and gas clouds very far away, one can observe them at a time when the heavy element abundance was much lower. By going far enough back, one would eventually arrive at an epoch where no prior stars had had a chance to form, and thus the elemental abundances were at their primordial levels. At the moment, we cannot look back that far. These objects would have very high redshifts, taking the light into the infrared where observations from the ground are made very difficult by atmospheric effects. Likewise, the great distance makes them extremely dim, adding to our problems. Both of these problems should be helped greatly when the James Webb Space Telescope enters service. What we can do now is to observe older stars, measure their elemental abundances, and try to extrapolate backwards.
Like most BBT predictions, the primordial element abundance depends on several parameters. The important ones in this case are the Hubble parameter (the expansion speed determines how quickly the universe goes from hot and dense enough for nucleosynthesis to cold and thin enough for it to stop) and the baryon density (in order for nucleosynthesis to happen, baryons have to collide and the density tells us how often that happened). The dependence on both parameters is generally expressed as a single dependence on the combined parameter Omega_B h^2 (as seen in the figure below, provided by Ned Wright).
As this figure implies, there is a two-fold check on the theory. First of all, measurements of the various elemental abundances should yield a consistent value of Omega_B h^2 (the intersection of the horizontal bands and the various lines). Second, independent measurements of Omega_B h^2 from other observations (like the WMAP results in 2e) should yield a value that is consistent with the composite from the primordial abundances (the vertical band). Both approaches were used in the past; before the precise results of WMAP for the baryon density, the former was used more often. For a detailed account of the state of knowledge in 1997, look at Big Bang Nucleosynthesis Enters the Precision Era.
One of the major pieces of evidence for the Big Bang theory is consistent observations showing that, as one examines older and older objects, the abundance of most heavy elements becomes smaller and smaller, asymptoting to zero. By contrast, the abundance of helium goes to a non-zero limiting value. The measurements show consistently that the abundance of helium, even in very old objects, is still around 25% of the total mass of "normal" matter. And that corresponds nicely to the value which the BBT predicts for the production of He during primordial nucleosynthesis. For more details, see Olive 1995 or Izotov 1997. Also look at the plot below, comparing the prediction of the BBT to that of the Steady State model (data taken from Turck-Chieze 2004, plot provided by Ned Wright).
Recent calculations as well as references to recent observations can be found in Mathews (2005). In earlier studies, there were some problems with galaxies which had apparently very low helium abundances (specifically I Zw 18); this problem was addressed and resolved in the meantime (cf. Luridiana 2003).
d) Existence of the Cosmic Microwave Background Radiation
Even though nuclei were created during BBN, atoms as we typically think of them still did not exist. Rather, the universe was full of a very hot, dense plasma made of free nuclei and electrons. In an environment like this, light cannot travel freely -- photons are constantly scattering off of charged particles. Likewise, any nucleus that became bound to an electron would quickly encounter a photon energetic enough to break the bond.
As with the era of BBN, however, the universe would not stay hot and dense enough to sustain this state. Eventually (after about 400,000 years), the universe cooled to the point where electrons and nuclei could form atoms (a process that is confusingly described as "recombination"). Since atoms are electrically neutral and only interact with photons of particular energies, most photons were suddenly able to travel much larger distances without interacting with any matter at all (this part of the process is generally described as "decoupling"). In effect, the universe became transparent and the photons around at that time have been moving freely throughout the universe since that time. And, since the universe has expanded a great deal since that time, the wavelengths of these photons have been stretched a great deal (by about a factor of 1000).
From this basic picture, we can make two very strong predictions for this relic radiation:
- It should be highly uniform. One of the basic assumptions of BBT is that the universe is homogeneous and, given the time between the beginning of the universe and decoupling, any inhomogeneities (like those expected from inflation) would not have much time to grow.
- It should have a blackbody spectrum. As we said before, prior to decoupling the universe was full of plasma and photons were constantly scattering off of all of the ionized matter. This makes the universe a perfect absorber; no photons could leave the universe, so they would put the whole universe (or at least that part that was causally connected) in thermal equilibrium. As such, we can actually describe the universe as having a unique temperature. In classical thermodynamics, photons emitted by a blackbody at a given temperature have a very specific distribution of energies and, as Tolman showed in 1934, a blackbody spectrum will remain a blackbody spectrum (albeit at a lower temperature) as it redshifts.
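The second prediction can be sketched in formulas. The blackbody (Planck) spectrum and its behavior under redshift, in standard notation (the decoupling numbers at the end are later measured values, used only for illustration):

    B_\nu(T) = \frac{2 h \nu^3 / c^2}{e^{h\nu/kT} - 1}, \qquad T_{\mathrm{observed}} = \frac{T_{\mathrm{emitted}}}{1 + z}

Because every photon frequency is stretched by the same factor (1 + z), a blackbody stays a blackbody, just cooler. A plasma at about 3000 K at decoupling, seen across a stretch factor of about 1090, should therefore appear today as a blackbody near 3000/1090, or about 2.75 K.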
The existence of this relic radiation was first suggested by Gamow along with Alpher and Herman in 1948. Their initial predictions correctly stated that this radiation, which would have been visible light at the time of decoupling, would by now be shifted into the microwave region of the electromagnetic spectrum. That, combined with the fact that the source of the radiation put it "behind" normal light sources like stars and galaxies, gave this relic its name: the Cosmic Microwave Background Radiation (CMBR or, equivalently, just CMB).
While they were correct in the broad strokes, the Gamow, Alpher & Herman estimates for the exact temperature were not so precise. The initial range was somewhere between 1 K and 5 K, using somewhat different models for the universe (Alpher 1949), and in a later book Gamow pushed this estimate as high as 50 K. The best estimates today put the temperature at 2.725 K (Mather 1999). While this may seem to be a large discrepancy, it is important to bear in mind that the prediction relies strongly on a number of cosmological parameters (most notably Hubble's Constant) that were not known very accurately at the time. We will come back to this point below, but let us take a moment to discuss the measurements that led to the current value (Ned Wright's CMB page is also worth reading for more detail on the early history of CMBR measurements).
The first intentional attempt to measure the CMBR was made by Dicke and Wilkinson in 1965 with an instrument mounted on the roof of the Princeton Physics department. While they were still constructing their experiment, they were inadvertently scooped by two Bell Labs engineers working on microwave transmission as a communications tool. Penzias and Wilson had built a microwave receiver but were unable to eliminate a persistent background noise that seemed to affect the receiver no matter where they pointed it in the sky, day or night. Upon contacting Dicke for advice on the problem, they realized what they had observed and eventually received the Nobel Prize for Physics in 1978. More detail about the discovery is available here.
Since then, measurements of the temperature and energy distribution of the CMBR have improved dramatically. Measuring the CMBR from the ground is difficult because microwave radiation is strongly absorbed by water vapor in the atmosphere. To circumvent this problem, cosmologists have used high altitude balloons, ballistic rockets and satellite-borne experiments. The most famous experiment focusing on the temperature of the CMBR was the COBE satellite (COsmic Background Explorer). It orbited the Earth, taking data from 1989 to 1993.
COBE was actually several experiments in one. The DMR instrument measured the anisotropies in the CMBR temperature across the sky (see more below) while the FIRAS experiment measured the absolute temperature of the CMBR and its spectral energy distribution. As we mentioned above, the prediction from BBT is that the CMBR should be a perfect blackbody. FIRAS found that this was true to an extraordinary degree. The plot below (provided by Ned Wright) shows the CMBR spectrum and the best fit blackbody. The fit is so good that the plotted error bars had to be blown up to 400 standard deviations just to be visible. In fact, the CMBR is as close to a blackbody as anything we can create here on Earth.
In many alternative cosmology sources, one will encounter the claim that the CMBR was not a genuine prediction of BBT, but rather a "retrodiction" since the values for the CMBR temperature that Gamow predicted before the measurement differed significantly from the eventual measured value. Thus, the argument goes, the "right" value could only be obtained by adjusting the parameters of the theory to match the observed one. This misses two crucial points:
- Existence, not temperature, is the key. In the absence of BBT, there would be no reason to expect a uniform, long-wavelength background radiation in the universe. True, astronomers like Eddington predicted that we would see radiation from interstellar dust (absorbed starlight, re-radiated as thermal emission) or background stars. However, those models do not lead to the sort of uniformity we see in the CMBR, nor do they produce a blackbody spectrum (stars, in particular, have strong spectral lines which are noticeably absent in the CMBR spectrum). Similar predictions can be made for background radiation in other parts of the electromagnetic spectrum (x-ray background from distant supernovae and quasars, for example) and the distribution of those backgrounds is nowhere near as uniform as we see with the CMBR.
- This is how science works. No physical theory exists independent of free parameters that are determined from subsequent observation. This is true of Newtonian gravity and GR (Newton's constant), it is true of quantum mechanics and quantum electrodynamics (Planck's constant, the electron charge) and it is true of cosmology. As we mentioned above, the test of a theory is not that it meets one prediction. Instead, the true test is whether the model can match other observations once it has been calibrated against one data set.
A final test of the cosmological origins of the CMBR comes from looking at distant galaxies. Since the light from these galaxies was emitted in the past, we would expect that the temperature of the CMBR at that time was correspondingly higher. By examining the distribution of light from these galaxies, we can get a crude measurement of the temperature of the CMBR at the time when the light we are observing now was emitted (e.g. Srianand 2000). The current state of this measurement is shown in the plot below (provided by Ned Wright). The precision of this measurement is obviously not nearly as great as we saw with the COBE data, but they do agree with the basic BBT predictions for the evolution of the CMBR temperature with redshift (and disagree significantly with what one would expect for a CMBR generated from redshifted starlight or the like).
e) Fluctuations in the CMBR
As mentioned in the previous point, the temperature of the CMBR is extremely uniform; the differences in the temperature at different locations on the sky are below 0.001 K. Since matter and radiation were tightly coupled during the earliest stages of the universe, this implies that the distribution of matter was also initially uniform. While this matches our basic cosmological assumption, it does lead to the question of how we went from that very uniform universe to the decidedly clumpy distribution of matter we see on small scales today. In other words, how could planets, stars, galaxies, galaxy clusters, etc., have formed from an essentially homogeneous gas?
In studying this question, cosmologists would end up developing one of the most powerful and spectacularly successful predictions of BBT. Before describing the theory side of things, however, we will take a brief detour into the history of measuring fluctuations ("anisotropies" in cosmological terms) in the CMBR.
The first attempt to measure the fluctuations in the CMBR was made as part of the COBE (COsmic Background Explorer) mission. As part of its four year mission during the early 1990s, it used an instrument called the DMR to look for fluctuations in the CMBR across the sky. The fluctuations observed by the DMR turned out to be much smaller than the then-current BBT models had predicted. Since the instrument had been designed with the expected fluctuation amplitudes in mind, the observations ended up being just above the sensitivity threshold of the instrument. This led to speculation that the "signal" was merely statistical noise, but it was enough to generate a number of subsequent attempts to look for the signal.
With satellite observations still on the horizon, data for the following decade was mostly collected using balloon-borne experiments (see the list at NASA's CMBR data center for a thorough history). These high altitude experiments were able to get above the vast majority of the water vapor in the atmosphere for a clearer look at the CMBR sky, at the cost of relatively short observing times. This limited the amount of sky coverage these missions could achieve, but they were able to conclusively demonstrate that the signal seen by COBE was real and (to a lesser extent) that the fluctuations matched the predictions from BBT.
In 2001, the MAP probe (Microwave Anisotropy Probe) was launched, later renamed WMAP in honor of Wilkinson, who had been part of the original team looking for the CMBR back in the 1960s. Unlike COBE, WMAP was focused entirely on the question of measuring the CMBR fluctuations. Drawing from the experience and technological advances developed for the balloon missions, it had much better angular resolution than COBE (see the image below from the NASA/WMAP Science Team). It also avoided one of the problems that had plagued the COBE mission: the strong thermal emission from the Earth. Instead of orbiting the Earth, the WMAP satellite took a three month journey to L2, the second Lagrangian point in the Earth-Sun system. This metastable point is beyond the Earth's orbital path around the Sun, roughly one hundredth as far from the Earth as the Earth is from the Sun. It has been there, taking data, ever since.
In the spring of 2003, results from the first year of observation were released - and they were astonishing in their precision. As an example, for decades the age of the universe had not been known to better than about two billion years. By combining the WMAP data with other available measurements, suddenly we knew the age of the universe to within 0.2 billion years. Across the board, parameters that had been known to within 20-30 percent saw their errors shrink to 10 percent or better. For a fuller description of how the WMAP data impacted our understanding of BBT, see the WMAP website's mission results. That page is intended for a lay audience; more technical detail can be found in their list of their first year papers.
So, how did this amazing jump in precision come about? The answer lies in understanding a bit about what went on between the time when matter and radiation had equal energy densities and the time of decoupling. A fuller description of this can be found at Wayne Hu's CMB Anisotropy pages and Ned Wright's pages. After matter-radiation equality, dark matter was effectively decoupled from radiation (normal matter remained coupled since it was still an ionized plasma). This meant that any inhomogeneities (arising essentially from quantum fluctuations) in the dark matter distribution would quickly start to collapse and form the basis for later development of large scale structure (the seeds of these inhomogeneities were laid down during inflation, but we will ignore that for the current discussion). The largest physical scale for these inhomogeneities at any given time was the then-current size of the observable universe (since the effect of gravity also travels at the speed of light). These dark matter clumps set up gravitational potential wells that drew in more dark matter as well as the radiation-baryon mixture.
Unlike the dark matter, the radiation-baryon fluid had an associated pressure. Instead of sinking right to the bottom of the gravitational potential, it would oscillate, compressing until the pressure overcame the gravitational pull and then expanding until the opposite held true. This set up hot spots where the compression was greatest and cold spots where the fluid had become its most rarefied. When the baryons and radiation decoupled, this pattern was frozen on the CMBR photons, leading to the hot and cold spots we observe today.
Obviously, the exact pattern of these temperature variations does not tell us anything in particular. However, if we recall that the largest size for the hot spots corresponds to the size of the visible universe at any given time, that tells us that, if we can find the angular size of these variations on the sky, then that largest angle will correspond to the size of the visible universe at the time of decoupling. To do this, we measure what is known as the angular power spectrum of the CMBR. In short, we find all of the points on the sky that are separated by a given angular scale. For all of those pairs, we find the temperature difference and average over all of the pairs. If our basic picture is correct, then we should see an enhancement of the power spectrum at the angular scale of the largest compression, another one at the size of the largest scale that has gone through compression and is at maximum rarefaction (the power spectrum is only sensitive to the square of the temperature difference so hot spots and cold spots are equivalent), and so on. This leads to a series of what are known as "acoustic peaks", the exact position and shape of which tell us a great deal about not only the size of the universe at decoupling, but also the geometry of the universe (since we are looking at angular distance; see 1b) and other cosmological parameters.
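The pair-averaging procedure just described can be written down almost verbatim. The sketch below is a deliberately crude, brute-force version using uncorrelated mock fluctuations (so the binned averages come out roughly flat); real analyses work with measured temperatures and spherical-harmonic estimators instead:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy sky: N random directions on the sphere, each carrying a mock
    # (and here deliberately uncorrelated) temperature fluctuation in K.
    N = 1000
    v = rng.normal(size=(N, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    dT = rng.normal(scale=70e-6, size=N)

    # Separation angle of every pair, from the dot product of directions.
    i, j = np.triu_indices(N, k=1)                # count each pair once
    cosang = np.clip((v[i] * v[j]).sum(axis=1), -1.0, 1.0)
    ang = np.degrees(np.arccos(cosang))
    diff2 = (dT[i] - dT[j]) ** 2

    # Average the squared temperature difference in bins of separation angle.
    edges = np.linspace(0.0, 180.0, 10)
    which = np.digitize(ang, edges)
    for b in range(1, len(edges)):
        sel = which == b
        if sel.any():
            print(f"{edges[b - 1]:5.0f}-{edges[b]:5.0f} deg:  "
                  f"<(dT)^2> = {diff2[sel].mean():.2e} K^2")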
The figure below from the NASA/WMAP Science Team shows the results of the WMAP measurement of the angular power spectrum using the first year of WMAP data. In addition to the angular scale plotted on the upper x-axis, plots of the angular power spectrum are generally shown as a function of "l". This is the multipole number and is roughly translated into an angle by dividing 180 degrees by l. For more detail on this, you can do a Google search on "multipole expansion" or check this page. The WMAP science pages also provide an introduction to this way of looking at the data.
As with the COBE temperature measurement, the agreement between the predicted shape of the CMBR power spectrum and the actual observations is staggering. The balloon-borne experiments (particularly BOOMERang, MAXIMA, and DASI) were able to provide convincing detections of the first and second acoustic peaks before WMAP, but none of those experiments were able to map a large enough area of the sky to match with the COBE DMR data. WMAP bridged that gap and provided a much tighter measurement of the positions of the first and second peaks. This was a major confirmation of not only the Lambda CDM version of BBT, but also the basic picture of how the cosmos transitioned from an early radiation-dominated, plasma-filled universe to the matter-dominated universe where most of the large scale structure we see today began to form.
f) Large-scale structure of the universe
The hot and cold spots we see on the CMBR today were the high and low density regions at the time the radiation that we observe today was first emitted. Once matter took over as the dominant source of energy density, these perturbations were free to grow by accreting other matter from their surroundings. Initially, the collapsing matter would have just been dark matter since the baryons were still tied to the radiation. After the formation of the CMBR and decoupling, however, the baryons also fell into the gravitational wells set up by the dark matter and began to form stars, galaxies, galaxy clusters, and so on. Cosmologists refer to this distribution of matter as the "large scale structure" of the universe.
As a general rule, making predictions for the statistical properties of large scale structure can be very challenging. For the CMBR, the deviations from the mean temperature are very small and linear perturbation theory is a very good approximation. By comparison, the density of matter in our galaxy compared to the mean density of the universe is enormous. As a result, there are two basic options: either do measurements on very large physical scales where the variations in density are typically much smaller or compare the measurements to simulations of the universe where the non-linear effects of gravity can be modeled. Both of these options require significant investment in both theory and hardware, but the last several years have produced some excellent confirmations of the basic picture.
As we mentioned in the last section, the process that led to the generation of the acoustic peaks in the CMBR power spectrum was driven by the presence of a tight coupling between photons and baryons just prior to decoupling. This fluid would fall into the gravitational potential wells set up by dark matter (which does not interact with photons) until the pressure in the fluid would counteract the gravitational pull and the fluid would expand. This led to hot spots and cold spots in the CMBR, but also led to places where the density of matter was a little higher thanks to the extra baryons being dragged along by the photons and areas where the opposite was true. Like with the CMBR, the size of these areas was determined by the size of the observable universe at the time of decoupling, so certain physical scales would be enhanced if you looked at the angular power spectrum of the baryons. Of course, once the universe went through decoupling, the baryons fell into the gravitational wells with the dark matter, but those scales would persist as "wiggles" on the overall matter power spectrum.
Of course, as the size of the universe expanded, the physical scale of those wiggles increased, eventually reaching about 500 million light years today. Making a statistical measurement of objects separated by those sorts of distances requires surveying a very large volume of space. In 2005, two teams of cosmologists reported independent measurements of the expected baryon feature. As with the CMBR power spectrum, this confirmed that the model cosmologists have developed for the initial growth of large scale structure was a good match to what we see in the sky.
The second method for understanding large scale structure is via cosmological simulations. The basic idea behind all simulations is this: if we were a massive body and could feel the gravitational attraction of all of the other massive bodies in the universe and the overall geometry of the universe, where would we go next? Simulations answer this question by quantizing both matter and time. A typical simulation will take N particles (where N is a large number; hence the term N-body simulation) and assign them to a three-dimensional grid. Those initial positions are then perturbed slightly to mimic the initial fluctuations in energy density from inflation. Given the positions of all of these particles and having chosen a geometry for our simulated universe, we can now calculate where all of these particles should go in the next small bit of time. We move all the particles accordingly and then recalculate and do it again.
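A toy version of the loop just described might look like the following. Everything here is illustrative: a direct force sum over a handful of particles in arbitrary units with G = 1, with none of the comoving coordinates, expanding background, or tree/mesh force solvers that production cosmological codes rely on:

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy N-body sketch: a small perturbed lattice standing in for the
    # primordial fluctuations, evolved under self-gravity alone.
    n_side, dt, steps, soft = 4, 0.01, 200, 0.1
    grid = np.stack(np.meshgrid(*[np.arange(n_side, dtype=float)] * 3),
                    axis=-1).reshape(-1, 3)
    pos = grid + 0.05 * rng.normal(size=grid.shape)  # perturbed positions
    vel = np.zeros_like(pos)

    def accel(pos):
        """Softened direct-sum gravity; equal unit masses, G = 1."""
        d = pos[None, :, :] - pos[:, None, :]        # d[i, j]: from i to j
        r2 = (d**2).sum(axis=-1) + soft**2
        np.fill_diagonal(r2, np.inf)                 # no self-interaction
        return (d / r2[..., None]**1.5).sum(axis=1)

    # Leapfrog (kick-drift-kick) time stepping.
    a = accel(pos)
    for _ in range(steps):
        vel += 0.5 * dt * a
        pos += dt * vel
        a = accel(pos)
        vel += 0.5 * dt * a

    print("mean displacement from the initial lattice:",
          np.linalg.norm(pos - grid, axis=1).mean())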
Obviously, this technique has limits. If we assign a given mass to all of our particles, then measurements of mass below a certain limit will be strongly quantized (and hence inaccurate). Likewise, the range of length scales is limited: above by the volume of the chunk of the universe we have chosen to simulate and below by the resolving scale of our mass particles. There is also the problem that, on small scales at least, the physics that determines where baryons will go involves more than just gravity; gas dynamics and the effects of star formation make simulating baryons (and thus the part of the universe we can actually see!) challenging. Finally, we do not expect the exact distribution of mass in the simulation to tell us anything in particular; we only want to compare the statistical properties of the distribution to our universe. This article discusses these statistical methods in detail as well as providing references to the relevant observational data.
Still, given all of these limitations, efforts to simulate the universe have improved tremendously over the last few decades, both from a hardware and a software standpoint. White (1997) reviews the basics of simulating structure formation as well as the observational tests one can use to compare simulations to real data. He shows results for four different flavors of models -- including both the then-standard "cold dark matter" universe and a universe with a cosmological constant. This was before the supernovae results were released, putting the lie to the claim that, prior to the supernovae data, the possibility that the cosmological constant was non-zero was ignored in the cosmological literature. A CDM universe was the front runner at the time, but cosmologists were well aware of the fact that the data was not strong enough to rule out several variant models.
The Colombi (1996) paper is a good example of this awareness as well. In this article, various models containing different amounts of hot and cold dark matter were simulated, as well as attempts to include "warm" dark matter (i.e. dark matter that is not highly relativistic, but still moving fast enough to have significant pressure). Their Figure 7 provides a nice visual comparison between observed galaxy distributions and the results of the various simulated universes.
In 2005, the Virgo Consortium released the "Millennium Simulation"; details can be found on both the Virgo homepage and this page at the Max Planck Institute for Astrophysics. Using the concordance model (drawn from matching the results of the supernovae studies, the WMAP observations, etc.), these simulations are able to reproduce the observed large scale galaxy distributions quite well. On small scales, there is still some disagreement, however (see below for a more detailed discussion).
g) Age of stars
Since the stars are a part of the universe, it naturally follows that, if BBT and our theories of stellar formation and evolution are more or less correct, then we should not expect to see stars older than the universe (compare 3d!). More precisely, the WMAP observations suggest that the first stars were "born" when the universe was only about 200 million years old, so we should expect to see no stars which are older than about 13.5 billion years. On the other hand, stellar evolution models tell us that the lowest-mass stars (those with a mass roughly 1/10 that of our Sun) are expected to "live" for tens of trillions of years, so there is a chance for significant disagreement.
Before delving into this issue further, some nomenclature is necessary. Astronomers generally divide stars into three generations, called "populations". The distinguishing characteristic here is the abundance of elements with atomic mass larger than helium (these are all referred to as "metals" in the astronomical literature, and the abundance of metals as the star's "metallicity"). As we explained in section 2c, to a very good approximation primordial nucleosynthesis produced only helium and hydrogen. All of the metals were produced later in the cores of stars. Thus, the populations of stars are roughly separated by their metal content; Population I stars (like our Sun) have a high metallicity, while Population II stars are much poorer in metals. Since the metal content of our universe increases over time (as stars have more and more time to fuse lighter elements into heavier ones), metallicity also acts as a rough indicator for when a given star was formed. The different stellar generations are also summarized in this article.
Although it may not be immediately obvious, the abundance of metals during star formation has a significant impact on the resulting stellar population. The basic problem of star formation is that the self-gravity of a given cloud of interstellar gas has to overcome the cloud's thermal pressure; clouds where this occurs will eventually collapse to form stars, while those where it does not will remain clouds. As a gas cloud collapses, the gravitational energy is transferred into thermal energy and the cloud heats up. In turn, this increases the pressure and makes the cloud less likely to collapse further. The trick, then, is to radiate away that extra thermal energy as efficiently as possible so that collapse may continue. Metals tend to have a more complex electron structure and are more likely to form molecules than hydrogen or helium, making them much more efficient at radiating away thermal energy. In the absence of such channels, the only way to get around this problem is by increasing the gravitational side of the equation, i.e., the mass of the collapsing gas cloud. Hence, for a given interstellar cloud, more metals will result in a higher fraction of low mass stars, relative to the stars produced by a metal-poor cloud.
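One standard way to quantify this argument is the Jeans mass, the rough minimum mass a cloud of temperature T, density rho and mean molecular weight mu must have for gravity to beat pressure; the expression below is a textbook form, not something derived in the text above:

    M_J \;\simeq\; \left(\frac{5\,k_B T}{G\,\mu\,m_H}\right)^{3/2} \left(\frac{3}{4\pi\rho}\right)^{1/2} \;\propto\; \frac{T^{3/2}}{\rho^{1/2}}

Since the Jeans mass grows as T^{3/2}, a metal-poor cloud that cannot cool efficiently must be that much more massive before it can collapse, which is why metal-free gas tends to produce very massive stars.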
The extreme case in this respect is the Population III stars. These were the very first generation of stars and hence they formed with practically no metals at all. As such, their mass distribution was skewed heavily towards the high mass end of the spectrum. Some of the details and implications of this state of affairs can be found in this talk about reionization and these two articles on the first stars.
Observing this population of stars directly would be a very good piece of evidence for BBT. Unfortunately, the lifetime of stars (which is to say the time during which they are fusing hydrogen in their cores into helium) decreases strongly with their mass. For a star like our Sun, the lifetime is on the order of 10 billion years. For the Population III stars, which are expected to have a typical mass around 100 times that of the Sun, this time shrinks to around a few million years (an instant, by cosmological standards). Therefore, we must look at regions of the universe where the light we observe was first emitted near the time when these stars shone. This means that the light will be both dim and highly redshifted (z ~ 20). The combination of these two effects makes observations from the ground largely unfeasible, but may become possible when the James Webb space telescope begins service. First promising results were obtained just recently by the Spitzer infrared space telescope.
Like stars today, Population III stars formed heavy elements in their cores (by nuclear fusion), and even heavier elements when they went supernova. These metals were dispersed throughout space by the supernova explosions, and out of this enriched gas the Population II stars formed. With the help of metal cooling, lower mass stars were able to form, low enough that they are still burning today. Population II stars are preferentially seen in globular clusters orbiting the galaxy and in the galactic bulge. By using the Hertzsprung-Russell diagram, astronomers can get an estimate of when the stars in a globular cluster (or other star cluster) formed. This is explained in more detail in the FAQ on Determining distances to astronomical objects or on this page about the Hertzsprung Russell Diagram and Stellar Evolution.
A second method of determining stellar age is by measuring the beryllium content in a star's outer layers. Applying this technique to the globular cluster NGC 6397, Pasquini (2004) found an age of 13.4 billion years, plus or minus 800 million years (more details can be found in this article). Other studies like Krauss (2003) and Hansen (2004) obtained similar results with related methods: 12.2 and 12.1 billion years, respectively, with errors on order 1 to 2 billion years.
The large uncertainties in these ages are partly due to the fact that these methods depend crucially on our theory of stellar development ("stellar evolution"), which in turn depends on our understanding of the nuclear reactions going on in stars. Despite the relatively low energies involved, the details of some of these reactions remain somewhat imprecise.
Recently, new results were obtained on the speed of a nuclear reaction chain which is quite important in stars, the so-called CNO cycle. This study (Imbriani 2004) revealed that the speed of this reaction is far slower than was previously assumed. This in turn implies that the stars are older than previously assumed, by something between 0.7 and 1 billion years. Using Pasquini's data, this implies that the oldest stars in the Milky Way are 14.1 to 14.4 billion years old. This is older than the age of the universe determined from other measurements (compare the WMAP data, 2d); but one has to take into account the relatively large errors associated with these age determinations (see above). So these star ages are still consistent with the age of the universe determined in other ways.
As pointed out by Dauphas (2005), it is also possible to determine the age of the Milky Way without relying on assumptions about the details of the nuclear reactions going on in the stars. He used measurements of the uranium (U-238) and thorium (Th-232) abundances, both in the solar system and in low-metallicity halo stars, to determine the age of our galaxy. His result was 14.5 billion years, with uncertainties of -2.2 and +2.8 billion years. Taking these error margins into account, this is again nicely consistent with the age of the universe determined by WMAP.
One should note that the ages of stars in distant galaxies can also be determined. To do this, one calculates theoretical models of what the spectrum of a galaxy looks like when the stars in it have a certain age (see Jimenez 2004), and compares these model predictions with the observed spectra of galaxies. Obviously, this is a somewhat complicated method, with potential errors even greater than those of the methods for determining the ages of stars in our neighbourhood.
Nevertheless, so far the results found are consistent with a universe with a finite age. In galaxies which are very far away from us, and which we therefore see as they were when they still were very young, only young stars are found. For example, Nolan (2003) found that in two galaxies with redshifts around 1.5, the stars had ages of around 3-4 billion years at most. There was also a detailed study of the star formation history of the universe, using observations of the ages of stars in distant galaxies, which showed that the rate of star formation was highest about 5 billion years ago (Heavens 2004).
h) Evolution of galaxies
Galaxies are also dynamic entities, changing over time. Like with large scale structure, the broad strokes of galaxy formation follow a path of "hierarchical clustering": small structures form very early on and these merge to form larger structures as time goes on. Within this larger framework, some galaxies will develop secondary features like spiral arms or bar-like structures, some of which will be transitory and some of which will persist.
This basic picture tells us that, if we look at very distant regions of the universe (i.e., galaxies with very high redshifts), we should see mainly small, irregular galaxies. For the most part, this is what we find (with some notable exceptions, as we will cover later). Starting in 1996, the Hubble Space Telescope took a series of very deep images: the Hubble Deep Field, the Hubble Deep Field South, and the Hubble Ultra Deep Field. As one would expect, the morphology of the few nearby galaxies in these images is quite a bit different from the very high redshift galaxies.
Another important indicator of galaxy evolution comes from quasars, specifically their redshift distribution. Quasars are generally believed to be powered by supermassive black holes at the centers of galaxies accreting matter; as dust and gas falls into the black hole, it heats up tremendously and emits a huge quantity of energy across a broad spectrum. For most true quasars, the amount of energy released during this process is a few orders of magnitude larger than all of the light emitted by the rest of the galaxy. In order for this sort of behavior to occur for some length of time, galaxies need to have a large quantity of dust and free gas near their cores. The bulk of observed quasars have redshifts near z ~ 2, which suggests that there was a particular epoch during the history of the universe when the conditions for quasar activity were right in a large fraction of galaxies. For steady-state models of the universe, this is hard to explain. On the other hand, BBT explains this quite neatly by noting that, in their early stages of formation, galaxies have a great deal of dust and free gas, and galaxy collisions were also more common, which could serve as a mechanism for triggering quasar activity.
With that said, it should be noted that galaxy formation and evolution remains a very open question within BBT and not without controversy. See section 5d for more details.
i) Time dilation in supernova brightness curves
As explained in 2b, light traveling through the expanding universe undergoes redshift (i.e., the wavelength is stretched to larger values as the universe expands). Since the wavelength and frequency for a given photon are related inversely through the speed of light, which is a constant, it is obvious that as the wavelength increases the frequency must decrease. Likewise, if light from a distant galaxy varies with time (like we would expect for Cepheid variable stars or pulsars), then the time between these events is stretched (remember, frequency is inversely related to time). Thus, if we observe this galaxy from Earth, we will see a slower variation than an observer in that distant galaxy and the ratio between those times will be exactly equal to one plus the redshift of the galaxy.
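Spelled out as a formula:

    \Delta t_{\rm obs} \;=\; (1+z)\,\Delta t_{\rm emitted}

So, to pick an illustrative number, a supernova light curve that takes 20 days to decline in its own rest frame should appear to take 30 days when observed at z = 0.5.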
While observing this time dilation with stars in distant galaxies is difficult, we can test it using supernovae in those galaxies. Type Ia supernovae, in particular, are known to have a characteristic signature, increasing in brightness rapidly and then slowly fading away over the course of several weeks. This signature varies somewhat depending on the exact chemical composition of the star before it undergoes its supernova explosion, but with careful monitoring we can compensate for this effect. This aspect was key to the supernovae measurements that gave the earliest indication of the existence of dark energy and has been the subject of many papers (for example, Leibundgut 1996, Riess 1997, Goldhaber 2001 and Knop 2003). These papers make it clear that correcting for the effects of redshift time dilation is critical for understanding the data. In particular, Goldhaber rules out a "no time dilation" model at 18 standard deviations. The plot below (from Ned Wright) demonstrates Goldhaber's findings.
j) Tolman tests
In addition to predicting that the wavelength of light should change as the universe expands (where the observed wavelength is stretched by a factor of (1+z) relative to the initial wavelength), the BBT also requires that the surface brightness of light sources decreases, but as the fourth power of (1+z). One important consequence of this effect is that thermal emission from a black body at a given temperature at some point in the history of the universe will still appear as a thermal spectrum later on, but at a temperature that is a factor of (1+z) lower (as we mentioned in 2d). Thus, by measuring the deviation of the observed CMBR spectrum from that of a perfect black body, we get a very powerful test of the idea that the expansion of the universe follows the basic picture of standard BBT. This measurement was carried out with the COBE satellite in the 1990s and the spectrum was found to match a black body to one part in 10,000 (Mather 1990, Fixsen 1996).
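For reference, here is a quick sketch of where the fourth power comes from, using the standard relation between luminosity distance and angular diameter distance, d_L = (1+z)^2 d_A. Surface brightness is flux per unit solid angle; flux falls as 1/d_L^2, while the solid angle subtended by an object of fixed physical size falls as 1/d_A^2:

    SB \;\propto\; \frac{F}{\Omega} \;\propto\; \frac{1/d_L^2}{1/d_A^2} \;=\; \frac{d_A^2}{d_L^2} \;=\; \frac{1}{(1+z)^4}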
A number of attempts have been made to apply this test to other objects in the universe since Tolman worked out the surface brightness scaling in 1930. The major difficulty in applying this test to any particular object is that, in order to compare the observed surface brightness against the expectation, one must know the absolute brightness in the first place. The lack of such a "standard candle" in cosmology is keenly felt.
In 2001, a series of papers by Lubin attempted to apply this test to distant galaxies. This is a difficult task since galaxies are dynamic entities on the time scale of the universe. They undergo periods of star bursts (rapid formation of stars, usually in galactic disks), they merge with one another, the opacity of interstellar dust changes as the metal content increases, and their constituent stars change in luminosity as they age. Lubin's papers attempt to take all of this into account. After folding these effects into the expected scaling for the galaxy surface brightnesses, they find results that are consistent with what they expect from the galaxy evolution models. This is not as strong an indication that the Tolman relation holds as the CMBR temperature measurement, but it is a positive sign that the variation from the strict relation is more or less understood. Indeed, the results were strong enough that "tired light" models could be ruled out using this method.
k) Sunyaev-Zel'dovich effect
The picture that was described in 2d involved the CMBR photons passing through the universe from the time of decoupling until we detect them here on Earth without interacting with anything along the way. While this is largely true, it does not hold for all photons. The regions around massive galaxy clusters are full of very hot, ionized gas. So hot, in fact, that the free electrons are moving at relativistic speeds. Since these electrons are free charges, they can interact much more readily with photons (as during the plasma phase of the universe). When CMBR photons pass through this gas, about 1% of them will interact with it. Since the photons have a much lower energy than the electrons, the scattering will impart energy to the photons via the inverse Compton effect. The result is that the CMBR spectrum is distorted, with some of the photons shifted to higher energies than we would expect from a pure thermal spectrum. This is the thermal Sunyaev-Zel'dovich effect, and when we look at the CMBR in the direction of these galaxy clusters we should expect to see the effects of this distortion (this page also offers some more details).
As we can see from the observational data, this effect is clearly observed. Since this is indicative of the fact that the photons must have passed through the cluster to get to us, this is strong evidence that the CMBR is indeed a cosmological phenomenon and not locally produced. These observations can also be used to measure the value of the Hubble parameter. The precision of the measurement is somewhat limited since it depends on the details of the distribution of the hot gas within the cluster, but the results are consistent with what we see from other methods.
l) Integrated Sachs-Wolfe effect
In addition to the Sunyaev-Zel'dovich effect, photons from the CMBR can also be subtly affected by the Integrated Sachs-Wolfe effect. The basis for this effect is gravitational redshift, one of the most basic predictions from GR and first demonstrated experimentally by Pound and Rebka in 1960. The basic idea is that, as photons enter a gravitational potential well, they pick up extra energy and when they exit they lose energy. Hence, scientists refer to photons "falling into" and "climbing out of" gravitational wells.
As CMBR photons pass through the foreground large scale structure, they pass through many such gravitational wells. If the depth of the well is static (or rather if the depth of the well is increasing at the same rate as the expansion of the universe), then the net energy change is zero. All of the energy they gained falling in is lost climbing out. However, if the universe contains dark energy (or has an open geometry), then the universe expands faster than the gravitational wells around massive objects can grow. As a result, the CMBR photons do not lose all of the energy they gained falling into the potentials. This makes the CMBR look very slightly hotter in the direction of these potentials, which also contain the highest concentrations of galaxies.
Following the release of the WMAP data, studies done by Scranton (2003), Afshordi (2004), Boughn (2004), and Nolta (2004) measured this effect using galaxies selected in a number of different ways. The signal-to-noise in any one of the measurements was not very large. However, taken together (and combined with the WMAP observation that the geometry of the universe was best fit by a flat universe), they provide significant evidence that this effect is real and is best explained by the standard Lambda CDM model of BBT.
m) Dark Matter
A common complaint regarding the inclusion of dark matter in cosmology is that it is an "epicycle", analogous to the epicycles of the Ptolemaic geocentric models of the solar system. In this view, dark matter is a crutch invented to save a model that otherwise does not fit the data. While popular with BBT critics, this stance does not hold up under further scrutiny.
The origin of dark matter as an astronomical entity comes not from cosmology, but rather from the work of Zwicky and Oort in 1933 and 1940, respectively. Zwicky's studies of galaxies' velocities in large clusters convinced him that there must be more mass present in the clusters (in order to provide sufficient gravitational pull to keep the clusters from flying apart) than could be accounted for by the visible mass of the galaxies themselves. Likewise, Oort's measurements of the rotation curves of galaxies (essentially, the orbital velocity of stars around the galactic center plotted against the stars' radii) suggested that the mass interior to these stellar orbits, as inferred from simple Newtonian physics, did not match the mass implied by the light from the centers of those galaxies. Both of these observations were made well before modern cosmology had really taken shape and, hence, were independent of any need for dark matter to make cosmological measurements match the theory. More on the history of dark matter can be found here and in van den Bergh (1999).
Like the rest of cosmology, the current evidence for dark matter comes from a number of different observations:
- Like Oort's original observations, modern measurements of the rotation curves for spiral galaxies indicate that there must be more mass in these galaxies than we can directly see. The velocity of a star (or gas cloud) in a roughly circular orbit around the center of a galaxy depends on the mass interior to that orbit, as basic Newtonian mechanics tells us. Hence, by measuring the velocity of stellar orbits at a number of radii, we can turn that into a mass profile (a toy version of this reconstruction is sketched below, after this list). Faber (1979) gives a review of a number of such velocity measurements. Two points are relevant here: First, the mass inferred from these measurements is invariably more than one would infer from looking at the visible matter in these galaxies. This was clear to Oort and remains so today. Second, the distribution of that dark matter is not the same as the visible matter. The stellar density in a spiral galaxy tends to fall off exponentially as one moves from the center to the edge in the plane of the disk. The mass profile inferred from the velocity curves, on the other hand, falls as the inverse cube of the radius (Prada 2003). This is not what we expect for baryons, which can lose gravitational energy via radiation and fall deeper into the gravitational potential well of the galaxy. For CDM, however, this option is not available (since the dark matter does not interact with photons) and hence it remains stuck at larger radii. Simulations of CDM verify this behavior, providing another clue that not only is dark matter present, but the majority of it is not made of baryons.
- A similar game can be played with elliptical galaxies. These galaxies do not have the same simple orbital structure as spiral galaxies so the observation is somewhat different. Rather than measuring the velocity curves, we can look at the X-ray emission from these galaxies. X-rays are produced by extremely hot gas (temperatures in millions of degrees) surrounding these galaxies. As with the stars in the spiral galaxy, however, the mass of the galaxy must be sufficient to keep the particles in the gas gravitationally bound to the galaxy, so a mass can be inferred from a measurement of the X-ray temperature. Again, the mass measured in this manner invariably exceeds that expected by the amount of visible matter (cf. Fabian 1986).
- In a similar fashion, one can also look at the motion of galaxies in clusters. Like stars in elliptical galaxies, the motions of galaxies in these clusters are not simple circular orbits. To get a measure of the kinetic energy in the galaxies, astronomers measure their velocity dispersion, essentially the variance of the observed velocities for galaxies in the cluster. If the galaxy cluster is relatively unperturbed (i.e. has not undergone a major merger with another galaxy cluster), then the virial theorem can be used to calculate the expected gravitational force necessary to hold together a galaxy cluster of a given velocity dispersion. As mentioned above, Zwicky's 1933 measurements of galaxy cluster velocity dispersions were the first indicator that the total mass of clusters must be considerably higher than just the visible matter and this remains true with modern measurements.
- As we mentioned in 2k, galaxy clusters are surrounded by a halo of extremely hot ionized gas. This means that we can use the same technique from our elliptical galaxy example above to get a mass measure for galaxy clusters and compare it to the visible mass. X-ray observations with the Chandra satellite have indeed revealed evidence for dark matter; see the press releases Chandra Discovers "Rivers Of Gravity" That Define Cosmic Landscape and Motions of Nearby Galaxy Cluster Reveal Presence of Hidden Superstructure.
- The large amount of mass contained in galaxy clusters also makes them an excellent source of gravitational lenses. One of the more startling predictions of GR, gravitational lensing is the deflection of light due to gravitational potentials. The confirmation of gravitational lensing by Eddington's 1919 expedition was one of the early important observations in favor of GR and lensing remains a powerful cosmological probe today. For particularly strong gravitational potentials (like galaxy clusters), light from sources behind the lens can actually travel multiple paths to observers on the other side of the lens. This results in distorted, arc-like images of the background object like those seen in this image of Abell 2218. The pattern and shape of these images is very sensitive to the mass (and mass distribution) of the lensing object, providing our cleanest measure of galaxy cluster masses and, once again, dark matter is necessary to bridge the gap between the observed and visible mass. A list of currently discovered gravitational lenses can be found on the CASTLES Survey website. This article, Scientists Map Dark Matter, Prove Einstein Right also explains this effect in some detail.
- Finally, we have the current cosmological concordance model. Measurements of distant supernovae, the CMBR anisotropies and large scale structure all point to a model which has a relatively large component of dark matter. Further, the latter two measurements are also able to differentiate between the amount of matter in normal baryonic form and that in non-baryonic matter. In the best fitting model they require about 5 parts of the latter for every part of the former.
A further review of these observations is provided in this page on Dark Matter.
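As a concrete illustration of the rotation-curve argument from the first item in the list above, here is a minimal sketch. The radii and velocities are invented but typical numbers (a curve that stays flat near 220 km/s, roughly Milky Way-like); they are not data from any of the papers cited:

    G = 6.674e-11                     # m^3 kg^-1 s^-2
    kpc, Msun = 3.086e19, 1.989e30    # meters per kiloparsec, kg per solar mass

    # Hypothetical flat rotation curve: (radius in kpc, velocity in km/s).
    curve = [(2, 180), (5, 210), (10, 220), (20, 220), (40, 220)]

    for r_kpc, v_kms in curve:
        r, v = r_kpc * kpc, v_kms * 1e3
        M = v**2 * r / G              # Newtonian enclosed mass: M(<r) = v^2 r / G
        print(f"r = {r_kpc:3d} kpc  ->  M(<r) ~ {M / Msun:.1e} Msun")

    # A flat curve makes M(<r) grow linearly with radius, even far beyond
    # where the starlight has died away -- the signature of a dark halo.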
So, given that we need a new sort of matter, one that does not interact with light in the way that normal matter does, a few questions are apparent: Is there a reasonable model that can provide possibilities for what this dark matter really is? And if so, why is it that we have not been able to observe it directly in laboratories here on Earth?
Before getting into those questions, it is important to recall that not all dark matter is non-baryonic. For these baryons, "dark" is a somewhat vague term. Occasionally, it is taken to mean that they do not give off light in the visible part of the electromagnetic spectrum; for example, warm interstellar and intergalactic gas, brown dwarfs, black holes, and neutron stars. Of these, only the first is currently beyond our abilities to observe directly; brown dwarfs give off light in the infrared, while black holes and neutron stars (or rather their environments) are strong sources of radio waves and X-rays. Taking into account the full electromagnetic spectrum available to astronomers, about half of the baryons in the universe can be called "dark matter" at the current time.
So, having addressed that, we return to the non-baryonic sector of dark matter. The current best bets for dark matter candidates come from particle physics, where current theories of supersymmetry supply a whole host of possibilities. In the Minimal Supersymmetric Standard Model, each particle in the Standard Model has a super-partner particle of much greater mass. These particles would only exist in abundance at the very earliest stages of the universe, but the lightest of these particles would be stable against decay into lighter particles (since none exist) and would thus remain in existence today. In scenarios like this, the lightest particle is typically the neutralino. An even more exotic, but widely discussed, possibility is the so-called "axion". Collectively, these particles are generally called WIMPs, short for "weakly interacting massive particles".
For many years, neutrinos were considered a viable dark matter candidate (having the advantage that we definitely knew they existed). However, as more evidence accumulated from large scale structure and the CMBR, the possibility that neutrinos could explain the observations faded. In order to match the observations, the dark matter had to be cold, i.e. moving slowly relative to the speed of light. With their very small mass, neutrinos are very easy to accelerate to near light speed. Since they have so much kinetic energy, neutrinos do not easily collapse into relatively small gravitational potentials. If they were the dominant form of dark matter, they would smooth out the distribution of matter on small scales, in clear conflict with the strong small scale clustering we observe. Indeed, when we include information from WMAP, cosmologists find that neutrinos can comprise no more than 1.5% of the total energy density in the universe.
As the evidence for dark matter mounted and particle physics was able to provide a number of plausible candidates, a number of experiments have begun over the last several years to detect dark matter directly. So far the experiments have not been able to make a definite detection, but a great deal of theoretical parameter space remains uncharted. For a review of the current constraints, these two articles are worth reading.
Another exciting possibility on the horizon is the Large Hadron Collider. This experiment at CERN is expected to reach high enough energies to look for supersymmetric particles, the discovery of which would be an important indicator that our current theories about dark matter particles are a strong possibility. Of course, it is also possible that the LHC will find something entirely new and unexpected.
n) Dark Energy
In the usual "epicycle" complaint, dark energy is quickly added to the list along with dark matter. As with dark matter, calling dark energy an epicycle inserted to save BBT ignores a number of the facts of the case. Unlike dark matter, the only evidence for dark energy comes from purely cosmological measurements, but the existence of some sort of dark energy was part of GR and BBT from the very earliest days of the theory, hardly what one would expect for a parameter invented ad hoc to save a theory. Further, the evidence for dark energy comes from a wide variety of cosmological observations, each with their own independent errors and systematic biases. Additionally, there are theoretical arguments that this type of energy should exist.
First, we look at the observational evidence.
By the mid-1990s, a number of cosmological observations had reached sufficient precision that it was difficult to reconcile them with a universe dominated by dark matter. Roughly a decade and a half prior, Alan Guth and others had suggested an addition to the then-current picture of BBT: inflation. The motivation for inflation was to explain the horizon and flatness problems (basically, why is the universe so uniform and so close to flat if we know that these are unstable solutions to the equations governing BBT; this is covered in more detail in 3e). Since that time, inflation had become a standard part of BBT (and remains so today). One of the most generic inflation predictions was that the overall density of the universe should be very, very close to the critical value. Mid-1990s measurements of the matter density from galaxy clusters and other sources, however, consistently preferred matter densities much lower than the critical value. At the same time, measurements of the ages of the oldest stars were yielding ages that were inconsistent with the age of the universe based on a matter-only model. An open model, where the density was lower than the critical value, would alleviate these observational problems to a certain extent, but would be difficult to square with inflation, which had been given a strong boost by the COBE CMBR measurements a few years prior. As it would turn out, dark energy solved all these disparate problems. The story is told in more detail in this article: Dark Energy: Just What Theorists Ordered.
While dark energy was a frequently mentioned possible solution to these problems by the late 1990s, few cosmologists were willing to make that leap without stronger evidence. For many cosmologists, that evidence came in the form of the 1998 supernovae results. Two teams, working independently and with largely disjoint sets of data, found that observations of distant supernovae were consistently dimmer than one would expect for a matter-only universe (see Riess 1998 and Perlmutter 1999). Indeed, they found that the expansion of the universe had been accelerating for the last several billion years, beyond the effect expected even for an open universe. The best fit to the data included a substantial dark energy component, enough to keep the universe's geometry flat while also matching the low matter density from the galaxy cluster measurements and resolving the age crisis. For more details see this page: Is there a nonzero cosmological constant?.
For those still reluctant to include dark energy in their models, the situation became more difficult with the release of the first year WMAP results. These observations revealed that the total density of the universe was very close to the critical value, putting the last nail into the open universe coffin. Having a detailed CMB map also allowed for a much cleaner measurement of the integrated Sachs-Wolfe effect, one of the key indicators of dark energy.
A good summary of the various lines of evidence supporting the existence of dark energy is also given by this webpage: Dark Energy.
While the current data is sufficient to indicate the need for something like dark energy, the details of dark energy are still largely unconstrained. We do not know what the equation of state for dark energy is, whether it remains constant or changes over time, whether the dark energy density remains constant across all space or if it clusters, etc. As with dark matter, however, a number of potential models from theoretical physics have been proposed, although the physics of dark energy is generally more speculative than that of dark matter. They all match the current data but, in general, make very different predictions for future observations. We will review a few of them briefly.
The most basic form of dark energy is a cosmological constant: a smooth, constant energy density everywhere in the universe, with equation of state parameter equal to -1. This sort of constant energy density matches the basic picture of the vacuum from quantum field theory: even in the absence of particles, so-called "zero-point fluctuations" will fill all of space uniformly. Without a proper theory of quantum gravity, a precise calculation of the magnitude of this vacuum energy density is impossible (we would need to know the proper quantization of space and time to do so). In the absence of such a theory, the most obvious calculation (based on the Planck mass) gives a vacuum energy density roughly 120 orders of magnitude greater than the energy density we infer from cosmological observations. This disconnect has been called "the worst prediction ever made in theoretical physics", with no small amount of justification.
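The size of that mismatch is easy to reproduce with a back-of-the-envelope calculation. The sketch below takes the naive Planck-cutoff estimate for the vacuum energy density, c^7/(hbar G^2), and compares it to roughly 70% of the critical density for H0 ~ 70 km/s/Mpc; the exact exponent depends on the cutoff chosen, and this crude version lands near 10^123, in the neighborhood of the 120 orders of magnitude quoted above:

    import math

    hbar, G, c = 1.055e-34, 6.674e-11, 3.0e8   # SI units, rounded

    # Naive vacuum energy density with a Planck-scale cutoff.
    rho_planck = c**7 / (hbar * G**2)          # J/m^3

    # Observed dark energy: ~70% of the critical density, as energy density.
    H0 = 70e3 / 3.086e22                       # Hubble constant in 1/s
    rho_crit = 3 * H0**2 / (8 * math.pi * G) * c**2
    rho_lambda = 0.7 * rho_crit                # J/m^3

    print(f"naive vacuum estimate : {rho_planck:.1e} J/m^3")
    print(f"observed dark energy  : {rho_lambda:.1e} J/m^3")
    print(f"mismatch              : ~10^{math.log10(rho_planck / rho_lambda):.0f}")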
To reconcile this discrepancy, one might imagine that a full accounting of the contributions from all of the different parts of the theory would largely cancel each other, leaving the small remnant vacuum energy density we observe today. A further discussion of this idea (and related ones) can be found here: What's the Energy Density of the Vacuum?
By relaxing the requirement that the density of dark energy remain constant over time, we arrive at the class of dark energy models called "quintessence". The idea here is that, instead of relying on a slight asymmetry in particle physics to get our dark energy, we propose the existence of a (so far entirely hypothetical) type of field; recall that, in quantum field theory, "particles" and "fields" are largely the same thing. As with the vacuum energy, the equation of state for this field is negative. However, since it is associated with a field rather than an innate part of spacetime, the energy density and the equation of state can change over time. Depending on the details of the model, this flexibility can help to explain the "cosmic coincidence problem": the fact that the energy densities of dark energy and matter are nearly equal today puts us at a relatively rare point in the history of our universe, akin to just happening to be in the exact place where two transcontinental trains pass each other. The current data is sufficient to rule out very strong evolution in the equation of state, but smaller changes associated with some varieties of quintessence are still viable models.
In summary, while dark matter has a number of promising models and direct detection is a very real possibility in the near future, dark energy remains a mystery. Several models exist which explain the current data, but none of them are nearly as mature as the dark matter models. Future observations will be able to put greater constraints on both the current equation of state and its change over time, but testing these models in detail is extremely challenging. As with any area of current theoretical research, we will simply have to wait until more data is available and the theory has advanced before making more detailed statements.
z) Consistency
In the discussion above, we made frequent reference to the fact that many different sorts of cosmological observations are combined to produce the concordance Lambda CDM model that most cosmologists use today. This should not be interpreted as a set of observations all contingent on each other for mutual support, wherein the removal of one observation causes the entire structure to collapse. Rather, it is a case of finding the intersection of independent lines of evidence to locate the best overall solution. Even if future data shows that our interpretation of one line is incorrect, the others remain largely unaffected.
As an example, consider the WMAP team's Determination of Cosmological Parameters paper. The age of the universe obtained from the WMAP measurements is consistent with the stellar age estimates described earlier. The baryon-to-photon ratio is consistent with the value inferred from the deuterium and helium abundances via primordial nucleosynthesis. The Hubble constant is consistent with measurements from distant supernovae, the Tully-Fisher relation and the surface brightnesses of galaxies. Likewise, the cosmological model from the WMAP measurements is consistent with measurements of large scale structure from surveys like the Sloan Digital Sky Survey (SDSS) and the Two-Degree Field Survey (2dF). If these individual results were not compatible with each other, then we would not see an improvement in the parameter constraints when we combine the data sets. The fact that we do see an improvement is evidence that the theory does, indeed, hold together.
3) Problems and Objections
This section will deal with a number of the common objections to BBT. These are not full-blown alternatives to BBT (we will cover that in the next section), but rather objections to either the fundamental basis for BBT or radical re-interpretations of the physical data.
a) "Something can not come out of nothing" - the first law of thermodynamics
The simple statement "something can not come out of nothing" is, in itself, not very convincing. From quantum field theory, we know that something does indeed come from nothing: to wit, "vacuum fluctuations". In the simplest case, an electron, a positron and a photon can appear effectively out of nowhere, exist for a brief time and then annihilate, leaving no net creation of mass or energy. Experimental support for this sort of effect has been found in a number of different experiments; see, for instance, the Wikipedia page on the Casimir effect.
The common point for all of these effects is that they do not violate any known conservation laws of physics (e.g., the conservation of energy, momentum, and charge). Something can indeed come out of nothing as long as these conservation laws permit this. But people often argue that the Big Bang theory violates the conservation of energy (which is essentially the first law of thermodynamics).
There are several valid counterarguments against this: first, as already pointed out, the BBT is not about the origin of the universe, but rather its development with time. Hence, any statement that the appearance of the universe "out of nothing" is impossible has nothing to do with what the BBT actually addresses. Likewise, while the laws of thermodynamics apply to the universe today, it is not clear that they necessarily apply to the origin of the universe; we simply do not know. Finally, it is not clear that one can sensibly talk about time "before the Big Bang". "Time" is an integral part of our universe (hence the GR term "spacetime") - so it is not clear how exactly one would characterize the energy before and after the Big Bang in a precise enough way to conclude it was not conserved.
Assuming we have some way to handle notions of time outside of our spacetime, the universe appearing out of nothing would only violate the first law of thermodynamics if the energy beforehand were different from the energy afterwards. Probably all people will agree that "nothingness" should have an energy of zero; so the law is only violated if the energy of the universe is non-zero. But there are indeed good arguments that the energy of the universe should be exactly zero!
This conclusion is somewhat counter-intuitive at first sight, since obviously all the mass and radiation we see in the universe has a huge amount of associated energy. However, this tally ignores the gravitational potential energy within the universe. In the Newtonian limit, we can get a feel for this contribution by considering the standard example of a rocket leaving the Earth, with a velocity great enough to "escape" from its gravitational field. Travelling farther and farther away from the earth, the velocity of the rocket becomes smaller and smaller, going to zero "at infinity". Hence the rocket has no energy left "at infinity" (neglecting its "rest energy" here, which is irrelevant for the argument). Applying conservation of energy, it follows that the energy of the rocket was also zero when it left Earth. But it had a high velocity then, i.e., large kinetic energy. It follows that the gravitational potential energy it had on the Earth was negative. For another explanation, see e.g. this post about Negative gravitational energy.
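A quick numerical check of this cancellation, for a hypothetical 1000 kg rocket launched at exactly escape velocity:

```python
import math

G = 6.67430e-11     # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24        # mass of the Earth, kg
R = 6.371e6         # radius of the Earth, m
m = 1000.0          # rocket mass, kg (illustrative)

v_esc = math.sqrt(2 * G * M / R)      # escape velocity, ~11.2 km/s
kinetic = 0.5 * m * v_esc**2          # positive kinetic energy at launch
potential = -G * M * m / R            # negative gravitational potential energy

print(f"escape velocity: {v_esc / 1e3:.1f} km/s")
print(f"kinetic + potential: {kinetic + potential:.1e} J")  # zero, up to rounding
```

The positive kinetic energy and the negative potential energy cancel exactly, which is the sense in which the rocket "has no energy left at infinity".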
In a Nature article in 1973, E. Tryon sketched an argument that the negative gravitational potential energy of the universe has the same magnitude as the positive energy contained in its contents (matter and radiation), and hence the total energy of the universe is indeed zero (or at least close to zero).
Part of the difficulty here is that the concept of "gravitational energy" is essentially a Newtonian one. In GR, the principle of equivalence makes it problematic to define a gravitational energy that is viewed coherently from all frames of reference. Likewise, the idea of the "total energy of the universe" is difficult to define properly. Misner, Thorne and Wheeler (one of the standard texts on GR) discuss this at length in chapter 20 of their book.
Another approach is Wald's "Hamiltonian" or "Hamilton function" for GR as derived in his GR text. In classical physics, this function can (almost always) be interpreted as representing the total energy of a given system. Using this formalism, Wald shows that, for a closed universe, the Hamiltonian is zero. Similar arguments can be applied to the same effect for a flat universe, although for an open universe the formulation for the Hamiltonian ends up ill-defined.
Other efforts to deal with conservation of energy in GR have used so-called "pseudo-tensors". This approach was tried by Einstein, among many others. However, the current view is that proper physical models should be formulated using only tensors (see again Misner, Thorne and Wheeler, chapter 20), so this approach has fallen out of favor.
However, this leaves us with something of a quandary: in the absence of a proper definition of gravitational potential energy, the law of conservation of energy from classical mechanics clearly does not hold in GR. Thus, for any theory based on GR, like BBT, conservation of energy is clearly not something that can be held against it. Hence, the first law of thermodynamics argument becomes moot. For a more detailed discussion along these lines, see this FAQ page on energy conservation in GR.
b) The highly ordered universe today could not have come from an explosion - the second law of thermodynamics
This argument is a variation of the standard creationist canard regarding evolution creating order from disorder, in apparent violation of the second law of thermodynamics. The standard counter-argument, of course, is that this formulation of the second law only applies to isolated systems, unlike the Earth.
If we are talking about the universe, on the other hand, it is not clear that this rejoinder applies. After all, the universe, so far as we know, is the ultimate isolated system, with energy neither entering nor exiting. However, applying this simple form of the second law to the universe has some complications.
The standard misconception of the Big Bang is that of an explosion of matter into already existing space. This is not the case. Rather, BBT holds that spacetime itself expanded. Obviously, any objection premised on the claim that the Big Bang was an explosion in the conventional sense needs to be taken with a grain of salt.
Further, our everyday conceptions of "order" and "disorder" do not really apply to the physical quantity called "entropy". Indeed, as shown by Kolb & Turner, the entropy of the early universe was extremely low. This makes sense if one remembers that, in the very early stages of the universe, the distribution of matter and energy was very, very ordered, as demonstrated by the uniformity of the CMBR. As such, one could characterize the entire distribution of matter and energy in the universe with a single number (the temperature) to a very good approximation. Compare that to the universe we see now, filled with complicated, disorderly distributions of galaxies, stars and gas. The amount of entropy in these objects is enormous (recall our earlier discussion about the lack of coherent orbits for stars in elliptical galaxies and galaxies in galaxy clusters). Hence, the idea that the entropy of the universe has somehow decreased in violation of the second law of thermodynamics is largely nonsensical.
Ironically, however, this facile objection does lead to a much more serious question: given that the entropy of the universe has only increased, how did it come to have such a low entropy when it came into being? At the current time, this is still an open question in cosmology. Obviously, many of the problems we outlined in the previous section regarding time before the Big Bang and the applicability of physical laws at the origin of the universe come into play here, but there is, as yet, no simple answer.
c) Atheistic theory
As with evolution, BBT is often tagged by Young Earth Creationists as yet another theory invented out of thin air by atheists looking to deny that God created the universe and everything in it. Obviously, this is not a scientific argument by any stretch of the imagination, and, like the similar charge leveled at evolution, the claim is false on its face.
BBT is not only accepted by most mainstream Christian (and other religious) denominations, but even by Old Earth Creationists like Hugh Ross. Some Christian philosophers even try to use the BBT as evidence for the existence of a creator - they point out, e.g., that this scientific theory agrees with the Bible on the point that the universe had a beginning, that light came first (although this is a crude misrepresentation of what the BBT actually says), etc. For articles containing discussions of this type of argument, see, e.g., the page Physics and Religion.
Finally, it should be pointed out that Lemaitre, one of the originators of the BBT (the central equations of the BBT are often called the "Friedmann-Lemaitre equations"), was actually a Catholic priest!
d) Stars older than universe?
This is an outdated problem, but it still occasionally appears in some creationist and anti-BBT tracts. We addressed part of this in the dark energy section, but we will reiterate for clarity.
In the mid-1990s, the best estimates of the current Hubble parameter put the value around 80 km/s/Mpc -- not very far off the current best value around 72 km/s/Mpc and well within the margin of error. At the time, the default theoretical model based on the predictions of inflation and the CMBR observations from COBE was a flat, matter-dominated universe. Under this model, the values for the Hubble parameter gave an estimated age of the universe around 10 billion years. At the same time, age estimates for the oldest stars in our galaxy were between 13 and 18 billion years. This conflict was called the "Age Crisis".
Shortly thereafter, two improvements in the data resolved this apparent conundrum. First, the Hipparcos satellite provided better estimates for the distances to the stars used in the age measurements. These new distances were larger than the previous measurements, which, in turn, meant that the stars in question were more luminous than previously believed. Factoring this into the age calculations brought the range of expected ages down by a few billion years. Secondly, the distant supernovae measurements and subsequent CMBR anisotropy measurements demonstrated the need for dark energy in the standard cosmological model. Including this extra term changed the age estimate for the universe, pushing it to the current value of 13.7 billion years. This combination of effects neatly resolved the Age Crisis.
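To see how dark energy stretches the age estimate, consider a minimal sketch. For a flat, matter-dominated universe the age is t = (2/3)/H0, while integrating the Friedmann equation for a flat universe with matter and a cosmological constant gives t = (2 / (3 H0 sqrt(Omega_Lambda))) * arcsinh(sqrt(Omega_Lambda / Omega_m)). The density values below are merely illustrative of the concordance model:

```python
import math

Mpc = 3.086e22   # meters per megaparsec
Gyr = 3.156e16   # seconds per billion years

def age_matter_only(H0_km_s_Mpc):
    """Age of a flat, matter-dominated universe: t = (2/3) / H0."""
    H0 = H0_km_s_Mpc * 1e3 / Mpc          # convert to 1/s
    return (2.0 / 3.0) / H0 / Gyr

def age_lambda_cdm(H0_km_s_Mpc, omega_m=0.27, omega_l=0.73):
    """Age of a flat universe with matter plus a cosmological constant."""
    H0 = H0_km_s_Mpc * 1e3 / Mpc
    t = 2.0 / (3.0 * H0 * math.sqrt(omega_l)) * math.asinh(math.sqrt(omega_l / omega_m))
    return t / Gyr

print(f"matter only, H0 = 80: {age_matter_only(80):5.1f} Gyr")   # too young for the oldest stars
print(f"matter only, H0 = 72: {age_matter_only(72):5.1f} Gyr")
print(f"Lambda CDM,  H0 = 72: {age_lambda_cdm(72):5.1f} Gyr")    # ~13.5 Gyr
```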
h) Arp
Halton Arp is a professional astronomer, formerly associated with Palomar Observatory, who now works at the Max Planck Institute in Germany. Over the course of many years of observations (and a number of published papers), he has come to the conclusion that the redshift measured for many distant objects is not cosmological in nature. This goes beyond the peculiar velocities discussed earlier; in Arp's model, redshifts are intrinsic and in no way related to distance.
The basis for this conclusion is that many pairs of galaxies (or galaxies paired with quasars) seem to indicate some manner of physical association, despite large differences in redshift (and hence distance, if we use standard BBT). For example, the arm of a spiral galaxy may appear to extend towards a nearby quasar or (as this story shows) a quasar may even appear to lie within a galaxy. Arp has published an entire catalog of these discordant redshift associations.
Arp's claims are supported by some other astronomers, most notably Geoffrey and Margaret Burbidge. Most astronomers, however, reject his claims, pointing out that his observations are explainable by chance superpositions of objects on the sky. Calculating the exact probability of a given set of superpositions can be quite difficult, and Arp's supporters and detractors generally disagree on whether Arp's calculations along these lines are valid.
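To see why chance superpositions are taken seriously, here is a toy Poisson estimate. All of the numbers are hypothetical, chosen only for illustration; the real dispute is precisely over which surface densities and association radii are appropriate:

```python
import math

# Hypothetical illustrative numbers -- not taken from Arp's catalog:
galaxies_per_deg2 = 50.0       # surface density of bright galaxies
n_quasars = 1000.0             # number of quasars surveyed
theta_deg = 1.0 / 60.0         # "association" radius of 1 arcminute

# If both populations are sprinkled randomly on the sky, the expected
# number of purely accidental quasar-galaxy pairs within theta is:
chance_pairs = n_quasars * galaxies_per_deg2 * math.pi * theta_deg**2
print(f"expected chance alignments: {chance_pairs:.0f}")   # ~44
```

Even with these modest inputs, dozens of striking-looking alignments are expected from chance alone, which is why the statistical argument, rather than individual images, is the crux of the debate.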
Recently, a study by Scranton et al (2005) may have shed some light on this controversy. Using data from the Sloan Digital Sky Survey, the positions of 200,000 quasars were correlated with the positions of 13 million galaxies. In Arp's model, galaxies and quasars are physically associated with each other and, hence, one would expect that correlating the two populations would look a great deal like correlating the galaxies with themselves. On the other hand, BBT tells us that the quasars are much more distant than the galaxies in this sample, so the cross-correlation due to actual gravitational clustering should be nearly zero. Instead, we should see an induced cross-correlation due to the gravitational lensing of the quasars by the foreground galaxies. This signal is much smaller than the one expected from Arp's model and it changes sign depending on the quasar population. When the SDSS researchers made the measurement, the results matched the expectation from BBT to a high statistical significance. More detail can be found in this article and this discussion.
i) Tifft
Another popular figure among people who dispute the BBT is William Tifft. His claim to fame also concerns redshifts. In contrast to Arp, he did not examine correlations between different objects. Rather, he claimed to have discovered a periodic structure in the redshifts: redshifts cannot take arbitrary values, but are "quantized". Thus, we would only expect to measure redshifts in integer multiples of some fundamental value; see Tifft (1997) for a review. Like Arp's claim, this would cast a great deal of suspicion on the traditional interpretation of redshift. Like Arp, Tifft has his share of supporters, including some creationists. Tifft's claims show up in Barry Setterfield's article on The vacuum, light speed, and the redshift.
Unfortunately for Tifft's claim, the quantization scale for redshifts has continued to shrink as more data has become available. The initial value was 72.46 km/s. Further observations brought this down to 36.2 km/s, 8.05 km/s and finally 2.68 km/s. Scaled against the speed of light, this suggests a quantization in z of roughly 0.00001, which is comparable to (or even below) the precision of many common redshift measurements.
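The conversion from a velocity spacing to a redshift spacing is just dz = dv/c; a quick check of the claimed scales:

```python
c = 299792.458   # speed of light, km/s
for dv in (72.46, 36.2, 8.05, 2.68):
    print(f"quantization of {dv:6.2f} km/s  ->  dz = {dv / c:.1e}")
```

The last value gives a dz of about 9e-6, right at the edge of what typical redshift surveys can resolve.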
The most likely explanation for Tifft's original measurements is the presence of large scale structure. Galaxies are not randomly distributed throughout the universe. Instead, they are grouped into clusters, "walls" and "filaments" thanks to their mutual gravitational attraction. Likewise, this clustering gives rise to large voids between these structures. If one were to look only along a long, narrow beam through this structure (a "pencil-beam" survey -- as was done for much of the early redshift catalogs), one would naturally expect to see some "quantization" as a relic of this gravitational interaction. When astronomers were able to use a much larger, wider sample of galaxy redshifts, like the 2dF galaxy survey, they found no evidence of Tifft's quantization (Hawkins 2002). Some supporters of Tifft objected that the study looked at quasars instead of (nearby) galaxies, but that complaint looks a bit strange - after all, if redshift is quantized, it should be quantized everywhere, not just in our "neighbourhood".
4) Alternative cosmological models
Before delving into the alternatives, it should be stressed that no alternative to BBT has been devised that can explain the full range of observations covered by the current BBT. This is not to say that such a model is impossible, merely that it has yet to be found. In all of the cases discussed below, some subset of the current data is either ignored or deflected in some manner (e.g., the claim that the data is not cosmological, but only due to some as yet undescribed local effect).
Our purpose in this section is not to definitively debunk each of these models (often that is a FAQ in and of itself). Rather we will merely describe each model and associated counter-arguments briefly and provide pointers to more detailed discussions.
a) Steady State and Quasi-Steady State
In 1948, H. Bondi, T. Gold, and F. Hoyle developed the Steady State model as an alternative to the Friedmann-Lemaitre Big Bang model described decades earlier. This model held that the cosmological principle was valid not only for space, but also for time -- the universe has always looked and will always look the same as it does today. This model accepted the notion of an expanding spacetime (indeed, the expansion was exponential), but the matter density was maintained at a constant level by continuous creation of matter. Further, the matter density was equal to the critical value necessary to keep the geometry of space flat.
At the time, this model was a viable alternative to standard BBT. It was consistent with the data available at the time and explained some issues with the standard picture that seemed to be problematic. However, subsequent observations like the light element abundance and the discovery of the CMBR cast serious doubt on the validity of the Steady State model. At this point, most cosmologists abandoned this model in favor of BBT.
Undaunted, Hoyle (along with Burbidge and Narlikar) updated the Steady State model in 1993, naming the extension the Quasi-Steady State model. As with the Steady State model, the universe has always existed. However, in this modification, the universe undergoes pulsations, alternately expanding and contracting. The "bounce" at the end of each collapsing stage is caused by a field with negative energy density, somewhat analogous to dark energy in the standard BBT. This allows the model to incorporate more of the observational evidence than the previous version, but it fails on a number of points, including the accelerated expansion that has since been detected. For more details, see Ned Wright's page on Errors in the Steady State and Quasi-SS Models.
b) MOND
MOND is short for "Modification Of Newtonian Dynamics". The fundamental idea behind MOND stems from the discrepancy between the amount and distribution of matter inferred from the visible light in spiral galaxies and that given by examining the velocities of stars in those galaxies. The standard interpretation of this data is that galaxies contain dark matter. MOND explains this data by changing the way that gravity works. Proposed by Milgrom (Milgrom 1983), MOND essentially says that for very small accelerations (equivalently, at large distances from a galaxy's center), Newton's law of gravity is no longer valid, but has to be modified. Using that approach, it is indeed possible to explain the rotation curves of galaxies quantitatively.
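To illustrate how MOND flattens rotation curves, here is a sketch of the deep-MOND limit, in which the effective acceleration for g_N much less than a0 becomes g = sqrt(g_N * a0), so the circular velocity satisfies v^4 = G*M*a0 and stops falling with radius. The galaxy mass below is a made-up round number for a typical spiral:

```python
import math

G  = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10      # Milgrom's acceleration scale, m/s^2
M  = 1e41         # visible mass of a hypothetical spiral, kg (~5e10 solar masses)
kpc = 3.086e19    # one kiloparsec in meters

def v_newton(r):
    """Circular velocity from Newtonian gravity: falls off as 1/sqrt(r)."""
    return math.sqrt(G * M / r)

def v_mond_deep(r):
    """Deep-MOND limit: v^4 = G*M*a0, independent of radius r."""
    return (G * M * a0) ** 0.25

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * kpc
    print(f"r = {r_kpc:2d} kpc:  Newton {v_newton(r)/1e3:5.1f} km/s,  "
          f"deep MOND {v_mond_deep(r)/1e3:5.1f} km/s")
```

The Newtonian curve keeps dropping while the deep-MOND velocity stays flat at roughly 170 km/s for this mass -- the qualitative behavior seen in observed rotation curves (which dark matter halos reproduce equally well).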
While MOND explains the rotation curves of spiral galaxies, these are far from the only evidence for dark matter. As we detailed earlier, there is a wide variety of evidence for dark matter, not all of it related to dynamical measures like rotation curves (e.g., gravitational lensing, where the deflection of light in GR is twice that predicted by Newtonian dynamics and MOND). It is not at all clear whether MOND can also explain all of these observations consistently (see Aguirre 2001 or Sanders 2003). More specifically, observations from 2002 with the Chandra X-ray Observatory found more direct evidence against MOND. The hot X-ray emitting gas around the galaxy NGC 720 forms an ellipsoidal cloud. This, in turn, requires an ellipsoidal gravitational potential well. While this is possible with dark matter, the uniform acceleration scale from MOND immediately leads to a spherical gravitational potential well.
From a theoretical standpoint, there are several problems as well. Fundamentally, MOND is an ad hoc modification of an otherwise well-supported theory of nature. This sort of approach may be appropriate as the phenomenological framework for a particular experiment or observation, but, in the general case, there is no reason for it to work. A case in point: while this formulation works very well for spiral galaxies, where there is coherent motion of small-mass objects around a large, centrally concentrated mass, it fails utterly for cases where the mass scales are more equal, e.g., the galaxies in a cluster. Are the distances between galaxies large or small relative to the MOND scale? From what point do we define distance and acceleration? With this sort of handicap, making MOND into a fully relativistic theory has proven extremely difficult.
Recently, Bekenstein proposed a new theory (essentially an extension of GR) which could overcome several of the problems of Milgrom's model for MOND (Bekenstein 2004), but it is not yet clear how well it can address the available evidence. Work is still underway, but there is a short discussion in the article Was Einstein wrong? at the Preposterous Universe blog. Until the model proposed by Bekenstein is analyzed more thoroughly, MOND remains primarily a theoretical exercise in how far one can push alternatives to GR while still remaining consistent with the global data set (both cosmological evidence as well as GR tests within our solar system and galaxy).
c) Tired light
This is not a single model, but a collective term for various ideas attempting to explain the observed redshift via mechanisms other than cosmological expansion. There are a variety of approaches, but all of them seek to show that light naturally loses energy in one way or another as it travels over long distances.
In contrast to MOND, which generates the occasional journal paper, tired light models have essentially no support among professional cosmologists. This is mainly because we have a great deal of evidence showing that the universe does indeed expand, following the predictions of GR, making an alternative explanation of the cosmological redshift largely irrelevant. Peebles addresses many of the observations contradicting tired light in his cosmology text, including, for example, the Tolman test. Evidence collected since that text was written, like the time dilation of supernovae light curves and the change of the CMBR temperature with redshift, only adds to the case against tired light. Ned Wright also has a specific FAQ refuting tired light.
d) Plasma cosmology
In 1991, Eric Lerner published the book The Big Bang Never Happened, in which he asserted several problems with the standard BBT and promoted an alternative theory, based on plasma physics. According to that model, the universe is infinitely old and undergoes cycles (similar to the quasi-steady-state theory of Hoyle et al., see 4a). In plasma cosmology, electromagnetism is the dominant force for galaxy and large scale structure formation, rather than gravity.
Lerner's arguments against the BBT fall well short of convincing. He claims that current ideas of structure formation (small density fluctuations grow through gravitational interactions with their local environment) cannot explain the observed large-scale structure of the universe. This conveniently ignores computer simulations which demonstrate the ability of this mechanism to generate structure that statistically matches observations (see 2f). He also denies the existence of dark matter and dark energy, parroting the line that these are epicycles added onto the theory when it failed to meet expectations. This is clearly not true for either dark matter or dark energy. He also asserts that the values for the amount of dark matter are in constant flux. This ignores the fact that early measurements had relatively large error bars due to small sample sizes. As the available data has increased and precision improved, the values from a variety of methods have converged nicely. Lerner also points out that dark matter has yet to be directly detected on Earth, again ignoring the fact that particle accelerators and other direct detection methods have not yet achieved the energies or sensitivities thought necessary for a positive detection. The list goes on and on.
At the same time, Lerner's claims about the ability of the plasma cosmology model to describe the observations correctly are simply wrong. Ned Wright has written a rebuttal of many of Lerner's arguments, which can be found on the page Errors in the "The Big Bang Never Happened". It is not always directly on point, but contains enough information to make it clear that Lerner's arguments are simply unfounded. Lerner has replied to Wright's critique, but his arguments did not improve -- and he simply ignores several of Wright's arguments.
Like many creationists, Lerner also has a bad habit of citing scientific articles in support of his case when, in fact, they actually run contrary to his claims. A nice example of this is the article by Scranton et al (2003), which found evidence for the existence of dark energy by measuring the integrated Sachs-Wolfe effect. Lerner ignores the conclusions of the paper, claiming that it demonstrates a disagreement between BBT and observations. The ISW measurements are indeed inconsistent with a flat, matter-only universe, but match up very well with what would be expected from the Lambda CDM universe. Indeed, this measurement was an important check differentiating between the two models. Lerner plays a similar game with predictions regarding the sizes of voids in the local large scale structure. These are predicted to be much larger for a Lambda CDM universe than a matter-only universe, and Lerner points to the latter as being in conflict with the data while ignoring that the former matches quite well.
e) Humphreys
There are also creationists who have tried to replace the BBT with their own models, the most prominent among them being Dr. Russell Humphreys. His model, contained in his book Starlight and Time, is endorsed by several creationist organizations, including the Institute for Creation Research and Answers in Genesis - this despite the fact that it is widely disputed even among creationists, and that the editors of the proceedings of the 1994 International Conference on Creationism concluded that it is apparently faulty (see the article The current state of creation astronomy published by the ICR, towards the end).
The core of Humphreys' model is the abandonment of the cosmological principle. Instead, he proposes that the universe is not uniform, but rather has the shape of a sphere with a finite radius. In the model, the universe originated from a single point in the center of the sphere. In effect, Humphreys' model brings to life many of the common misconceptions about what BBT actually says about the origin of the universe.
Humphreys tries to apply GR to the resulting matter distribution, claiming that gravitational time dilation will cause time to pass faster the more distant one is from the center. If one postulates that the Earth is very near the center of the universe, Humphreys claims that this resolves a central problem for Young Earth Creationists: how to reconcile the evidence for an ancient universe with their demand that the Earth was created within the last 10,000 years (give or take).
Humphreys models the center of the universe as a "white hole", the opposite of a black hole (instead of matter only flowing inwards, a white hole constantly emits matter and energy). He fails to explain why that white hole does not appear to exist anymore (we would notice the extremely strong X-ray flux, if nothing else), but that is far from the only problem with the model. In particular, Humphreys badly mangles the standard GR treatment for gravitational time dilation: in order for time to pass more rapidly far away from the Earth, we would need to be near a black hole, not a white hole. Humphreys tried to salvage his model by later claiming a time dilation within the white hole, but this was equally unworkable. It goes without saying that his model fails to explain a vast array of cosmological observations, e.g., the existence of the CMBR and its anisotropy, supernovae time dilation, the light element abundance and so forth.
As we mentioned earlier, Humphreys' model has been the subject of a great deal of criticism within creationist ranks, particularly from the Old Earth Creationist organization Reasons to Believe; see their article The Unraveling of Starlight and Time. Other valuable arguments from the creationist side of the fence can be found in the articles Starlight and Time is the Big Bang and Errors in Humphreys' cosmological model. More criticisms of Humphreys' model, and his replies thereto, are collected at Russell Humphreys answers Various Critics.
f) Gentry
Another creationist who tried to develop an alternative model is Dr. R. Gentry (otherwise well-known for his "polonium halo" arguments for a recent creation). His model and attempts to tie it to then-current observations are mostly contained in two articles: A New Redshift Interpretation and The Genuine Cosmic Rosetta.
As with Humphreys' model, Gentry postulates a spherical universe with the Earth near the center. According to Gentry, the universe does not expand; instead, dark energy within the universe leads to a real, physical movement of galaxies away from the center of the universe (and hence a roughly Hubble-like distance-redshift relation). To generate the CMBR, he surrounds the universe with a thin, opaque shell of hydrogen. Like Humphreys' model, Gentry's is internally inconsistent as well as being in serious conflict with GR and a great deal of the observational evidence. A more detailed rebuttal of Gentry's model can be found in Carlip & Scranton (1998) and the talk.origins FAQ Debunking Robert Gentry's New Redshift Interpretation Cosmology. Although these articles are now a few years old, more recent observations have done nothing to improve the match to Gentry's model.
5) Open Questions
a) The origin of the universe
As should be clear by now, BBT is not about the origin of the universe, but rather its development with time. Creationists often portray this as a weakness, arguing that, if science cannot explain the origin, it follows that the universe was created! This bit of faulty logic aside, the ultimate origin of the universe remains a topic of on-going theoretical research, both from the standpoint of finding models which can explain the current evidence as well as generating unique predictions from these models for future observations. At the moment, most of these models remain highly speculative, but it is worthwhile to present some of the current possibilities, particularly in light of the name of the archive.
- One of the more popular options is "chaotic inflation", first described by Linde in 1983. This is similar to the inflationary scenario proposed by Guth (see below) but, instead of happening within our universe, inflation continues indefinitely, with small universes "budding off" from the main inflating region. These "bubble" universes are separate spacetimes unto themselves, causally disconnected from each other due to the vast distances made possible by the exponentially expanding inflation region. For more detail, see An Inflating Bubble in Chaotic Inflation or the relevant section in Inflation for Beginners. This model resolves the "What was before the Big Bang?" question in a similar manner as the infinite regression problem: our universe was created by an inflation event in some other universe at some arbitrary time in the infinite past. Whether or not this is satisfying is left to the reader to decide.
- String theory also proposes some solutions. One of the ideas currently under consideration is the possibility that, before the Big Bang, there existed something like a "mirror image" of our universe. Our universe started out (almost) infinitely dense and hot and will (probably) go on infinitely into the future, thinning out and cooling more and more. It was preceded by a universe which was essentially the same, but with the time coordinate reversed. The "Big Crunch" at the end of that universe then spawned our current universe.
- Another string-theory-inspired model is the "ekpyrotic" or "cyclic universe" model of P. Steinhardt, developed in 2002. It proposes that space and time existed forever in the past and that the universe undergoes an endless sequence of cycles: it periodically undergoes a Big Crunch, followed by a Big Bang, with trillions of years in between each Bang and Crunch. At no time in the cycle do the density or temperature ever become infinite. These oscillations happen because our universe is actually embedded in a higher dimensional space along with other such universes. These universes expand independently according to GR, only to undergo a Crunch as they approach each other in this higher dimensional space. When these universes collide and separate, they undergo a Bang and repeat the process.
As with any new idea, Steinhardt's model has undergone a great deal of criticism from other cosmologists. He has replied to many of these critiques, pointing out that many of the alleged problems were based on misunderstandings, but also admitting that there are still many unresolved issues left to explain.
The second and third ideas are explained in more detail in the Scientific American article The Myth of the Beginning of Time. This article also takes care to point out that, although these ideas are mainly speculative at the moment, there are nevertheless ways to actually test them. In particular, these three models (chaotic inflation, mirror universe, and cyclic universe) each make unique predictions for the statistical properties of the CMBR. As good as it is, the available data from the WMAP probe is not yet sufficient to determine which of these three models is right (or if something else will be needed). However, the successors to WMAP (both ground-based missions and the Planck satellite) are already being built, so the next few years may shed more light on this question.
b) Flatness and horizon
It was already known in the 1970s that the density of our universe was relatively close to the critical density (current measurements put the matter density around 30% of the critical value). At first sight, this may not appear to be a problem -- the matter density has to be something, so any value would seem equally valid. However, just as the matter density itself is not static, the value of the critical density changes over time as well. In particular, if the ratio of the matter density to the critical density is less than unity (an open universe), then as time passes that ratio will become smaller and smaller. The opposite holds true for a closed universe; instead of asymptoting to zero, the ratio diverges. Hence, if the density of the universe is currently close to the critical value, then it must have been even closer to it earlier in the universe's history; the further back in time one looks, the closer to unity the ratio must have been.
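A minimal sketch of this runaway behavior, using the standard scalings |Omega - 1| proportional to a^2 during radiation domination and to a during matter domination (the epoch of matter-radiation equality below is a rough, assumed value):

```python
# Run the deviation of Omega from unity backwards in time.
a_eq = 3e-4            # rough scale factor at matter-radiation equality (assumed)
dev_today = 0.3        # illustrative |Omega - 1| today

def deviation(a):
    """|Omega - 1| at scale factor a (a = 1 today)."""
    if a >= a_eq:                              # matter-dominated era
        return dev_today * a
    # radiation-dominated era: match the matter-era value at a_eq
    return dev_today * a_eq * (a / a_eq) ** 2

for a in (1.0, 1e-4, 1e-10, 1e-20):
    print(f"a = {a:6.0e}:  |Omega - 1| ~ {deviation(a):.1e}")
```

Pushing back to the earliest times, the deviation must have been fantastically small (of order 1e-37 at a = 1e-20 in this toy calculation) for the universe to be anywhere near the critical density today.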
While this is not an explicit conflict between data and theory per se (there is nothing in BBT that dictates the initial value of the matter density), the coincidence did look like a marvelous "fine-tuning" to many cosmologists. Since the ratio of the actual density to the critical density determines the curvature of the universe, and an agreement between both values means that the universe is flat, this became known as the "flatness problem" in cosmology.
At the same time, measurements of the CMBR were showing that the temperature on the sky was very uniform. While cosmologists expected the temperature to be roughly the same everywhere, the degree of uniformity was troubling. In order for two points in the universe to have the same temperature, they need to be able to exchange photons. Since photons travel at a finite speed and the universe has a finite age, this puts an upper limit (the horizon) on the size of a region that can be expected to be at the same temperature at any given point in the history of the universe. Since they knew the redshift of the CMBR (and hence the time when it was emitted), cosmologists did not expect regions larger than about 300,000 light years to have the same temperature. Instead, they saw regions on the sky that would have been many millions of light years apart when the CMBR formed, yet were apparently in thermal equilibrium. This became known as the "horizon problem". As with the flatness problem, this is not a theory-breaking problem, but rather an apparent coincidence that seemed suspicious.
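A back-of-envelope version of the problem, assuming a matter-dominated universe throughout (good enough for an order-of-magnitude estimate): the comoving particle horizon grows as sqrt(a), so the angle on today's sky subtended by a causally connected patch at recombination is roughly sqrt(a_rec) / (1 - sqrt(a_rec)):

```python
import math

a_rec = 1.0 / 1100.0   # scale factor at recombination (z ~ 1100)

# Comoving particle horizon in matter domination: d_H(a) = (2c/H0) * sqrt(a).
# Angle subtended today by a horizon-sized patch on the last-scattering surface:
theta = math.sqrt(a_rec) / (1.0 - math.sqrt(a_rec))
print(f"causally connected patch: ~{math.degrees(theta):.1f} degrees")
```

Anything separated by more than a couple of degrees on the CMBR sky should never have been in causal contact, yet the whole sky shares a single temperature to one part in 100,000.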
In 1981, Guth proposed a possible solution to both of these problems (and others) -- a hypothesis which is nowadays known as inflation (Guth 1981). The basic idea is that, at a very early time in the universe's history, the expansion of the universe was not slowing down, but instead accelerating at an exponential rate (recall that the expansion of the universe is not limited by the speed of light, while objects moving within the universe are). With this sort of expansion driving it, regions which were originally much smaller than the horizon would be quickly stretched to enormous size, while maintaining their initial thermal equilibrium. In addition, this rapid expansion would exponentially drive the curvature of the inflating region to zero, resulting in a density extremely close to the critical value no matter what its initial value.
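The flattening effect of inflation is equally easy to sketch: during inflation the scale factor grows as e^N while H stays nearly constant, so the curvature term |Omega - 1| (proportional to 1/(aH)^2) is suppressed by e^(-2N). The commonly quoted benchmark of roughly 60 e-folds is enough to flatten any plausible initial curvature:

```python
import math

def flatten(dev_initial, n_efolds):
    """|Omega - 1| after n_efolds of exponential expansion: suppressed by e^(-2N)."""
    return dev_initial * math.exp(-2.0 * n_efolds)

for N in (10, 30, 60):
    print(f"{N:2d} e-folds:  |Omega - 1| = 1  ->  {flatten(1.0, N):.1e}")
```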
For more details on this, see the Wikipedia article on Cosmic Inflation. We should reiterate here the fact that, while it was proposed as a solution to the flatness and horizon "problems", inflation makes a number of other quantitative predictions (in particular the shape of the CMBR anisotropy power spectrum) which have been borne out by observations.
c) Matter-antimatter asymmetry
In the very early universe, spacetime was filled with a very hot "soup" consisting of matter particles and interaction particles (quarks, electrons, positrons, photons, gluons, neutrinos, etc.). Matter and antimatter particles were created all the time from photons, gluons, etc., and disappeared again shortly thereafter. Only after the universe cooled down and thinned out was it possible for the matter and antimatter particles to survive and avoid annihilating each other immediately.
Herein lurks a major problem: according to quantum field theory, if all these reactions happened in thermal equilibrium, exactly equal amounts of matter and antimatter particles should have been produced by this process. But when we observe our universe, we find only matter and virtually no antimatter! This problem became known as the "matter-antimatter asymmetry" in cosmology.
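The size of the required asymmetry can be read off from the observed baryon-to-photon ratio (about 6e-10, from nucleosynthesis and the CMBR). In the hot early universe, quarks, antiquarks and photons had comparable number densities, and nearly every quark-antiquark pair annihilated into photons; the baryons we see are the tiny leftover excess:

```python
eta = 6.1e-10   # observed baryon-to-photon ratio, n_baryon / n_photon

# Since the surviving baryons are the unannihilated excess, the early
# universe needed roughly one extra quark per 1/eta quark-antiquark pairs:
print(f"about one extra quark per {1 / eta:.1e} quark-antiquark pairs")
```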
To solve this problem, we need some manner of asymmetry between matter and antimatter. In 1967, Sakharov pointed out that, to generate such an asymmetry, the reactions had to happen (at least partly) out of thermal equilibrium. Additionally, the so-called "CP symmetry" of particle physics had to be violated (i.e., matter and antimatter had to behave slightly differently, contrary to what quantum field theory said at the time) and "baryon number" could not be conserved (Sakharov 1967).
While this outlines what needed to happen, it is not a solution in and of itself. Modern solutions to this problem depend on the so-called "electroweak symmetry breaking": at sufficiently high temperatures like those found in the early universe, the electromagnetic force and the weak force are essentially indistinguishable, uniting into a single force. As the universe cooled (below a temperature of about 10^15 Kelvin and after a time of about 0.1 billionth of a second), these forces separated into the distinct forces we see today. This "phase transition" led, at least locally, to the required thermal non-equilibrium. The violation of the CP symmetry is naturally built into the electroweak theory, and at high temperature the Standard Model of particle physics also allows non-conservation of the baryon number.
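The quoted time and temperature hang together under the standard radiation-era rule of thumb t ~ (1 MeV / k_B T)^2 seconds (which ignores the varying number of relativistic species, so it is good only to an order of magnitude):

```python
k_B = 8.617e-11        # Boltzmann constant in MeV per Kelvin

T_kelvin = 1e15        # electroweak transition temperature from the text
T_MeV = T_kelvin * k_B # ~9e4 MeV, i.e. ~90 GeV
t = (1.0 / T_MeV) ** 2 # radiation-era age in seconds
print(f"T ~ {T_MeV:.0f} MeV  ->  t ~ {t:.1e} s")   # ~1e-10 s: 0.1 billionth of a second
```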
Not all of the details are cleared up yet (e.g., it is not yet clear if such models can explain the observed asymmetry at a quantitative level), but most particle physicists and cosmologists today are confident that this is the right track, and the problem of the matter-antimatter asymmetry will soon be solved satisfactorily.
d) "Small"-scale structure
As we have said before, current cosmological theory seems to work very well on very large scales. This appears to be the case from the largest scales we can examine in the universe all the way down to the size of moderately sized galaxy clusters. However, when we extrapolate the current theory below those scales, it is not clear that it completely matches the observations.
The first important data point in this area came from the Hubble Space Telescope, in the first Hubble Deep Field. This data set was made by focusing Hubble on a relatively sparsely populated patch of sky and looking deeper than astronomers ever had before. Surprisingly, even at very high redshifts corresponding to a very early point in the history of the universe, galaxies were discovered that already looked "mature". That is to say, they appeared nearly as massive as galaxies we see today and had already developed sub-structures like the spiral arms we see in galaxies like Andromeda. This article from ESO provides some detail; more can be found in Cimatti (2004). True, the stars comprising these galaxies appeared very young, in keeping with standard theory, but the presence of galaxies that had already gone through considerable dynamical relaxation was something of a surprise. Of course, the Deep Field observations are not the only deep imaging we have. Newer observations with the Hubble Ultra Deep Field suggest that most of the galaxies in the universe at that time were indeed small and not fully mature. This may mean that what Cimatti and others saw were merely statistical outliers, but more data is needed to know for certain.
In the broader scope, galaxy formation remains a very difficult subject within cosmology. As with large scale structure, the primary theoretical tool for understanding the process is computer simulation. However, to accurately model a galaxy, one needs to understand how to deal with baryons. Unlike dark matter, which only interacts via gravity, baryons give off light, form stars, explode in supernovae and so on. Obviously, a simulation big enough to contain a galaxy is not going to be able to accurately simulate the formation of a single star. At best, a simulation will have a billion particles, meaning that each particle will represent a few thousand stars. This means that processes like stellar formation and evolution have to be handled via some manner of ad hoc prescription, tuned to match observations. It is obviously not the best of situations, but there is a great deal we can learn from the process and progress is being made.
In addition to the intra-galaxy dynamics, there is also the so-called "satellite problem." Large galaxies, like our Milky Way, are typically orbited by smaller irregular galaxies, like the Magellanic Clouds. These sorts of galaxies also show up in computer simulations around larger galaxies. However, instead of the dozen or so we can observe orbiting our galaxy, simulations would predict closer to 50. As this article suggests, there are possible solutions to this problem (mainly understanding whether or not the gas in these smaller galaxies will form stars or remain dark), but the jury is still out.
To summarize: despite the fact that BBT works well on large scales (galaxy clusters, large scale structure, the universe as a whole), some observations at smaller scales remain puzzling. It is not yet clear whether these questions will be cleared up by improving our ability to simulate these scales or if the theory will need revisions (minor or major).
6) Summary and outlook
Summarizing:
- The Big Bang Theory is a very well tested theory. A large quantity of data, coming from wildly different types of observations (see chapter 2), gives an internally consistent picture of the geometry, composition and history of the universe. The primary set of parameters dictating the behavior of the theory has been determined to a precision of 10% or better (which is remarkable given the difficulty of the observations), and all of the current best measurements agree with these parameters within their ranges of error.
- While objections and alternative models exist, they are either easily disproved by the data or unable to explain the full range of the data as well as the standard picture (and sometimes both). This does not stop proponents of these ideas from attacking BBT, but it is more than sufficient to convince the overwhelming majority of the cosmological community of the basic soundness of the model.
- As with any scientific theory, there are still issues that are either partially or completely unsolved. These are points of continuing research, and both further observations and advances in theoretical physics should shed light on them in the coming years.
The last decade has seen the development of the concordance model of cosmology. The next decade will continue this effort, exploring the questions outlined in chapter 5 as well as looking for more information on the nature and behavior of dark matter and dark energy. Work on the theoretical side of things will continue, of course; the following list gives a set of links to upcoming observations and experiments focusing on these topics:
- Infrared telescopes like the James Webb Space Telescope will look further back into time, searching for the first stars and galaxies; they will also help to improve our ideas about galaxy formation.
- The Planck satellite will measure the CMBR in greater detail than the WMAP satellite (testing inflation, string theory etc.).
- Large-scale surveys of galaxy distributions and supernovae will increase our knowledge about the distribution of dark matter and provide clues on the nature of dark energy; e.g. the Large Synoptic Survey Telescope, The Dark Energy Survey, and the Supernova/Acceleration Probe.
- Candidates for dark matter will probably (hopefully?) be found at particle accelerators like RHIC or the LHC.
References
N. Afshordi, Y.-S. Loh, and M. A. Strauss, Cross-correlation of the cosmic microwave background with the 2MASS galaxy survey: Signatures of dark energy, hot gas, and point sources, Phys. Rev. D 69 (2004) 083524 (astro-ph/0308260)
A. Aguirre, J. Schaye, and E. Quataert, Problems for Modified Newtonian Dynamics in Clusters and the Ly-alpha Forest?, Astrophys. J. 561 (2001) 550 (astro-ph/0105184)
R. A. Alpher and R. Herman, Remarks on the Evolution of an Expanding Universe, Phys. Rev. 75 (1949) 1089
H. C. Arp, Catalogue of discordant redshift associations (Apeiron, 2003)
J. D. Bekenstein, Relativistic gravitation theory for the MOND paradigm, Phys. Rev. D70 (2004) 083509 (astro-ph/0403694)
S. van den Bergh, The Early History of Dark Matter, Publ. Astron. Soc. Pac. 111 (1999) 657 (astro-ph/9904251)
H. Bondi and T. Gold, The Steady-State Theory of the Expanding Universe, Mon. Not. Roy. Astron. Soc. 108 (1948) 252
S. Boughn and R. Crittenden, A correlation between the cosmic microwave background and large-scale structure in the Universe, Nature 427 (2004) 45
A. Cimatti et al., Old Galaxies in the Young Universe, Nature 430 (2004) 184 (astro-ph/0407131)
S. Colombi, S. Dodelson, and L. M. Widrow, Large-Scale Structure tests of Warm Dark Matter, Astrophys. J. 458 (1996) 1 (astro-ph/9505029)
W. L. Craig and Q. Smith: Theism, Atheism, and Big Bang Cosmology (Oxford University Press, 1995)
N. Dauphas, The U/Th production ratio and the age of the Milky Way from meteorites and Galactic halo stars, Nature 435 (2005) 1203
S. M. Faber and J. S. Gallagher, Masses and mass-to-light ratios of galaxies, Ann. Rev. Astron. Astrophys. 17 (1979) 135
A. C. Fabian, P. A. Thomas, R. E. White III, and S. M. Fall, A lower limit to the binding mass of early-type galaxies, Mon. Not. Roy. Astron. Soc. 221 (1986) 1049
D. J. Fixsen et al., The Cosmic Microwave Background Spectrum from the Full COBE FIRAS Data Set, Astrophys. J. 473 (1996) 576 (astro-ph/9605054)
H. Fritzsch: The Creation of Matter: The Universe from Beginning to End (Basic Books, 1984)
G. Goldhaber et al., Timescale Stretch Parameterization of Type Ia Supernova B-band Light Curves, Astrophys. J. 558 (2001) 359 (astro-ph/0104382)
A. H. Guth, Inflationary universe: A possible solution to the horizon and flatness problems, Phys. Rev. D 23 (1981) 347
B. Hansen et al., Hubble Space Telescope observations of the white dwarf cooling sequence of M4, Astrophys. J. Suppl. Ser. 155 (2004) 551 (astro-ph/0401443)
E. Hawkins, S. J. Maddox, and M. R. Merrifield, No Periodicities in 2dF Redshift Survey Data, Mon. Not. Roy. Astron. Soc. 336 (2002) L13 (astro-ph/0208117)
A. Heavens, B. Panter, R. Jimenez, and J. Dunlop, The star-formation history of the Universe from the stellar populations of nearby galaxies, Nature 428 (2004) 625 (astro-ph/0403293)
F. Hoyle, A New Model for the Expanding Universe, Mon. Not. Roy. Astron. Soc. 108 (1948) 372
F. Hoyle, G. Burbidge, and J. V. Narlikar, A quasi-steady state cosmological model with creation of matter, Astrophys. J. 410 (1993) 437
G. Imbriani et al., The bottleneck of the CNO burning and the age of the Globular Clusters, Astr. and Astrophysics 420 (2004) 625 (astro-ph/0403071)
Y. Izotov, T. X. Thuan, and V. A. Lipovetsky, The Primordial Helium Abundance: Systematic Effects and a New Determination, Astrophys. J. Suppl. 108 (1997) 1
A. Jenkins et al., The Virgo consortium: simulations of dark matter and galaxy clustering, in: Dark and Visible Matter in Galaxies. ASP Conference Series 11 (1997) 348 (ed. Massimo Persic and Paolo Salucci)
R. Jimenez, J. MacDonald, J. S. Dunlop, P. Padoan, and J. A. Peacock, Synthetic stellar populations: single stellar populations, stellar interior models and primordial protogalaxies, Mon. Not. Roy. Astron. Soc. 349 (2004) 240 (astro-ph/0402271)
R. Kippenhahn: Kosmologie fur die Westentasche (Piper, 2003)
R. P. Kirshner: The Extravagant Universe: Exploding Stars, Dark Energy, and the Accelerating Cosmos (Princeton University Press, 2002)
R. A. Knop et al., New Constraints on Omega_M, Omega_Lambda, and w from an Independent Set of Eleven High-Redshift Supernovae Observed with HST, Astrophys.J. 598 (2003) 102 (astro-ph/0309368)
E. W. Kolb and M. S. Turner: The early universe (Perseus Books Group, 1993)
L. M. Krauss and B. Chaboyer, Age estimates of globular clusters in the Milky Way: constraints on cosmology, Science 299 (2003) 65
B. Leibundgut et al., Time Dilation in the Light Curve of the Distant Type Ia Supernova SN 1995K, Astrophys. J. 466 (1996) L21 (astro-ph/9605134)
E. Lerner, The Big Bang Never Happened (Random House, 1991)
A. D. Linde, Chaotic inflation, Phys. Lett. B 129 (1983) 177
M. Livio: The Accelerating Universe: Infinite Expansion, the Cosmological Constant, and the Beauty of the Cosmos (Wiley, 2000)
L. M. Lubin and A. Sandage, The Tolman Surface Brightness Test for the Reality of the Expansion; part I: Astron. J. 121 (2001) 227 (astro-ph/0102213); part II: Astron. J. 121 (2001) 2289 (astro-ph/0102214); part III: Astron. J. 122 (2001) 1071 (astro-ph/0106563); part IV: Astron. J. 122 (2001) 1084 (astro-ph/0106566)
V. Luridiana, A. Peimbert, M. Peimbert, and M. Cervino, The effect of collisional enhancement of Balmer lines on the determination of the primordial helium abundance, Astrophys. J. 592 (2003) 846 (astro-ph/0304152)
J. C. Mather et al., A preliminary measurement of the cosmic microwave background spectrum by the Cosmic Background Explorer (COBE) satellite, Astrophys. J. 354 (1990) L37
J. C. Mather, D. J. Fixsen, R. A. Shafer, C. Mosier, and D. T. Wilkinson, Calibrator Design for the COBE Far-Infrared Absolute Spectrophotometer, Astrophys. J. 512 (1999) 511 (astro-ph/9810373)
G. J. Mathews, T. Kajino, and T. Shima, Big Bang Nucleosynthesis with a New Neutron Lifetime, Phys. Rev. D71 (2005) 021302 (astro-ph/0408523)
M. Milgrom, A modification of the Newtonian dynamics as a possible alternative to the hidden mass hypothesis, Astrophys. J. 270 (1983) 365; A modification of the Newtonian dynamics - Implications for galaxies, ibid. 371; A Modification of the Newtonian Dynamics - Implications for Galaxy Systems, ibid. 384
C. W. Misner, K. S. Thorne, and J. A. Wheeler: Gravitation (W. H. Freeman, 1973)
L. Nolan, J. S. Dunlop, R. Jimenez, and A. F. Heavens, F stars, metallicity, and the ages of red galaxies at z > 1, Mon. Not. Roy. Astron. Soc. 341 (2003) 464 (astro-ph/0103450)
M. R. Nolta, E. L. Wright et al., First Year Wilkinson Microwave Anisotropy Probe Observations: Dark Energy Induced Correlation with Radio Sources, Astrophys. J. 608 (2004) 10 (astro-ph/0305097)
K. A. Olive and G. Steigman, On the abundance of primordial helium, Astrophys. J. Suppl. 97 (1995) 49 (astro-ph/9405022)
J. H. Oort, Astrophys. J. 91 (1940) 273
L. Pasquini, P. Bonifacio, S. Randich, D. Galli, and R.G. Gratton, Beryllium in turnoff stars of NGC6397: early Galaxy spallation, cosmochronology and cluster formation, Astr. and Astrophysics 426 (2004) 651 (astro-ph/0407524)
J. A. Peacock: Cosmological Physics (Cambridge University Press, 1999)
P. J. E. Peebles: Principles of Physical Cosmology (Princeton University Press, 1993)
S. Perlmutter et al., Measurements of Omega and Lambda from 42 High-Redshift Supernovae, Astrophys. J. 517 (1999) 565 (astro-ph/9812133)
F. Prada et al., Observing the dark matter density profile of isolated galaxies, Astrophys. J. 598 (2003) 260 (astro-ph/0301360)
A. G. Riess, W. Press, and R. Kirshner, A Precise Distance Indicator: Type Ia Supernova Multicolor Light Curve Shapes, Astrophys. J. 473 (1996) 88 (astro-ph/9604143)
A. G. Riess et al., Time Dilation from Spectral Feature Age Measurements of Type Ia Supernovae, Astron. J. 114 (1997) 722 (astro-ph/9707260)
A. G. Riess et al., Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant, Astron. J. 116 (1998) 1009-1038 (astro-ph/9805201)
A. G. Riess et al., Type Ia Supernova Discoveries at z>1 From the Hubble Space Telescope: Evidence for Past Deceleration and Constraints on Dark Energy Evolution, Astrophys. J. 607 (2004) 665 (astro-ph/0402512)
A. D. Sakharov, Pis'ma Zh. Eksp. Teor. Fiz. 5 (1967) 32; English translation: JETP Lett. 5 (1967) 24
R. H. Sanders, Clusters of galaxies with modified Newtonian dynamics, Mon. Not. Roy. Astron. Soc. 342 (2003) 901 (astro-ph/0212293)
R. Scranton et al., Physical Evidence for Dark Energy (astro-ph/0307335)
R. Scranton et al., Detection of Cosmic Magnification with the Sloan Digital Sky Survey, to be published in the Astrophys. J. (astro-ph/0504510)
R. Srianand, P. Petitjean, and C. Ledoux, The cosmic microwave background radiation temperature at a redshift of 2.34, Nature 408 (2000) 931 (astro-ph/0012222)
W. G. Tifft, Redshift Quantization in the Cosmic Background Rest Frame, J. Astrophys. Astr. 18 (1997) 415
R. C. Tolman, Proc. N. A. S. 16 (1930) 511
R. C. Tolman, Relativity, Thermodynamics, and Cosmology (Clarendon Press, 1934)
J. L. Tonry et al., Cosmological Results from High-z Supernovae, Astrophys. J. 594 (2003) 1 (astro-ph/0305008)
E. P. Tryon, Is the Universe a Vacuum Fluctuation?, Nature 246 (1973) 396
S. Turck-Chieze et al., Surprising Sun, Phys. Rev. Lett. 93 (2004) 211102 (astro-ph/0407176)
R. M. Wald: General Relativity (University Of Chicago Press, 1984)
L. Wang, G. Goldhaber, G. Aldering, and S. Perlmutter, Multicolor Light Curves of Type Ia Supernovae on the Color-Magnitude Diagram: a Novel Step Toward More Precise Distance and Extinction Estimates, Astrophys. J. 590 (2003) 944-970 (astro-ph/0302341)
S. Weinberg: The First Three Minutes: A Modern View of the Origin of the Universe (2nd edition, Basic Books, 1993)
S. D. M. White, Large-Scale Structure at High Redshift, in The early Universe with the VLT (ed. J. Bergeron), Springer (1997) 219
E. L. Wright et al., Interpretation of the cosmic microwave background radiation anisotropy detected by the COBE Differential Microwave Radiometer, Astrophys. J. 396 (1992) L13
F. Zwicky, Helv. Phys. Acta 6 (1933) 110
by Björn Feuerbacher and Ryan Scranton