Collapsing the Wave: Quantum Mechanics

Thursday, September 9, 2010



Why Quantum Mechanics is Quantum

One defining property of quantum mechanics, if not the defining property, is that stuff in it is quantized. This means that it comes in discrete, countable chunks, like 1, 2 and 3. This chunkiness arises, weirdly enough, from the fact that particles behave like waves. In this post I will show you how this happens in a pretty rigorous way, and I will even give you a taste of what the real mathematics looks like.

You can imagine a quantum particle as a wave. In fact (and not going into the intricacies of field theory) the equation governing the behavior of quantum particles is a wave equation: the particle-like behavior comes later, when we make a measurement. So if you imagine a quantum particle as a wave you’re being as faithful to the mathematics as you can be.

Before we delve into the bizarreness of quantum waves, we can take a look at normal waves. In each wave there is something that propagates. For example, for the waves in a pond what propagates is a certain height of the water. In the case of sound waves, what propagates is a difference in pressure.

[Figure: wave properties (amplitude, wavelength, frequency)]

For each wave in the world, we have three essential properties: amplitude, wavelength and frequency. The wavelength is just the distance between two consecutive crests. The frequency is how many times per second these crests go through a certain region. See the picture for more details. The amplitude of a wave in water is just its height; in the case of sound, it is a pressure difference. What’s important about amplitude is that its square is equal to the wave’s intensity, which in sound waves means how loud they are.

Quantum waves are just like any other wave: they have a wavelength, a frequency and an amplitude. The difference is that what propagates is a little abstract. The amplitude of a quantum wave is just a (complex) number that we simply call “amplitude.” However, when we square it (more precisely, when we take its squared magnitude) we get the probability that the particle is at that point. That is, what propagates in a quantum wave is an amplitude, of which the probability is the square.

Let’s look at an example: imagine the picture above is a quantum wave. Since both the parts with negative and positive amplitudes have positive squares, the probability of finding the particle at those peaks and valleys is high. On the other hand, the probability of finding the particle at the middle points (halfway between a peak and a valley) is zero.

The frequency of a wave (quantum or not) is related to its energy: the higher the frequency, the higher the energy. This makes sense: it’s not the same to be hit by one sea wave each minute as by a thousand in a second. In quantum mechanics, this also holds. In fact, the energy is directly proportional to the frequency:

E = h × f (where h is the Planck constant)
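
To get a feel for the sizes involved, here is a minimal sketch in Python that simply evaluates E = h × f; the two frequencies used (1 Hz, and roughly 5 × 10^14 Hz for visible light) are illustrative choices, not taken from the post.

# Planck's relation E = h * f: energy is proportional to frequency.
PLANCK_CONSTANT = 6.626e-34  # h, in joule-seconds

def quantum_energy(frequency_hz):
    """Return the energy (in joules) of one quantum of the given frequency."""
    return PLANCK_CONSTANT * frequency_hz

# Illustrative frequencies (not from the post): a slow 1 Hz oscillation
# and visible light at roughly 5e14 Hz.
for f in (1.0, 5e14):
    print(f"f = {f:.3g} Hz  ->  E = {quantum_energy(f):.3g} J")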

Now let’s see how the “quantized” behavior arises from the wave-like properties of matter.

Imagine I have a certain particle in a box. Because 3D boxes are too complicated, I will imagine it’s a 1D box. Since it’s not a particle but a wave, it should look something like this:

[Figure: Particle in a box wavefunctions (photo credit: Wikipedia)]

Now notice the following: since the height of the line represents the probability of finding the particle there, the height has to be zero at the walls. That’s because there’s a wall, so the particle can’t be there. This severely restricts our choices of wave: we can only have waves whose value is zero at both ends of the box. Hence the different shapes depicted above. Those are all our choices!

Now, as you can see, the higher we go, the higher the frequency and the shorter the wavelength. This means that, the higher we go, the higher the energy! Also, those are the only energies allowed: we can’t have other energies, because they would require different wave shapes, and we can’t have that, because the wave has to be zero at the ends.

Ta-dah! And that’s why we say energy is quantized. Almost exactly the same thing happens to electrons in atoms, for example.

Now we can use the number of peaks and valleys as a label for our energies. For example, the lowest energy has only one peak, so we call it E1; the second one has both a peak and a valley, so we call it E2. And so on. We could go on forever! But the particle has to have an energy that is one of E1, E2, E3 and so on: in general En, where n is any positive integer.
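
If you want to see these allowed energies as actual numbers, here is a minimal sketch. It assumes the standard textbook result for a non-relativistic particle of mass m in a box of length L, E_n = n^2 h^2 / (8 m L^2), which follows from the allowed wavelengths λ_n = 2L/n together with the de Broglie relation p = h/λ (neither is derived in this post); the electron mass and the 1-nanometre box are illustrative choices.

# Allowed energies of a particle in a 1D box: E_n = n^2 h^2 / (8 m L^2).
# Assumed standard result; the mass and box length below are illustrative.
PLANCK_CONSTANT = 6.626e-34   # h, in joule-seconds
ELECTRON_MASS = 9.109e-31     # m, in kilograms
BOX_LENGTH = 1e-9             # L, one nanometre

def box_energy(n, m=ELECTRON_MASS, L=BOX_LENGTH):
    """Energy of the n-th standing wave (n = 1, 2, 3, ...), in joules."""
    return n**2 * PLANCK_CONSTANT**2 / (8 * m * L**2)

# Only these discrete values are allowed -- that is the quantization.
for n in range(1, 5):
    print(f"E_{n} = {box_energy(n):.3e} J")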

Now here comes the math part, so feel free to tune out. If you don’t, though, you’ll get the enormous reward of being able to understand the mathematical notation of quantum mechanics. We simply use this:

|\phi\rangle = |E_{1}\rangle

All this means that the particle has energy E1. That is: “the particle’s state (the Greek letter phi) is such that its energy is number 1.”

Now, it turns out that a particle doesn’t have to have an energy of E1 or E2. Since this is quantum mechanics, a particle can have many energies at once. When we perform a measurement, though, we will only obtain one of the values. For example, this equation:

|\phi\rangle = A_{1}|E_{1}\rangle + A_{2}|E_{2}\rangle + A_{5}|E_{5}\rangle

means that the particle has energies 1, 2 and 5 simultaneously. The numbers A1, A2 and A5 in front are the amplitudes: they tell us how likely it is that we will get energy 1, 2 or 5. The probability of finding the particle in state n is equal to An squared. Therefore, the sum of all the As squared must equal one (in physics we express probabilities as fractions of one, not as percentages). Now, you will remember that we can have any energy En for any n up to infinity. We can, then, picture each state as a vector. For example, the one with energy 1 will be:

(1, 0, 0, 0, 0, …) (and so on to infinity.) (Notice how only the first slot is filled)

The one with energies 1, 2 and 5 will be:

(1, 1, 0, 0, 1, 0, …) (and so on to infinity.) (Notice how only the slots 1, 2 and 5 are filled, one for each energy)

Hence what I said in this post: in quantum mechanics, each state is an infinite-dimensional vector. One dimension for each possible value of the energy.
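
Here is a minimal sketch of the same idea in code, keeping only the first six slots of the (in principle infinite) vector. Unlike the (1, 1, 0, 0, 1, 0, …) vector above, the amplitudes here are normalized so that their squares sum to one; the equal values 1/√3 are an illustrative choice.

import numpy as np

# A state with equal (normalized) amplitudes for energies E1, E2 and E5.
# Only the first 6 slots of the infinite vector are kept; values are illustrative.
amplitudes = np.zeros(6, dtype=complex)
amplitudes[[0, 1, 4]] = 1 / np.sqrt(3)   # the slots for E1, E2 and E5

probabilities = np.abs(amplitudes) ** 2  # probability of measuring each energy
print("P(E_n):", np.round(probabilities, 3))
print("total probability:", round(probabilities.sum(), 10))  # should be 1.0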

Again, I’ll leave it here, before things become too long or too frustrating. Feel free to ask any questions in the comments section. It feels lonely if you don’t.

PS Note for physicists: yes, I omitted a lot of stuff. Yes, I didn’t explain eigenvectors, eigenvalues and how you could express things using different bases. Yes, I didn’t normalize the vectors when I put them in parentheses form and I forgot the different amplitudes. Yes, I was not thorough. But I think this time the lack of thoroughness was thoroughly justified: if I had been more precise I’d just have lost most people halfway through. And the basic idea is still there.



Quantum Mechanics for Beginners

- An Introduction -






How the Princess began to Feel the Pea.

Science is exciting because it is always in trouble. No matter how excellent a theory is, it always misses some point or other. Even our most precious ideas about the universe are not able to explain everything; there's always a blind spot. And when the hopeful folks zoom in on that blind spot it pretty much always turns out to be a lot larger than anybody thought, and all of us a mere bunch of naive beginners.

At the end of the nineteenth century the blind spot of regular mechanics (=the library of dogmas that teach the ins and outs of objects moving and colliding) covered the behavior of very small objects, such as electrons, and the behavior of light when it hit small things like electrons.

Light had been a mystery for centuries. Some experiments proved beyond the shadow of a doubt that light was waves. Some other experiments proved beyond the shadow of a doubt that light was particles. The truth about light was obviously hidden and it wasn't until 1900 that people began to understand that there was something very weird about the world of the small. Something that required a complete revision of understanding.

It was decided that the world of the very small was governed by rules that were different from the rules that governed the world we can see, and regular (or classical) mechanics begat Quantum Mechanics. And that unanticipated breach in mechanics spawned this very important rule:

Hold that thought (1):

Individual quantum particles are subjected to a completely different law than the law to which large objects made from quantum particles are subjected.

The introduction of the quantum

The Quantum Mechanical era commenced in 1900 when Max Planck postulated that everything is made up of little bits he called quanta (one quantum; two quanta). Matter had its quanta, but so did the forces that keep material objects together. Forces could only change in little steps at a time; there was no longer such a thing as the infinitely small.

Albert Einstein took matters further when he successfully described how light interacts with electrons, but it wasn't until the 1920s that things began to fall together and some fundamental rules about the world of the small were wrought almost by pure thought. The men who mined these rules were the arch beginners of Quantum Mechanics, the Breakfast Club of the modern era. Names like Pauli, Heisenberg, Schrödinger, Born, Rutherford and Bohr still put butterflies in the bellies of those of us who know what incredible work these boys - as most of them were in their twenties; they were rebels, most of them not even taken seriously - achieved. They were Europeans, struck by the depression, huddled together in tiny attics, peeking into a strange new world as once the twelve spies checked out the Promised Land. Let all due kudos abound.

Believing the unbelievable

One of the toughest obstacles the early Quantum Mechanics explorers had to overcome was their own belief in determinism. Because the world of the small is so different, people had to virtually reformat the system of logic that had brought them thus far. In order to understand nature they had to let go of their intuition and embrace a completely new way of thinking. The things they discovered were fundamental rules that just were and couldn't really be explained in terms of the large-scale world. Just like water is wet and fire is hot, quantum particles display behavior that is inherent to them alone and can't be compared with any material object we can observe with the naked eye.

One of those fundamental rules is that everything is made up of little bits. Material objects are made up of particles, but so are the forces that keep those objects together. Light, for instance, is, besides the bright stuff that makes things visible, also a force (the so-called electromagnetic force) that keeps electrons tied to the nuclei of atoms, and atoms tied together to make molecules and finally objects. In the Scriptures Jesus is often referred to as light, and most exegetes focus on the metaphorical value of these statements. But as we realize that all forms of matter are in fact 'solidified' light (energy, as in E=mc2) and the electromagnetic force holds all atoms together, the literal value of Paul's statement "and He is before all things, and in Him all things hold together (Col 1:17)" becomes quite compelling.

Particles are either so-called real particles, also known as fermions, or they are force particles, also known as bosons.

Quarks, which are fermions, are bound together by gluons, which are bosons. Quarks and gluons form nucleons, and nucleons bound together by gluons form the nuclei of atoms.

The electron, which is a fermion, is bound to the nucleus by photons, which are bosons. The whole shebang together forms atoms. Atoms form molecules. Molecules form objects.

Everything that we can see, from the most distant stars to the girl next door, this computer you are staring at, and you yourself, is made up of a mere 3 fermions and 9 bosons. The 3 fermions are the Up-quark, the Down-quark and the electron. The 9 bosons are 8 gluons and 1 photon.

Like so:

[Diagram: quanta → atoms → molecules → objects]

But the 3 fermions that make up our entire universe are not all there is. These 3 are the survivors of a large family of elementary particles, and this family is now known as the Standard Model. What happened to the rest? Will they ever be revived?
We will learn more about the Standard Model a little further on. First we will take a look at what quantum particles are and at the weird world in which they live.

(If you plan to research these matters more we have written out the most common quantum phrases in a table for your convenience. Have a quick look at it so that you know where to find it in case you decide you need it).


Summary 1: Quantum Mechanics for Beginners; an Introduction.

  • Everything is made up from little chunks called quanta.
  • Small particles are completely different animals than large objects.
  • The visible universe is made up of 3 fermions and 9 bosons (not counting gravity)
  • The matter that makes up the visible universe is part of a larger family of particles called the Standard Model.






[Photo: Max Planck; the introduction of the quantum.]
[Photo: Albert Einstein; King of Beginners.]

Wave function

From Wikipedia, the free encyclopedia
Not to be confused with Wave equation.
Some trajectories of a harmonic oscillator (a ball attached to a spring) in classical mechanics (A–B) and quantum mechanics (C–H). In quantum mechanics (C–H), the ball has a wave function, which is shown with real part in blue and imaginary part in red. The trajectories C,D,E,F, (but not G or H) are examples of standing waves, (or "stationary states"). Each standing-wave frequency is proportional to a possible energy level of the oscillator. This "energy quantization" does not occur in classical physics, where the oscillator can have any energy.

A wave function or wavefunction in quantum mechanics describes the quantum state of a particle and how it behaves. Typically, its values are complex numbers and, for a single particle, it is a function of space and time. The laws of quantum mechanics (the Schrödinger equation) describe how the wave function evolves over time. The wave function behaves qualitatively like other waves, like water waves or waves on a string, because the Schrödinger equation is mathematically a type of wave equation. This explains the name "wave function", and gives rise to wave–particle duality.

The most common symbols for a wave function are ψ or Ψ (lower-case and capital psi).

Although ψ is a complex number, |ψ|2 is real, and corresponds to the probability density of finding a particle in a given place at a given time, if the particle's position is measured.

The SI units for ψ depend on the system. For one particle in three dimensions, its units are m^(−3/2). These unusual units are required so that an integral of |ψ|2 over a region of three-dimensional space is a unitless probability (i.e., the probability that the particle is in that region). For different numbers of particles and/or dimensions, the units may be different, determined by dimensional analysis.[1]

The wave function is central to quantum mechanics, because it is a fundamental postulate of quantum mechanics. It is the source of the mysterious consequences and philosophical difficulties in the interpretations of quantum mechanics—topics that continue to be debated even today.


Historical background


In the 1920s and 1930s, there were two camps (so to speak) of theoretical physicists who simultaneously founded quantum mechanics: one using calculus and one using linear algebra. Those who used the techniques of calculus included Louis de Broglie, Erwin Schrödinger, Paul Dirac, Hermann Weyl, Oskar Klein, Walter Gordon, Douglas Hartree and Vladimir Fock. This branch of quantum mechanics became known as "wave mechanics". Those who applied the methods of linear algebra included Werner Heisenberg, Max Born, Wolfgang Pauli and John Slater. This other branch came to be called "matrix mechanics". Schrödinger subsequently showed that the two approaches were equivalent.[2] In each case, the wavefunction was at the centre of attention in two forms, giving quantum mechanics its unity.

De Broglie could be considered the founder of the wave model in 1925, owing to his symmetric relation between momentum and wavelength: the de Broglie equation. Schrödinger searched for an equation that would describe these waves, and in 1926 he was the first to construct and publish an equation satisfied by the wave function, based on classical energy conservation. It is now called the Schrödinger equation. However, no one, not even Schrödinger and de Broglie, was clear on how to interpret it. What did this function mean?[3] Around 1924–27, Born, Heisenberg, Bohr and others provided the perspective of probability amplitude.[4] This is the Copenhagen interpretation of quantum mechanics. There are many other interpretations of quantum mechanics, but this one is considered the most important, since with it quantum calculations can be understood.

In 1927, Hartree and Fock took the first step in an attempt to solve the N-body wave function, and developed the self-consistency cycle: an iterative algorithm to approximate the solution. It is now known as the Hartree–Fock method.[5] The Slater determinant and permanent (of a matrix) were part of the method, contributed by Slater.

Interestingly, Schrödinger had encountered an equation whose wave function satisfied relativistic energy conservation before he published the non-relativistic one, but it led to unacceptable consequences, namely negative probabilities and negative energies, so he discarded it.[6]:3 In 1927, Klein, Gordon and Fock also found it, but took it a step further: they incorporated the electromagnetic interaction into it and proved that it was Lorentz-invariant. De Broglie also arrived at exactly the same equation in 1928. This wave equation is now known most commonly as the Klein–Gordon equation.[7]

By 1928, Dirac had deduced an equation from the first successful unification of special relativity and quantum mechanics as applied to the electron, now called the Dirac equation. He found that the wavefunction for this equation had an unusual character: it was not a single complex number, but a spinor.[5] Spin automatically entered into the properties of the wavefunction. Although there were problems, Dirac was able to resolve them. Around the same time Weyl also found his relativistic equation, which also had spinor solutions. Later other wave equations were developed: see Relativistic wave equations for further information.

Mathematical introduction

Wavefunctions as multi-variable functions – analytical calculus formalism

Multivariable calculus and analysis (study of functions, change etc.) can be used to represent the wavefunction in a number of situations. Superficially, this formalism is simple to understand for the following reasons.

  • It is more directly intuitive to have probability amplitudes as functions of space and time. At every position and time coordinate, the probability amplitude has a value by direct calculation.
  • Functions can easily describe wave-like motion, using periodic functions, and Fourier analysis can be readily done.
  • Functions are easy to produce, visualize and interpret, due to the pictorial nature of the graph of a function (i.e. curves, contour lines, and surfaces). When the situation is in a high number of dimensions (say 3-d space), it is possible to analyze the function in a lower-dimensional slice (say a 2-d plane) or in contour plots of the function to determine the behaviour of the system within that confined region.

Although these functions are continuous, they do not fix measurement outcomes deterministically; rather, their squared magnitudes give probability distributions. Perhaps oddly, this approach is not the most general way to represent probability amplitudes. The more advanced techniques use linear algebra (the study of vectors, matrices, etc.) and, more generally still, abstract algebra (algebraic structures, generalizations of Euclidean spaces etc.).

Wave functions as an abstract vector space – linear/abstract algebra formalism

The set of all possible wave functions (at any given time) forms an abstract mathematical vector space. Specifically, the entire wave function is treated as a single abstract vector:

\psi(\mathbf{r}) \leftrightarrow |\psi\rangle

where |ψ⟩ is a column vector written in bra–ket notation. The statement that "wave functions form an abstract vector space" simply means that it is possible to add together different wave functions, and multiply wave functions by complex numbers (see vector space for details). (Technically, because of the normalization condition, wave functions form a projective space rather than an ordinary vector space.) This vector space is infinite-dimensional, because there is no finite set of functions which can be added together in various combinations to create every possible function. Also, it is a Hilbert space, because the inner product of wave functions Ψ1(x) and Ψ2(x) can be defined as

\langle \Psi_1 | \Psi_2 \rangle \equiv \int\limits_{-\infty}^\infty d x \, \Psi_1^*(x)\Psi_2(x) ,

where * denotes complex conjugate.
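
As a concrete illustration, this inner product integral can be approximated numerically by sampling the wave functions on a grid; the two Gaussian-times-phase functions below are illustrative choices, not taken from the article.

import numpy as np

# Approximate <Psi_1|Psi_2> = integral of conj(Psi_1(x)) * Psi_2(x) dx on a grid.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# Two illustrative (unnormalized) wave functions: Gaussians with different phases.
psi1 = np.exp(-x**2 / 2) * np.exp(1j * 0.5 * x)
psi2 = np.exp(-(x - 1.0)**2 / 2) * np.exp(1j * 1.5 * x)

inner = np.sum(np.conj(psi1) * psi2) * dx   # discretized integral
print("<Psi1|Psi2> ≈", inner)
print("<Psi1|Psi1> ≈", np.sum(np.abs(psi1)**2) * dx)  # real and positive, as expected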

There are several advantages to understanding wave functions as elements of an abstract vector space:

  • All the powerful tools of linear algebra can be used to manipulate and understand wave functions. For example:
    • Linear algebra explains how a vector space can be given a basis, and then any vector can be expressed in this basis. This explains the relationship between a wave function in position space and a wave function in momentum space, and suggests that there are other possibilities too.
    • Bra–ket notation can be used to manipulate wave functions.
  • The idea that quantum states are vectors in a Hilbert space is completely general in all aspects of quantum mechanics and quantum field theory, whereas the idea that quantum states are complex-valued "wave" functions of space is only true in certain situations.

Introduction to vector formalism

Given an isolated physical system, the allowed states of this system (i.e. the states the system could occupy without violating the laws of physics) are part of a Hilbert space H. Some properties of such a space are

  • If |ψ⟩ and |ϕ⟩ are two allowed states, then a|ψ⟩ + b|ϕ⟩ is also an allowed state, provided |a|2 + |b|2 = 1. (This condition is due to normalization, see below.)
  • There is always an orthonormal basis of allowed states of the vector space H.

Physically, the nature of the inner product is dependent on the basis in use, because the basis is chosen to reflect the quantum state of the system.

When the basis is a countable set { | εi⟩ } and orthonormal, that is

\langle \varepsilon_i | \varepsilon_j \rangle = \delta_{ij},

then an arbitrary vector |ψ⟩ can be expressed as

| \psi \rangle = \sum_i c_i | \varepsilon_i \rangle,

where the components are the (complex) numbers ci = ⟨εi|ψ⟩. This wave function is said to have a discrete spectrum, since the basis is discrete.

When the basis is an uncountable set, the orthonormality condition holds similarly,

\langle \varepsilon | \varepsilon_0 \rangle = \delta \left ( \varepsilon - \varepsilon_0 \right ),

then an arbitrary vector | \psi \rangle can be expressed as

| \psi \rangle = \int d \varepsilon \psi(\varepsilon) | \varepsilon \rangle .

where the components are the functions ψ(ε) = ⟨ε|ψ⟩. This wave function is said to have a continuous spectrum, since the basis is continuous.

Paramount to the analysis is the Kronecker delta, δij, and the Dirac delta function, δ(ε − ε0), since the bases used are orthonormal. More detailed discussion of wave functions as elements of vector spaces is below, following further definitions.

Requirements

Continuity of the wavefunction and its first spatial derivative (in the x direction, y and z coordinates not shown), at some time t.

The wavefunction must satisfy the following constraints for the calculations and physical interpretation to make sense:[8]

  • It must everywhere be finite.
  • It must everywhere be a continuous function, and continuously differentiable (in the sense of distributions, for potentials that are not functions but are distributions, such as the Dirac delta function).
    • As a corollary, the function would be single-valued, else multiple probabilities occur at the same position and time, again unphysical.
  • It must everywhere satisfy the relevant normalization condition, so that the particle/system of particles exists somewhere with 100% certainty.

If these requirements are not met, it's not possible to interpret the wavefunction as a probability amplitude; the values of the wavefunction and its first order derivatives may not be finite and definite (with exactly one value), i.e. probabilities can be infinite and multiple-valued at any one position and time – which is nonsense, as it does not satisfy the probability axioms. Furthermore, when using the wavefunction to calculate a measurable observable of the quantum system without meeting these requirements, there will not be finite or definite values to calculate from – in this case the observable can take a number of values and can be infinite. This is unphysical and not observed when measuring in an experiment. Hence a wavefunction is meaningful only if these conditions are satisfied.

Information about quantum systems

Main articles: Quantum state and Operator (physics)

Although the wavefunction contains information, it is a complex number valued quantity; only its relative phase and relative magnitude can be measured. It does not directly tell anything about the magnitudes or directions of measurable observables. An operator extracts this information by acting on the wavefunction ψ. For details and examples on how quantum mechanical operators act on the wave function, commutation of operators, and expectation values of operators; see Operator (physics).

Definition (single spin-0 particle in one spatial dimension)

Standing waves for a particle in a box, examples of stationary states.
Travelling waves of a free particle.
The real parts of position and momentum wave functions Ψ(x) and Φ(p), and corresponding probability densities |Ψ(x)|2 and |Φ(p)|2, for one spin-0 particle in one x or p dimension. The wavefunctions shown are continuous, finite, single-valued and normalized. The colour opacity (%) of the particles corresponds to the probability density (not the wavefunction) of finding the particle at position x or momentum p.

Position-space wavefunction

For now, consider the simple case of a single particle, without spin, in one spatial dimension. (More general cases are discussed below). The state of such a particle is completely described by its wave function:

\Psi(x,t),

where x is position and t is time. This function is complex-valued, meaning that Ψ(x, t) is a complex number.

If the particle's position is measured, its location is not deterministic, but is described by a probability distribution. The probability that its position x will be in the interval [a, b] (meaning a ≤ x ≤ b) is:

P_{a\le x\le b} = \int\limits_a^b d x\,|\Psi(x,t)|^2

where t is the time at which the particle was measured. In other words, |Ψ(x, t)|2 is the probability density that the particle is at x, rather than some other location.

This leads to the normalization condition:

\int\limits_{-\infty}^\infty d x \, |\Psi(x,t)|^2  = 1,

because if the particle is measured, there is 100% probability that it will be somewhere.
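
A minimal numerical sketch of these two formulas, using a normalized Gaussian wave packet as an illustrative Ψ(x, t) at one fixed time (the packet and the interval [a, b] are assumptions, not taken from the article):

import numpy as np

# Normalized Gaussian: Psi(x) = (pi * sigma^2)^(-1/4) * exp(-x^2 / (2 sigma^2)).
sigma = 1.0
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2 * sigma**2))

density = np.abs(psi) ** 2                      # |Psi(x)|^2, the probability density
total = np.sum(density) * dx                    # normalization integral, should be ~1
a, b = -1.0, 1.0                                # illustrative interval
p_ab = np.sum(density[(x >= a) & (x <= b)]) * dx

print("integral of |Psi|^2 over all x ≈", round(total, 6))
print(f"P({a} <= x <= {b}) ≈", round(p_ab, 4))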

Momentum-space wavefunction

Main article: Momentum space

The particle also has a wave function in momentum space:

\Phi(p,t)

where p is the momentum in one dimension, which can be any value from −∞ to +∞, and t is time. If the particle's momentum is measured, the result is not deterministic, but is described by a probability distribution:

P_{a\le p\le b} = \int\limits_a^b d p \, |\Phi(p,t)|^2 ,

analogous to the position case.

The normalization condition is also similar:

 \int\limits_{-\infty}^{\infty} d p \, \left | \Phi \left ( p, t \right ) \right |^2 = 1.

Relation between wavefunctions

The position-space and momentum-space wave functions are Fourier transforms of each other; therefore both contain the same information, and either one alone is sufficient to calculate any property of the particle. In one dimension:[9]

\begin{align} \Phi(p,t) & = \frac{1}{\sqrt{2\pi\hbar}}\int\limits_{-\infty}^\infty d x \, e^{-ipx/\hbar} \Psi(x,t)\\  &\upharpoonleft \downharpoonright\\ \Psi(x,t) & = \frac{1}{\sqrt{2\pi\hbar}}\int\limits_{-\infty}^\infty d p \, e^{ipx/\hbar} \Phi(p,t). \end{align}

Sometimes the wave-vector k is used in place of momentum p, since they are related by the de Broglie relation

p = \hbar k,

and the equivalent space is referred to as k-space. Again it makes no difference which is used since p and k are equivalent – up to a constant. In practice, the position-space wavefunction is used much more often than the momentum-space wavefunction.

Example of normalization

A particle is restricted to a 1D region between x = 0 and x = L; its wave function is:

\Psi(x,t) = \begin{cases} A e^{i(kx - \omega t)}, & 0 \le x \le L \\ 0, & \text{otherwise} \end{cases}

To normalize the wave function we need to find the value of the arbitrary constant A; solved from

 \int\limits_{-\infty}^{\infty} dx \, |\Psi|^2  = 1 .

From Ψ, we have |Ψ|2;

 | \Psi  | ^2 = A^2 e^{i(kx - \omega t)} e^{-i(kx - \omega t)} =A^2 ,

so the integral becomes;

 \int\limits_{-\infty}^0 dx \cdot 0  + \int\limits_0^L dx \, A^2 + \int\limits_L^\infty dx \cdot 0 = 1 ,

therefore the constant is;

A^2 L = 1 \rightarrow A = \frac{1}{\sqrt{L}} .

The normalized wave function (in the region) is then given by;

 \Psi (x,t) = \frac{1}{\sqrt{L}} e^{i(kx-\omega t)}, \quad 0 \leq x \leq L.
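
A quick numerical check of this worked example (the values of L, k, ω and t below are illustrative):

import numpy as np

L = 2.0                        # illustrative box length
k, omega, t = 3.0, 1.0, 0.5    # illustrative wave number, angular frequency, time

x = np.linspace(0.0, L, 10001)
A = 1 / np.sqrt(L)                         # the constant found above
psi = A * np.exp(1j * (k * x - omega * t))

# |Psi|^2 = A^2 is constant inside the box, so the integral is A^2 * L = 1.
print("integral of |Psi|^2 from 0 to L ≈", round(np.trapz(np.abs(psi)**2, x), 6))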

Definition (other cases)

Many spin-0 particles in one spatial dimension

Travelling waves of two free particles. Top is position space wavefunction, bottom is momentum space wavefunction, with corresponding probability densities.

The previous wavefunction can be generalized to incorporate N particles in one dimension:

\Psi(x_1,x_2,\cdots x_N, t),

The probability that particle 1 is in an x-interval R1 = [a1,b1] and particle 2 in interval R2 = [a2,b2] etc., up to particle N in interval RN = [aN,bN], all measured simultaneously at time t, is given by:

P_{x_1\in R_1,x_2\in R_2 \cdots x_N\in R_N} = \int\limits_{a_1}^{b_1} d x_1 \int\limits_{a_2}^{b_2} d x_2  \cdots \int\limits_{a_N}^{b_N} d x_N | \Psi(x_1 \cdots x_N,t)|^2

The normalization condition becomes:

\int\limits_{-\infty}^\infty d x_1 \int\limits_{-\infty}^\infty d x_2 \cdots \int\limits_{-\infty}^\infty d x_N |\Psi(x_1 \cdots x_N,t)|^2 = 1.

In each case, there are N one-dimensional integrals, one for each particle.

One spin-0 particle in three spatial dimensions

Position space wavefunction

The electron probability density for the first few hydrogen atom electron orbitals shown as cross-sections. These orbitals form an orthonormal basis for the wave function of the electron. Different orbitals are depicted with different scale.

The position-space wave function of a single particle in three spatial dimensions is similar to the case of one spatial dimension above:

\Psi(\mathbf{r},t)

where r is the position in three-dimensional space (r is short for (x, y, z)), and t is time. As always Ψ(r, t) is a complex number. If the particle's position is measured at time t, the probability that it is in a region R is:

P_{\mathbf{r}\in R} = \int\limits_R d^3\mathbf{r} \, \left |\Psi(\mathbf{r},t) \right |^2

(a three-dimensional integral over the region R, with differential volume element d3r, also written "dV" or "dx dy dz"). The normalization condition is:

\int\limits_{{\rm all \, space}} \left | \Psi(\mathbf{r},t)\right |^2 d ^3\mathbf{r} = 1,

where the integrals are taken over all of three-dimensional space.

Momentum space wavefunction

There is a corresponding momentum space wavefunction for three dimensions also:

\Phi(\mathbf{p},t)

where p is the momentum in 3-dimensional space, and t is time. This time there are three components of momentum which can have values −∞ to +∞ in each direction, in Cartesian coordinates (x, y, z).

The probability of measuring the momentum components px between a and b, py between c and d, and pz between e and f, is given by:

P_{p_x\in[a,b],p_y\in[c,d],p_z\in[e,f]} = \int\limits_e^f dp_z\,\int\limits_c^d d p_y \, \int\limits_a^b d p_x \, \left | \Phi \left ( \mathbf{p}, t \right ) \right |^2 ,

hence the normalization:

 \int\limits_{{\rm all \, space}} d^3\mathbf{p} \, \left | \Phi \left ( \mathbf{p}, t \right ) \right |^2 = 1.

Analogous to position space, d3p = dpxdpydpz is the differential 3-momentum volume element in momentum space.

Relation between wavefunctions

The generalization of the previous Fourier transform is[10]

\begin{align} \Phi(\mathbf{p},t) & = \frac{1}{\sqrt{\left(2\pi\hbar\right)^3}}\int\limits_{{\rm all \, space}} d ^3\mathbf{r} \, e^{-i \mathbf{r}\cdot \mathbf{p} /\hbar} \Psi(\mathbf{r},t) \\  &\upharpoonleft \downharpoonright\\ \Psi(\mathbf{r},t) & = \frac{1}{\sqrt{\left(2\pi\hbar\right)^3}}\int\limits_{{\rm all \, space}} d^3\mathbf{p} \, e^{i \mathbf{r}\cdot \mathbf{p} /\hbar} \Phi(\mathbf{p},t) . \end{align}

Many spin-0 particles in three spatial dimensions

When there are many particles, in general there is only one wave function, not a separate wave function for each particle. The fact that one wave function describes many particles is what makes quantum entanglement and the EPR paradox possible. The position-space wave function for N particles is written:[5]

\Psi(\mathbf{r}_1,\mathbf{r}_2 \cdots \mathbf{r}_N,t)

where ri is the position of the ith particle in three-dimensional space, and t is time. If the particles' positions are all measured simultaneously at time t, the probability that particle 1 is in region R1 and particle 2 is in region R2 and so on is:

P_{\mathbf{r}_1\in R_1,\mathbf{r}_2\in R_2 \cdots \mathbf{r}_N\in R_N} = \int\limits_{R_1} d ^3\mathbf{r}_1 \int\limits_{R_2} d ^3\mathbf{r}_2\cdots \int\limits_{R_N} d ^3\mathbf{r}_N |\Psi(\mathbf{r}_1 \cdots \mathbf{r}_N,t)|^2

The normalization condition is:

\int\limits_{{\rm all \, space}} d ^3\mathbf{r}_1 \int\limits_{{\rm all \, space}} d ^3\mathbf{r}_2\cdots \int\limits_{{\rm all \, space}} d ^3\mathbf{r}_N |\Psi(\mathbf{r}_1 \cdots \mathbf{r}_N,t)|^2 = 1

(altogether, this is 3N one-dimensional integrals).

For N interacting particles, i.e. particles which interact mutually and constitute a many-body system, the wavefunction is a function of all positions of the particles and time, it can't be separated into the separate wavefunctions of the particles. However, for non-interacting particles, i.e. particles which do not interact mutually and move independently, in a time-independent potential, the wavefunction can be separated into the product of separate wavefunctions for each particle:[8]

\Psi = \phi(t)\prod_{i=1}^N\psi(\mathbf{r}_i) = \phi(t)\psi(\mathbf{r}_1)\psi(\mathbf{r}_2)\cdots\psi(\mathbf{r}_N).
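
A minimal sketch of this separable case for two non-interacting particles, shown in one spatial dimension for brevity; the single-particle Gaussians and the trivial time factor φ(t) = 1 are illustrative assumptions.

import numpy as np

x = np.linspace(-10.0, 10.0, 801)
dx = x[1] - x[0]

# Illustrative normalized single-particle wavefunctions.
psi_a = np.pi ** -0.25 * np.exp(-x**2 / 2)
psi_b = np.pi ** -0.25 * np.exp(-(x - 1.0)**2 / 2)

# Product wavefunction Psi(x1, x2) = psi_a(x1) * psi_b(x2) on a 2D grid.
Psi = np.outer(psi_a, psi_b)

# The joint normalization integral factorizes into the two 1D integrals.
norm = np.sum(np.abs(Psi) ** 2) * dx * dx
print("integral of |Psi(x1, x2)|^2 ≈", round(norm, 6))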

One particle with spin in three dimensions

For a particle with spin, the wave function can be written in "position-spin-space" as:

\Psi(\mathbf{r},s_z,t)

where r is a position in three-dimensional space, t is time, and sz is the spin projection quantum number along the z axis. (The z axis is an arbitrary choice; other axes can be used instead if the wave function is transformed appropriately, see below.) The sz parameter, unlike r and t, is a discrete variable. For example, for a spin-1/2 particle, sz can only be +1/2 or −1/2, and not any other value. (In general, for spin s, sz can be s, s – 1,...,–s.) If the particle's position and spin are measured simultaneously at time t, the probability that its position is in a region R and its spin projection quantum number is a certain value sz = m is:

P_{\mathbf{r}\in R,s_z=m} = \int\limits_{R} d ^3\mathbf{r} |\Psi(\mathbf{r},t,m)|^2

The normalization condition is:

\sum_{\mathrm{all\, }s_z} \int\limits_{{\rm all \, space}} |\Psi(\mathbf{r},t,s_z)|^2 d ^3\mathbf{r} = 1.

Since the spin quantum number has discrete values, it must be written as a sum rather than an integral, taken over all possible values.

Many particles with spin in three dimensions

Likewise, the wavefunction for N particles each with spin is:

\Psi(\mathbf{r}_1, \mathbf{r}_2 \cdots \mathbf{r}_N, s_{z\,1}, s_{z\,2} \cdots s_{z\,N}, t)

The probability that particle 1 is in region R1 with spin sz1 = m1 and particle 2 is in region R2 with spin sz2 = m2 etc. reads (probability subscripts now removed due to their great length):

P = \int\limits_{R_1} d ^3\mathbf{r}_1 \int\limits_{R_2} d ^3\mathbf{r}_2\cdots \int\limits_{R_N} d ^3\mathbf{r}_N \left | \Psi\left (\mathbf{r}_1 \cdots \mathbf{r}_N,m_1\cdots m_N,t \right ) \right |^2

The normalization condition is:

 \sum_{s_{z\,N}} \cdots \sum_{s_{z\,2}} \sum_{s_{z\,1}} \int\limits_{{\rm all \, space}} d ^3\mathbf{r}_1 \int\limits_{{\rm all \, space}} d ^3\mathbf{r}_2\cdots \int\limits_{{\rm all \, space}} d ^3 \mathbf{r}_N \left | \Psi \left (\mathbf{r}_1 \cdots \mathbf{r}_N,s_{z\,1}\cdots s_{z\,N},t \right ) \right |^2 = 1

Now there are 3N one-dimensional integrals followed by N sums.

Again, for non-interacting particles in a time-independent potential the wavefunction is the product of separate wavefunctions for each particle:[8]

\Psi = \phi(t)\prod_{i=1}^N\psi(\mathbf{r}_i,s_{z\,i}) = \phi(t)\psi(\mathbf{r}_1,s_{z\,1})\psi(\mathbf{r}_2,s_{z\,2})\cdots\psi(\mathbf{r}_N,s_{z\,N}).

Wavefunction symmetry and antisymmetry

In quantum mechanics there is a fundamental distinction between identical particles and distinguishable particles. For example, any two electrons are fundamentally indistinguishable from each other; the laws of physics make it impossible to "stamp an identification number" on a certain electron to keep track of it.[11] This translates to a requirement on the wavefunction: For example, if particles 1 and 2 are indistinguishable, then:

\Psi \left ( \mathbf{r},\mathbf{r'},\mathbf{r}_3,\mathbf{r}_4,\cdots \right ) = \left ( -1 \right )^{2s} \Psi \left ( \mathbf{r'},\mathbf{r},\mathbf{r}_3,\mathbf{r}_4,\cdots \right )

where s is the spin quantum number of the particle: integer for bosons (s = 1, 2, 3...) and half-integer for fermions (s = 1/2, 3/2...).

The wavefunction is said to be symmetric (no sign change) under boson interchange and antisymmetric (sign changes) under fermion interchange. This feature of the wavefunction is known as the Pauli principle.
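
A minimal sketch of the exchange rule for two particles in one dimension, constructing the symmetric (bosonic) and antisymmetric (fermionic) combinations from two illustrative single-particle functions and checking the sign under a swap:

import numpy as np

def phi_a(x):                # illustrative single-particle states
    return np.exp(-(x - 1.0) ** 2)

def phi_b(x):
    return np.exp(-(x + 1.0) ** 2)

def psi_boson(x1, x2):       # symmetric: unchanged when the particles are swapped
    return phi_a(x1) * phi_b(x2) + phi_a(x2) * phi_b(x1)

def psi_fermion(x1, x2):     # antisymmetric: changes sign when the particles are swapped
    return phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

x1, x2 = 0.3, -0.7
print("boson:  ", psi_boson(x1, x2), "equals", psi_boson(x2, x1))
print("fermion:", psi_fermion(x1, x2), "equals minus", psi_fermion(x2, x1))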

Normalization invariance

It is important that the properties associated with the wave function are invariant under normalization. If normalization changed these properties, the procedure would be pointless, as we could no longer extract any information about the particle from the non-normalized wave function.

All properties of the particle, such as momentum, energy, expectation value of position, associated probability distributions etc., are solved from the Schrödinger equation (or other relativistic wave equations). The Schrödinger equation is a linear differential equation, so if Ψ is normalized and becomes AΨ (A is the normalization constant), then the equation reads:

 \hat{H} (A\Psi) = i\hbar\frac{\partial }{\partial t}(A\Psi) \rightarrow \hat{H} \Psi = i\hbar\frac{\partial }{\partial t}\Psi

which is the original Schrödinger equation. That is to say, the Schrödinger equation is invariant under normalization, and consequently associated properties are unchanged.

Wavefunctions as vector spaces

Discrete components Ak of a complex vector |A⟩ = ∑k Ak|ek⟩, which belongs to a countably infinite-dimensional Hilbert space; there are countably infinitely many k values and basis vectors |ek⟩.
Continuous components ψ(x) of a complex vector |ψ⟩ = ∫dx ψ(x)|x⟩, which belongs to an uncountably infinite-dimensional Hilbert space; there are uncountably infinitely many x values and basis vectors |x⟩.
Components of complex vectors plotted against index number; discrete k and continuous x. Two probability amplitudes out of infinitely many are highlighted.
Main article: Quantum state

As explained above, quantum states are always vectors in an abstract vector space (technically, a complex projective Hilbert space). For the wave functions above, the Hilbert space usually has not only infinite dimensions, but uncountably infinitely many dimensions. However, linear algebra is much simpler for finite-dimensional vector spaces. Therefore it is helpful to look at an example where the Hilbert space of wave functions is finite dimensional.

Basis representation

A wave function describes the state of a physical system |ψ⟩, by expanding it in terms of other possible states of the same system – collectively referred to as a basis or representation |εi⟩. In what follows, all wave functions are assumed to be normalized.

An element of a vector space can be expressed in terms of different basis elements, and the same applies to wave functions. The components of a wave function describing the same physical state take different complex values depending on the basis being used; however, just like elements of a vector space, the wave function itself does not depend on the basis chosen. Choosing a new coordinate system does not change the vector itself, only the representation of the vector with respect to the new coordinate frame, since the components will be different but the linear combination of them still equals the vector.

Finite dimensional Hilbert spaces

A wave function ψ with n components describes how to express the state of the physical system |ψ⟩ as the linear combination of n basis elements |εi⟩, (i = 1, 2...n). Following is a breakdown of the used formalism.

In bra–ket notation, the quantum state of a particle can be written as a ket;

 | \psi \rangle  = \sum_{i = 1}^n c_i | \varepsilon_i \rangle  = c_1 | \varepsilon_1 \rangle + c_2 | \varepsilon_2 \rangle + \cdots c_n | \varepsilon_n \rangle  = \begin{bmatrix} \langle \varepsilon_1 | \psi \rangle \\ \vdots \\ \langle \varepsilon_n | \psi \rangle \end{bmatrix}  = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} .

The basis here is orthonormal:

 \langle \varepsilon_i | \varepsilon_j \rangle = \delta_{ij},

where δij is the Kronecker delta. The corresponding bra is the Hermitian conjugate – the transposed complex conjugate matrix (into a row matrix/row vector):

 \begin{align} \langle \psi | = | \psi \rangle^{*} & = \begin{bmatrix} \langle \varepsilon_1 | \psi \rangle & \cdots & \langle \varepsilon_n | \psi \rangle \end{bmatrix}^{*} = \begin{bmatrix} \langle \varepsilon_1 | \psi \rangle^{*} & \cdots & \langle \varepsilon_n | \psi \rangle^{*} \end{bmatrix} \\  & = \begin{bmatrix} c_1 & \cdots & c_n \end{bmatrix}^{*} = \begin{bmatrix} c_1^{*} & \cdots & c_n^{*} \end{bmatrix} \end{align}

Kets are analogous to the more elementary Euclidean vectors, although the components are complex-valued. The state can be expanded in any convenient basis of the Hilbert space. Simple examples can be found from a two-state quantum system, two energy eigenstates:

 | \psi \rangle = \psi_1 | E_1 \rangle + \psi_2 | E_2 \rangle.

and two spin states (up or down):

 | \psi \rangle = \psi_+ | \uparrow_z \rangle + \psi_{-} | \downarrow_z \rangle ,

(see below for details of this frequent case). In these examples, the particle is not in any one definite or preferred state, but rather in both at the same time; hence the term superposition. The relative chance of which state occurs is related to the (squared moduli of the) coefficients.

Projecting the initial state |ψ⟩ onto a particular state |εq⟩ (the state the system may collapse to) gives the complex number;

 \begin{align} \langle \varepsilon_q | \psi \rangle & = \langle \varepsilon_q | \left ( \sum_{i = 1}^n c_i | \varepsilon_i \rangle \right ) \\   & = c_1 \langle \varepsilon_q | \varepsilon_1 \rangle  + c_2 \langle \varepsilon_q | \varepsilon_2 \rangle  + \cdots + c_q \langle \varepsilon_q | \varepsilon_q \rangle  + \cdots c_n \langle \varepsilon_q | \varepsilon_n \rangle \\ & = c_q \,, \end{align}

so the modulus squared of this gives a real number;

 |c_q|^2 = { | \langle \varepsilon_q | \psi \rangle | }^2 \,,

the probability of state |εq⟩ occurring. The probabilities of all possible states must sum to 1 (see normalization using kets below), implying the constraint:

 \sum_i | c_i |^2 = 1
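
A small numerical illustration of this finite-dimensional formalism; the three-component ket and its particular coefficients are illustrative assumptions.

import numpy as np

# An illustrative ket with n = 3 components, written in an orthonormal basis.
c = np.array([0.5 + 0.5j, 0.1 - 0.3j, 0.2 + 0.0j])
c = c / np.linalg.norm(c)             # enforce normalization

basis = np.eye(3, dtype=complex)      # columns are |eps_1>, |eps_2>, |eps_3>

# The projection <eps_q|psi> just picks out the component c_q.
for q in range(3):
    amplitude = np.vdot(basis[:, q], c)          # <eps_q|psi>; vdot conjugates the bra
    print(f"c_{q+1} =", np.round(amplitude, 3),
          " probability =", round(abs(amplitude) ** 2, 3))

print("sum of probabilities =", round(np.sum(np.abs(c) ** 2), 10))  # 1.0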

Closure relation in the discrete bases

Taking the state above

 |\psi\rangle = \sum_{i=1}^n c_i | \varepsilon_i \rangle = \sum_{i=1}^n \langle \varepsilon_i | \psi \rangle| \varepsilon_i \rangle = \left(\sum_{i=1}^n | \varepsilon_i \rangle \langle \varepsilon_i | \right ) | \psi \rangle \,

we obtain the closure relation:

 \sum_{i=1}^n | \varepsilon_i \rangle \langle \varepsilon_i | = 1 .

The equality to unity means this is an identity operator (its action on any state leaves it unchanged). Suppose we have another wavefunction in the same basis:

 | \chi \rangle = \sum_{j = 1}^n z_j | \varepsilon_j \rangle = z_1 | \varepsilon_1 \rangle + z_2 | \varepsilon_2 \rangle + \cdots z_n | \varepsilon_n \rangle = \begin{bmatrix} \langle \varepsilon_1 | \chi \rangle \\ \vdots \\ \langle \varepsilon_n | \chi \rangle \end{bmatrix} = \begin{bmatrix} z_1 \\ \vdots \\ z_n \end{bmatrix} .

then the inner product can be obtained:

 \langle \chi | \psi \rangle = \langle \chi | 1 | \psi \rangle = \langle \chi | \left( \sum_{i=1}^n | \varepsilon_i \rangle \langle \varepsilon_i | \right) | \psi \rangle = \sum_{i=1}^n \langle \chi | \varepsilon_i \rangle \langle \varepsilon_i | \psi \rangle = \sum_{i=1}^n z_i^{*} c_i.

Normalization in discrete bases

The norm or magnitude of the state vector |ψ⟩ is:

 \|\psi\|^2 = \langle \psi | \psi \rangle = \sum_{j=1}^n | c_j |^2  .

which says that the inner product of a state with itself is real. The sum of all probabilities of basis states occurring must be unity:

 \frac{1}{\|\psi\|^2}\langle \psi | \psi \rangle = \frac{1}{\|\psi\|^2}\sum_{j=1}^n | c_j |^2 = 1 \,,

so the normalized state |ψN⟩ in all generality is:

 | \psi_N \rangle = \frac{1}{\sqrt{\langle \psi|\psi\rangle}} | \psi \rangle

Compare the similarity with Euclidean unit vectors a in elementary vector calculus:

 \mathbf{\hat{a}} = \frac{1}{\sqrt{\mathbf{a}\cdot\mathbf{a}}}\mathbf{a}

The parallels are identical: the magnitude of the vector, geometric or abstract, is reduced to 1 by dividing by its magnitude.
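
The parallel can be made explicit in a few lines of code; the particular unnormalized ket and Euclidean vector are illustrative.

import numpy as np

# Normalizing an abstract ket: |psi_N> = |psi> / sqrt(<psi|psi>).
psi = np.array([3.0 + 4.0j, 1.0 - 2.0j, 0.5 + 0.0j])     # illustrative, unnormalized
psi_normalized = psi / np.sqrt(np.vdot(psi, psi).real)
print("<psi_N|psi_N> =", round(np.vdot(psi_normalized, psi_normalized).real, 10))

# Normalizing an ordinary Euclidean vector: a_hat = a / sqrt(a . a) -- same recipe.
a = np.array([3.0, 4.0, 0.0])
a_hat = a / np.sqrt(a @ a)
print("|a_hat| =", round(np.linalg.norm(a_hat), 10))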

Application to one spin-½ particle (neglect spatial freedom)

A simple and important case is a spin-½ particle, but for this instance ignore its spatial degrees of freedom. Using the definition above, the wave function can now be written without position dependence:

\Psi \left ( s_z,t \right ),

where again sz is the spin quantum number in the z-direction, either +1/2 or −1/2. So at a given time t, Ψ is completely characterized by just the two complex numbers Ψ(+1/2,t) and Ψ(–1/2,t). For simplicity these are often written as Ψ(+1/2,t) ≡ Ψ+ ≡ Ψ↑, and Ψ(–1/2,t) ≡ Ψ– ≡ Ψ↓ respectively. This is still called a "wave function", even though in this situation it has no resemblance to familiar waves (like mechanical waves), being only a pair of numbers instead of a continuous function.

Using the above formalism, the two numbers characterizing the wave function can be written as a column vector:

\vec \psi = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}

where c1 = Ψ+ and c2 = Ψ−. Therefore the set of all possible wave functions is a two dimensional complex vector space. If the particle's spin projection in the z-direction is measured, it will be spin up (+1/2 ≡ ↑z) with probability |c1|2, and spin down (–1/2 ≡ ↓z) with probability |c2|2.

In bra–ket notation this can be written:

 \begin{align} | \psi \rangle & = c_1 | \uparrow_z \rangle + c_2 | \downarrow_z \rangle \\  & = \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} \Psi_{+} \\ \Psi_{-} \end{bmatrix} = \begin{bmatrix} \langle \uparrow_z | \psi \rangle \\ \langle \downarrow_z | \psi \rangle \end{bmatrix} \end{align},

using the basis vectors (in alternate notations)

 | \uparrow_z \rangle \equiv | + \rangle for "spin up" or sz = +1/2,
 | \downarrow_z \rangle \equiv | - \rangle for "spin down" or sz = –1/2.

The normalization requirement is

|c_1|^2+|c_2|^2 = 1,

which says the probability of the particle in the spin up state (↑z, corresponding to the coefficient c1) plus the probability in the spin down (↓z, corresponding to the coefficient c2) state is 1.

To see this explicitly for this case, expand the ket in terms of the bases:

 | \psi \rangle =  c_1| \uparrow_z \rangle + c_2| \downarrow_z \rangle ,

implying

\langle \psi | =  c_1^{*} \langle \uparrow_z | + c_2^{*} \langle \downarrow_z | ,

taking the inner product (and recalling orthonormality) leads to the normalization condition:

 \begin{align} \langle \psi | \psi \rangle & = \left ( c_1| \uparrow_z \rangle + c_2| \downarrow_z \rangle \right ) \left ( c_1^{*} \langle \uparrow_z | + c_2^{*} \langle \downarrow_z | \right ) \\ & =  c_1| \uparrow_z \rangle \left ( c_1^{*} \langle \uparrow_z | + c_2^{*} \langle \downarrow_z | \right ) + c_2| \downarrow_z \rangle \left ( c_1^{*} \langle \uparrow_z | + c_2^{*} \langle \downarrow_z | \right ) \\ & = c_1 c_1^{*} \langle \uparrow_z | \uparrow_z \rangle + c_1 c_2^{*} \langle \downarrow_z | \uparrow_z \rangle + c_2 c_1^{*} \langle \uparrow_z | \downarrow_z \rangle + c_2 c_2^{*} \langle \downarrow_z | \downarrow_z \rangle \\ & = |c_1|^2+|c_2|^2 \\  & = 1 \end{align}
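
A compact numerical version of this spin-½ case; the particular coefficients c1 and c2 are illustrative (chosen so that |c1|^2 + |c2|^2 = 1).

import numpy as np

up = np.array([1.0, 0.0], dtype=complex)     # |up_z>
down = np.array([0.0, 1.0], dtype=complex)   # |down_z>

c1, c2 = 0.6, 0.8j                           # illustrative amplitudes
psi = c1 * up + c2 * down                    # |psi> = c1 |up_z> + c2 |down_z>

p_up = abs(np.vdot(up, psi)) ** 2            # probability of measuring spin up
p_down = abs(np.vdot(down, psi)) ** 2        # probability of measuring spin down
norm = np.vdot(psi, psi).real                # <psi|psi>

print("P(up) =", round(p_up, 3), " P(down) =", round(p_down, 3),
      " total =", round(norm, 10))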

Infinite dimensional vectors

States can have countably infinitely many components;

 \left | \psi \right \rangle  = \sum_{i = 1}^\infty c_i \left | \varepsilon_i \right \rangle  = c_1 \left | \varepsilon_1 \right \rangle + c_2 \left | \varepsilon_2 \right \rangle + \cdots  = \begin{bmatrix} \left \langle \varepsilon_1 | \psi \right \rangle \\ \vdots \\ \left \langle \varepsilon_n | \psi \right \rangle \\ \vdots \end{bmatrix}  = \begin{bmatrix} c_1 \\ \vdots \\ c_n \\ \vdots \end{bmatrix} .

with corresponding bra as before:

 \begin{align} \langle \psi | = | \psi \rangle^{*} & = \begin{bmatrix} \langle \varepsilon_1 | \psi \rangle & \cdots & \langle \varepsilon_n | \psi \rangle & \cdots \end{bmatrix}^{*} = \begin{bmatrix} \langle \varepsilon_1 | \psi \rangle^{*} & \cdots & \langle \varepsilon_n | \psi \rangle^{*} & \cdots \end{bmatrix} \\  & = \begin{bmatrix} c_1 & \cdots & c_n & \cdots \end{bmatrix}^{*} = \begin{bmatrix} c_1^{*} & \cdots & c_n^{*} & \cdots \end{bmatrix} \end{align}

They can also have an uncountably infinite number of components. The collection of all states |ψ⟩ is a continuum of states. While finite or countably infinite basis vectors are summed over a discrete index, uncountably infinite basis vectors are integrated over a continuous index (a variable of a function). In what follows, all integrals are with respect to the basis variable ε (a real number or vector, not complex-valued), over the required range. Usually this is just the real line or subset intervals of it. The state |ψ⟩ is given by:

 | \psi \rangle = \int d \varepsilon | \varepsilon \rangle \psi(\varepsilon) \,,

with corresponding bra:

 \langle \psi | = \int d \varepsilon \langle \varepsilon | {\psi(\varepsilon)}^{*} \,,

and again the basis here is orthonormal:

 \langle \varepsilon | \varepsilon' \rangle = \delta (\varepsilon-\varepsilon')\,.

As with the discrete bases, a symbol ε is used in the basis states; two common notations are |ε⟩ and sometimes |ψε⟩. A particular basis ket may be subscripted |ε0⟩ ≡ |ψε0⟩ or primed |ε′⟩ ≡ |ψε′⟩.

The components of the state |ψ⟩ are still ⟨ε|ψ⟩; the projection of the state onto a particular basis state is a function;

 \langle \varepsilon_0 | \psi \rangle = \langle \varepsilon_0 | \left( \int d \varepsilon | \varepsilon \rangle \psi(\varepsilon) \right) =  \int d \varepsilon \langle \varepsilon_0 | \varepsilon \rangle \psi(\varepsilon) = \int d \varepsilon \delta( \varepsilon_0 - \varepsilon ) \psi(\varepsilon) = \psi(\varepsilon_0) \,,

This time

 | \psi(\varepsilon) |^2 = | \langle \varepsilon | \psi \rangle |^2

is the probability density function of measuring the observable ε, so integrating this with respect to ε between a ≤ ε ≤ b gives:

 P_{a \leq \varepsilon \leq b} =  \int_a^b d\varepsilon | \psi(\varepsilon) |^2 =   \int_a^b d\varepsilon| \langle \varepsilon | \psi \rangle |^2 \,,

the probability of finding the system with ε between ε = a and ε = b.

Closure relation in continuous bases

Taking the state above

 |\psi\rangle = \int d \varepsilon \, | \varepsilon \rangle \psi(\varepsilon) = \int d \varepsilon \,  | \varepsilon \rangle \langle \varepsilon | \psi \rangle  = \left(\int d\varepsilon \,  | \varepsilon \rangle \langle \varepsilon | \right) | \psi \rangle \,

we obtain the closure relation:

 \int d\varepsilon \, | \varepsilon \rangle \langle \varepsilon | = 1 \,

Also the inner product can be obtained:

 \langle \chi | \psi \rangle = \langle \chi | 1 | \psi \rangle = \langle \chi| \left( \int d \varepsilon | \varepsilon \rangle \langle \varepsilon | \right) | \psi \rangle = \int d \varepsilon \langle \chi | \varepsilon \rangle \langle \varepsilon | \psi \rangle = \int d \varepsilon \chi(\varepsilon)^{*} \psi(\varepsilon) .

Normalization in continuous bases

Taking the inner product;

 \langle \psi | \psi \rangle = \int d \varepsilon \, | \psi(\varepsilon) |^2  = \|\psi\|^2 .

This integral is the total probability of all basis states occurring, so it must be 1 as before:

 \frac{1}{\|\psi\|^2}\langle \psi | \psi \rangle = \frac{1}{\|\psi\|^2} \int d \varepsilon | \psi(\varepsilon) |^2 = 1

hence

 | \psi_N \rangle = \frac{1}{\sqrt{\langle \psi|\psi\rangle}} | \psi \rangle

In practice, a ket is normalized exactly as above: evaluate the normalizing integral ⟨ψ|ψ⟩ and divide the state by its square root.
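
A minimal numerical sketch of that step, again assuming a wavefunction sampled on a grid (the deliberately un-normalized Gaussian is invented for the example):

import numpy as np

eps = np.linspace(-10.0, 10.0, 2001)
d_eps = eps[1] - eps[0]
psi = 3.0 * np.exp(-eps**2 / 2.0)                 # deliberately un-normalized

norm_sq = np.sum(np.abs(psi)**2) * d_eps          # <psi|psi> as a Riemann sum
psi_n = psi / np.sqrt(norm_sq)                    # |psi_N> = |psi> / sqrt(<psi|psi>)
print(np.sum(np.abs(psi_n)**2) * d_eps)           # 1.0 up to discretization error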

Application to position, momentum and spin state spaces

The following are illustrated in position space. For momentum space, the equations need only the replacement x → px in 1D, or r → p in 3D. Of course, they can be generalized to more than one particle, requiring multiple sums or integrals for each particle, as shown previously.

One spin-0 particle in one dimension

For a spinless particle in one spatial dimension (the x-axis or real line), the state |ψ⟩ can be expanded in terms of a continuum of states; i.e. |ε⟩ ≡ |ψε⟩ → |x⟩ ≡ |ψx⟩, corresponding to each x.

If the particle is confined to a region R (a subset of the x-axis), the state is:

 | \psi \rangle = \int\limits_R d x \, | x \rangle \langle x | \psi \rangle  = \int\limits_R d x \, \psi(x) | x \rangle

leading to the closure relation

 1 = \int\limits_R d x \, | x \rangle \langle x |

and the inner product as stated at the beginning of this article (in that case R = (−∞, ∞)):

 \langle \chi | \psi \rangle = \int\limits_R d x \, \langle \chi | x \rangle \langle x | \psi \rangle = \int\limits_R d x \, \chi(x)^{*} \psi(x) \,.

The "wavefunction" described previously is simply a component of the complex state vector. Projecting |ψ⟩ onto a particular position state |x0⟩, where x0 is in R:

 \langle x_0 | \psi \rangle = \int\limits_R d x \, \langle x_0 | x \rangle \psi(x) = \int\limits_R d x \, \delta( x_0 - x )  \psi(x) = \psi(x_0) \,.

One spin-0 particle in three dimensions

The generalization of the previous result is straightforward. In three dimensions, |ψ⟩ can be expanded in terms of a continuum of states with definite position, so |ε⟩ ≡ |ψε⟩ → |r⟩ ≡ |x, y, z⟩ ≡ |ψr⟩, corresponding to each r = (x, y, z).

If the particle is confined to a region R (a subset of 3d space), the state is;

 | \psi \rangle  = \int\limits_R d^3\mathbf{r} \, | \mathbf{r} \rangle \langle \mathbf{r} | \psi\rangle = \int\limits_R d^3\mathbf{r} \, \psi(\mathbf{r}) | \mathbf{r} \rangle

The closure relation is

 1 = \int\limits_R d^3\mathbf{r} \, | \mathbf{r} \rangle \langle \mathbf{r} |

and the inner product follows; setting χ = ψ recovers the three-dimensional normalization condition stated above:

 \langle \chi | \psi \rangle = \int\limits_R d^3\mathbf{r} \, \langle \chi | \mathbf{r} \rangle \langle \mathbf{r} | \psi \rangle = \int\limits_R d^3\mathbf{r} \, \chi(\mathbf{r})^{*} \psi(\mathbf{r}) .

Projecting  | \psi \rangle onto a particular position state |r0⟩, where r0 is in R:

 \langle \mathbf{r}_0 | \psi \rangle = \int\limits_R d^3 \mathbf{r} \, \langle \mathbf{r}_0 | \mathbf{r} \rangle \psi(\mathbf{r}) = \int\limits_R d^3 \mathbf{r} \, \delta( \mathbf{r}_0 - \mathbf{r} ) \psi(\mathbf{r}) = \psi(\mathbf{r}_0)

The above expressions take the same form for any number of spatial dimensions.

One particle with spin in three dimensions

For a particle with spin s, in all three spatial dimensions, the basis states |r, sz⟩ are a combination of the discrete variable sz (the z-component spin quantum number) and the continuous variable r (position of the particle).[12] Applying the above formalism, the state can be written:

 | \Psi \rangle = \sum_{s_z} \int\limits_R d^3 \mathbf{r} \, \Psi(\mathbf{r},s_z) | \mathbf{r}, s_z \rangle

and therefore the closure relation (identity operator) is:

 1 = \sum_{s_z} \int\limits_R d^3 \mathbf{r} \, | \mathbf{r},s_z\rangle \langle \mathbf{r} , s_z |

Projecting |Ψ⟩ onto a particular position-spin state |r0, m⟩, where r0 is in R:

 \langle \mathbf{r}_0, m | \Psi \rangle = \sum_{s_z}\int\limits_R d^3 \mathbf{r} \, \langle \mathbf{r}_0, m | \mathbf{r}, s_z \rangle \Psi(\mathbf{r}, s_z) = \sum_{s_z}\int\limits_R d^3 \mathbf{r} \, \delta_{m \, s_z}\delta( \mathbf{r}_0 - \mathbf{r} )  \Psi(\mathbf{r}, s_z) = \Psi(\mathbf{r}_0, m) \,.

where the joint orthogonality relation

\langle \mathbf{r}_0, m | \mathbf{r}, s_z \rangle = \delta_{m\,s_z}\delta( \mathbf{r}_0 - \mathbf{r} )

has been used.
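
As an illustrative numerical sketch (assuming spin 1/2 and, for brevity, one spatial dimension instead of three), the combined sum-over-spin plus integral-over-position normalization looks like this:

import numpy as np

# Hypothetical spin-1/2 particle on a 1D grid (a 3D grid works the same way, with the
# integral becoming a sum over three coordinates).  Axis 0: position, axis 1: s_z = +1/2, -1/2.
x = np.linspace(-5.0, 5.0, 1001)
dx = x[1] - x[0]
Psi = np.empty((x.size, 2), dtype=complex)
Psi[:, 0] = np.exp(-x**2)                         # spin-up component   Psi(x, +1/2)
Psi[:, 1] = 0.5j * np.exp(-x**2)                  # spin-down component Psi(x, -1/2)

# Normalization: the sum over s_z of the integral over x of |Psi(x, s_z)|^2 must equal 1.
norm_sq = np.sum(np.abs(Psi)**2) * dx
Psi /= np.sqrt(norm_sq)
print(np.sum(np.abs(Psi)**2) * dx)                # ~1.0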

Time dependence

In the Schrödinger picture, the states evolve in time, so the time dependence is placed in |ψ⟩ according to:[13]

|\psi(t)\rangle = \sum_i \, | \varepsilon_i \rangle \langle \varepsilon_i | \psi(t)\rangle = \sum_i c_i(t) | \varepsilon_i \rangle

for discrete bases, or

|\psi(t)\rangle = \int d\varepsilon \, | \varepsilon \rangle \langle \varepsilon | \psi(t)\rangle  =  \int d\varepsilon \, \psi(\varepsilon,t) | \varepsilon \rangle

for continuous bases. However, in the Heisenberg picture the states |ψ⟩ are constant in time and time dependence is placed in the Heisenberg operators, so |ψ⟩ is not written as |ψ(t)⟩.
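
As a toy illustration of the Schrödinger-picture expansion above: if the basis happens to be the energy eigenbasis of a time-independent Hamiltonian (an extra assumption, not made in the text), each coefficient simply acquires a phase, c_i(t) = c_i(0) e^{-i E_i t / ħ}. The energies and initial coefficients below are invented:

import numpy as np

hbar = 1.054571817e-34                        # J s

# Hypothetical two-level system; the energies and initial coefficients are made up.
E = np.array([1.0e-19, 3.0e-19])              # energy eigenvalues E_i, in joules
c0 = np.array([1.0, 1.0]) / np.sqrt(2.0)      # coefficients c_i(0)

def coefficients(t):
    # c_i(t) = c_i(0) * exp(-i E_i t / hbar) in the Schrodinger picture
    return c0 * np.exp(-1j * E * t / hbar)

c_t = coefficients(1.0e-15)
print(np.sum(np.abs(c_t)**2))                 # stays 1: the evolution is unitary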

Wave function collapse

The physical meaning of the components of |ψ⟩ is given by the wave function collapse postulate. If the observable ε (momentum and/or spin, position and/or spin, etc.) corresponding to the states |εi⟩ has distinct and definite values λi, and a measurement of that variable is performed on a system in the state |ψ⟩, then the probability of measuring λi is |⟨εi|ψ⟩|2. If the measurement yields λi, the system "collapses" to the state |εi⟩, irreversibly and instantaneously.

Ontology

Main article: Interpretations of quantum mechanics

Whether the wave function really exists, and what it represents, are major questions in the interpretation of quantum mechanics. Many famous physicists of a previous generation puzzled over this problem, such as Schrödinger, Einstein and Bohr. Some advocate formulations or variants of the Copenhagen interpretation (e.g. Bohr, Wigner and von Neumann) while others, such as Wheeler or Jaynes, take the more classical approach[14] and regard the wave function as representing information in the mind of the observer, i.e. a measure of our knowledge of reality. Some, including Schrödinger, Einstein, Bohm and Everett and others, argued that the wave function must have an objective, physical existence. The latter argument is consistent with the fact that whenever two observers both think that a system is in a pure quantum state, they will always agree on exactly what state it is in (but this may not be true if one or both of them thinks the system is in a mixed state).[15] For more on this topic, see Interpretations of quantum mechanics.

Examples

Here are examples of wavefunctions for specific applications:

  • Free particle
  • Particle in a box
  • Finite square well
  • Delta potential
  • Quantum harmonic oscillator
  • Hydrogen atom and Hydrogen-like atom

See also

  • Boson
  • Double-slit experiment
  • Faraday wave
  • Fermion
  • Normalisable wave function
  • Schrödinger equation
  • Wave function collapse
  • Wave packet

Wave function collapse

From Wikipedia, the free encyclopedia

In quantum mechanics, wave function collapse is the phenomenon in which a wave function—initially in a superposition of several eigenstates—appears to reduce to a single eigenstate after interaction with an observer.[1] It is the essence of measurement in quantum mechanics, and connects the wave function with classical observables like position and momentum. Collapse is one of two processes by which quantum systems evolve in time; the other is continuous evolution via the Schrödinger equation.[2] However, in this role, collapse is merely a black box for thermodynamically irreversible interaction with a classical environment.[3] Calculations of quantum decoherence predict apparent wave function collapse when a superposition forms between the quantum system's states and the environment's states. Significantly, the combined wave function of the system and environment continues to obey the Schrödinger equation.[4]

When the Copenhagen interpretation was first expressed, Bohr postulated wave function collapse to cut the quantum world from the classical.[5] This tactical move allowed quantum theory to develop without distractions from interpretational worries. Nevertheless it was debated, for if collapse were a fundamental physical phenomenon, rather than just the epiphenomenon of some other process, it would mean that nature is fundamentally stochastic, i.e. nondeterministic, an undesirable property for a theory.[3][6] This issue remained until quantum decoherence entered mainstream opinion after its reformulation in the 1980s.[3][4][7] Decoherence explains the perception of wave function collapse in terms of interacting large- and small-scale quantum systems, and is commonly taught at the graduate level (e.g. the Cohen-Tannoudji textbook).[8] The quantum filtering approach[9][10][11] and the introduction of the quantum causality non-demolition principle[12] allow for a classical-environment derivation of wave function collapse from the stochastic Schrödinger equation.


Mathematical description

Before collapse, the wave function may be any square-integrable function. This function is expressible as a linear combination of the eigenstates of any observable. Observables represent classical dynamical variables, and when one is measured by a classical observer, the wave function is projected onto a random eigenstate of that observable. The observer simultaneously measures the classical value of that observable to be the eigenvalue of the final state.[1]

Mathematical background

For an explanation of the notation used, see Bra–ket notation. For details on this formalism, see quantum state.

The quantum state of a physical system is described by a wave function (in turn, an element of a projective Hilbert space). This can be expressed in Dirac or bra-ket notation as a vector:

 | \psi \rangle = \sum_i c_i | \phi_i \rangle .

The kets | \phi_1 \rangle, | \phi_2 \rangle, | \phi_3 \rangle, \cdots specify the different quantum "alternatives" available, each a particular quantum state. They form an orthonormal eigenvector basis; formally,

\langle \phi_i | \phi_j \rangle = \delta_{ij}.

where \delta_{ij} is the Kronecker delta.

An observable (i.e. a measurable parameter of the system) is associated with each eigenbasis, with each quantum alternative having a specific value or eigenvalue, ei, of the observable. A "measurable parameter of the system" could be the usual position r and the momentum p of (say) a particle, but also its energy E, the z-components of spin (sz), orbital (Lz) and total angular (Jz) momenta, etc. In the basis representation these are respectively | \mathbf{r},t \rangle, | \mathbf{p},t \rangle, | E \rangle, | s_z \rangle, | L_z \rangle, | J_z \rangle, \cdots.

The coefficients c1, c2, c3, ... are the probability amplitudes corresponding to the basis kets | \phi_1 \rangle, | \phi_2 \rangle, | \phi_3 \rangle, \cdots. These are complex numbers. The modulus squared of ci, that is |ci|2 = ci*ci (where * denotes the complex conjugate), is the probability of measuring the system to be in the state | \phi_i \rangle.

For simplicity in the following, all wave functions are assumed to be normalized; the total probability of measuring all possible states is unity:

\langle \psi|\psi \rangle = \sum_i |c_i|^2 = 1.

The process of collapse

With these definitions it is easy to describe the process of collapse. For any observable, the wave function is initially some linear combination of the eigenbasis \{ |\phi_i\rangle \} of that observable. When an external agency (an observer, experimenter) measures the observable associated with the eigenbasis \{| \phi_i \rangle\}, the wave function collapses from the full | \psi \rangle to just one of the basis eigenstates, | \phi_i \rangle, that is:

|\psi\rangle \rightarrow  |\phi_i\rangle.

The probability of collapsing to a given eigenstate | \phi_k \rangle is the Born probability, P_k = | c_k |^2. Immediately post-measurement, the other elements of the coefficient vector, c_{i \neq k}, have "collapsed" to zero, and |c_k| = 1.

More generally, collapse is defined for an operator \hat{Q} with eigenbasis \{|\phi_i\rangle\}. If the system is in state |\psi\rangle and \hat{Q} is measured, the probability of collapsing the system to the state |\phi_i\rangle (and measuring the corresponding eigenvalue) is |\langle\phi_i|\psi\rangle|^2. Note that this is not the probability that the particle was already in state |\phi_i\rangle; it is in state |\psi\rangle until it is cast onto an eigenstate of \hat{Q}.
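
A small Monte Carlo sketch of this rule, with invented coefficients: the outcome is drawn with the Born probabilities and the coefficient vector is then collapsed onto the chosen basis state.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients of |psi> in the eigenbasis {|phi_i>} of the measured observable.
c = np.array([0.6, 0.8j, 0.0])
c = c / np.linalg.norm(c)

p = np.abs(c)**2                       # Born probabilities |<phi_i|psi>|^2
k = rng.choice(len(c), p=p)            # outcome of one measurement

c_after = np.zeros_like(c)
c_after[k] = 1.0                       # the state has collapsed to |phi_k>
print(k, p, c_after)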

However, we never observe collapse to a single eigenstate of a continuous-spectrum operator (e.g. position, momentum, or a scattering Hamiltonian), because such eigenfunctions are non-normalizable. In these cases, the wave function will partially collapse to a linear combination of "close" eigenstates (necessarily involving a spread in eigenvalues) that embodies the imprecision of the measurement apparatus. The more precise the measurement, the tighter the range. Calculation of probability proceeds identically, except that the probability is obtained by integrating |c(q, t)|^2 dq over the relevant range of eigenvalues.[13] This phenomenon is unrelated to the uncertainty principle, although increasingly precise measurements of one operator (e.g. position) will naturally homogenize the expansion coefficient of the wave function with respect to another, incompatible operator (e.g. momentum), lowering the probability of measuring any particular value of the latter.
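
One common way to model such a partial collapse (a sketch under the assumption of a Gaussian detector-resolution window, not a prescription taken from this article) is to multiply the position-space wavefunction by a window of finite width around the measured value and renormalize:

import numpy as np

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 8.0) * np.exp(1j * 2.0 * x)       # hypothetical broad wave packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

x_meas, sigma = 1.5, 0.3                               # measured value and detector resolution
window = np.exp(-(x - x_meas)**2 / (4.0 * sigma**2))   # Gaussian acceptance of the apparatus

psi_after = psi * window                               # partial collapse onto "close" position states
psi_after /= np.sqrt(np.sum(np.abs(psi_after)**2) * dx)
print(np.sum(x * np.abs(psi_after)**2) * dx)           # mean position now sits near x_meas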

Quantum decoherence

Main article: Quantum decoherence#Mathematical details

Wave function collapse is not fundamental from the perspective of quantum decoherence.[14] There are several equivalent approaches to deriving collapse, like the density matrix approach, but each has the same effect: decoherence irreversibly converts the "averaged" or "environmentally traced over" density matrix from a pure state to a reduced mixture, giving the appearance of wave function collapse.

History and context

The concept of wavefunction collapse was introduced by Werner Heisenberg in his 1927 paper on the uncertainty principle, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", and incorporated into the mathematical formulation of quantum mechanics by John von Neumann in his 1932 treatise Mathematische Grundlagen der Quantenmechanik.[15] Consistent with Heisenberg, von Neumann postulated that there were two processes of wave function change:

  1. The probabilistic, non-unitary, non-local, discontinuous change brought about by observation and measurement, as outlined above.
  2. The deterministic, unitary, continuous time evolution of an isolated system that obeys the Schrödinger equation (or a relativistic equivalent, i.e. the Dirac equation).

In general, quantum systems exist in superpositions of those basis states that most closely correspond to classical descriptions, and, in the absence of measurement, evolve according to the Schrödinger equation. However, when a measurement is made, the wave function collapses—from an observer's perspective—to just one of the basis states, and the property being measured uniquely acquires the eigenvalue of that particular state, \lambda_i. After the collapse, the system again evolves according to the Schrödinger equation.

By explicitly dealing with the interaction of object and measuring instrument, von Neumann[2] attempted to demonstrate the consistency of the two processes of wave function change.

He was able to prove the possibility of a quantum mechanical measurement scheme consistent with wave function collapse. However, he did not prove the necessity of such a collapse. Although von Neumann's projection postulate is often presented as a normative description of quantum measurement, it was conceived by taking into account experimental evidence available during the 1930s (in particular the Compton-Simon experiment has been paradigmatic), and many important present-day measurement procedures do not satisfy it (so-called measurements of the second kind).[16][17][18]

The existence of the wave function collapse is required in

  • the Copenhagen interpretation
  • the objective collapse interpretations
  • the transactional interpretation
  • the von Neumann interpretation in which consciousness causes collapse.

On the other hand, the collapse is considered a redundant or optional approximation in

  • the Consistent histories approach, self-dubbed "Copenhagen done right"
  • the Bohm interpretation
  • the Many-worlds interpretation
  • the Ensemble Interpretation

The cluster of phenomena described by the expression wave function collapse is a fundamental problem in the interpretation of quantum mechanics, and is known as the measurement problem. The problem is deflected by the Copenhagen Interpretation, which postulates that this is a special characteristic of the "measurement" process. Everett's many-worlds interpretation deals with it by discarding the collapse-process, thus reformulating the relation between measurement apparatus and system in such a way that the linear laws of quantum mechanics are universally valid; that is, the only process according to which a quantum system evolves is governed by the Schrödinger equation or some relativistic equivalent.

Originating from Everett's theory, but no longer tied to it, is the physical process of decoherence, which causes an apparent collapse. Decoherence is also important for the consistent histories interpretation. A general description of the evolution of quantum mechanical systems is possible by using density operators and quantum operations. In this formalism (which is closely related to the C*-algebraic formalism) the collapse of the wave function corresponds to a non-unitary quantum operation.

The significance ascribed to the wave function varies from interpretation to interpretation, and varies even within an interpretation (such as the Copenhagen Interpretation). If the wave function merely encodes an observer's knowledge of the universe then the wave function collapse corresponds to the receipt of new information. This is somewhat analogous to the situation in classical physics, except that the classical "wave function" does not necessarily obey a wave equation. If the wave function is physically real, in some sense and to some extent, then the collapse of the wave function is also seen as a real process, to the same extent.

See also

  • Arrow of time
  • Interpretation of quantum mechanics
  • Quantum decoherence
  • Quantum interference
  • Schrödinger's cat
  • Zeno effect

Quantum atom

The Quantum Atom



Common experience tells us that the behavior of waves is much different from the behavior of particles. Wave phenomena have many common examples, but all waves share some common features. Waves have a frequency, a wavelength, a wave velocity, and an amplitude, which may be examined in the following figure:


For a given type of wave in a given medium, the wavelength λ and the frequency ν can be related to the speed of propagation of the wave (wave velocity) as follows:

λ ν = c


For Light (electromagnetic waves) travelling in a vacuum, this speed of propagation is mighty quick: 2.99792 x 10^8 m/s.
Light is just one portion (one range of frequencies) of the EM spectrum, which spans vastly diverse types of radiation:


A device that separates light by its frequency is said to 'disperse' the light. Prisms and raindrops disperse light by refraction, gratings and holograms by diffraction.

Newton and Einstein both thought that light, although a wave, can also have some properties of a particle. In fact, they were right. Light actually is a stream of particles called photons. The amplitude of the light can be related to the number of photons in a given volume, and the energy of each photon is related to the frequency of the light:

E = h ν

where h = Planck's constant = 6.626 x 10^-34 J·s. When light strikes matter, in particular a molecule, the entire energy of the photon must be absorbed or emitted. Thus the color of the light that interacts with a particular piece of matter tells you about the change in energy that is possible in that matter.
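
As a quick numerical check of these two relations, for a hypothetical green photon of wavelength 500 nm:

h = 6.626e-34          # J s, Planck's constant
c = 2.99792e8          # m/s, speed of light

wavelength = 500e-9    # a green photon, chosen only for illustration
nu = c / wavelength    # frequency from  lambda * nu = c
E = h * nu             # photon energy from  E = h * nu
print(nu, E)           # ~6.0e14 Hz and ~4.0e-19 J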

If Light has properties of a particle, surely particles have properties of a wave. De Broglie showed that this is in fact the case, in the first equation relating wave and particle properties:

p = h / λ
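
For instance, a rough estimate of the de Broglie wavelength of an electron moving at an illustrative (non-relativistic) speed of 10^6 m/s:

h = 6.626e-34          # J s, Planck's constant
m_e = 9.109e-31        # kg, electron mass
v = 1.0e6              # m/s, an illustrative electron speed

p = m_e * v
wavelength = h / p     # de Broglie relation  lambda = h / p
print(wavelength)      # ~7.3e-10 m, roughly the size of an atom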

When one confines a wave to a particular region of space, the edges of the containment place a constraint on the wavelength due to 'boundary conditions'. This is how you play different notes on the same guitar string by moving the position in which you make the wave displacement zero, i.e. where your finger touches the fret.


Note that the 'boundary conditions' can be satisfied by many different waves (called harmonics) if each of those waves has a position of zero displacement at the right place. These positions where the value of the wave is zero are called nodes.
(Sometimes we distinguish two types of waves, travelling waves and standing waves, by whether the nodes of the wave move or not. Our discussion of the atom will pretty much rely on the standing wave picture of the electron.)
If electrons are waves, then the wavelength of the electron must 'fit' into any orbit that it makes around the nucleus in an atom. This is the 'boundary condition' for a one-electron atom. Any orbit into which the electron's wavelength does not 'fit' is not possible, because wave interference would rapidly destroy the wave amplitude and the electron would no longer exist. This 'interference' effect leads to discrete (quantized) energy levels for the atom. Since light interacts with the atom by causing a transition between these levels, the color (spectrum) of the atom is observed to be a series of sharp lines. For the hydrogen atom:
This pattern is described by discrete energy levels of the atom that have energy inversely proportional to the square of the number of waves in the orbit. We will call this number n, which is an integer from one to infinity. This integer, n, is more precisely described as the number of nodes in the wavefunction of the electron plus one.
E_n = -Z^2 R_H / n^2

This equation works for all one-electron atoms, not just hydrogen. (Here is a more complete derivation of the Bohr Atom's Properties. Danger: Advanced). This is precisely the pattern of energy levels that are observed to exist in the Hydrogen atom. Transitions between these levels give the pattern in the absorption or emission spectrum of the atom.


The frequency of the transitions between the energy levels should be given by

h ν = ΔE = E_n2 - E_n1 = -Z^2 R_H (1/n2^2 - 1/n1^2)

where the constant R_H has the value 2.180 x 10^-18 J.

A Note on Signs:
Obviously the change in energy of an atom can be either positive or negative depending on whether energy is absorbed by the atom from the light field or emitted by the atom to its surroundings. Yet frequencies (or wavelengths) of light MUST be positive numbers. Thus, in the equation above, n2 must be greater than n1 for the resulting frequency to be a positive number. Yet, if the initial principal quantum number of the atom (n_initial) is smaller than the final principal quantum number (n_final), photon absorption has taken place, and if n_final is smaller than n_initial, photon emission has occurred.

To represent the observed spectra of one-electron atoms using the above energy spacing, it is useful to relate the energy of the photon to its wavelength through E = h ν and λ ν = c:


(1/λ) = -Z^2 (R_H / hc) (1/n2^2 - 1/n1^2) = -Z^2 R_λ (1/n2^2 - 1/n1^2)

where R_λ is called the Rydberg constant and has been precisely measured as 1.096776 x 10^7 m^-1.
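
As a sketch of how these expressions are used, the visible (Balmer) emission lines of hydrogen can be estimated from the Rydberg formula above, taking Z = 1 and the magnitude of the right-hand side:

R = 1.096776e7         # m^-1, the Rydberg constant for hydrogen (R_lambda above)

# Emission wavelengths of the Balmer series (transitions ending on n1 = 2),
# using |1/lambda| = R * (1/n1^2 - 1/n2^2) for hydrogen (Z = 1).
n1 = 2
for n2 in (3, 4, 5):
    inv_lam = R * (1.0 / n1**2 - 1.0 / n2**2)
    print(n2, "->", n1, ":", 1.0 / inv_lam * 1e9, "nm")   # ~656, ~486, ~434 nm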

The energy patterns of atoms give the elements their characteristic 'flame' colors because the light they emit when heated has specific photon energy:


We know, too, that sodium is yellow (streetlights), neon is red (neon signs), etc.
The wave nature of the electron is what makes atoms have the properties that they do. It explains the colors of the atoms and also their size. It also means that we cannot think about the electron in an atom as a little ball whirling about the nucleus, but as a cloud of probability that is smeared out over the orbit.

This cloud is only spherical for the lowest energy level of the atom. As the energy of the electron in the atom increases, its wavelength decreases, and the number of times the wave amplitude crosses zero per orbit increases. Again, these zero crossings are called nodes. The number of nodes is related to the frequency of the wave and therefore its energy.
The greater the number of nodes, the greater the energy of the system.
The lowest level of the H atom has no (zero) nodes; the next higher level has 1 node, and that node can be either an angular node or a radial node. If it is an angular node, then you have a 2p orbital.

If you have a radial node, then you have a 2s orbital (3s shown also)

The number of nodes determines the energy. The principal quantum number, n, is equal to the number of nodes plus 1, i.e. nodes = n-1. For a hydrogen atom, the energy it takes to make a radial node is equal to the energy it takes to make an angular node.

For higher n, you can have a greater number of nodes. For n >= 3, you can have 2 angular nodes, and these are called d orbitals.
Here are some more pictures of the atomic orbital shapes.

The following shows the angular nodes for a 2p orbital:

Similarly for a 3d orbital (with two angular nodes):

In short, the energy of the atom is determined by the number of nodes which is related to the principal quantum number n by: nodes = n-1.
The number of angular nodes is labelled by a letter (s, p, d, f, g, h, i, ....)

s: no angular nodes
p: one angular node
d: two angular nodes
f: three angular nodes
etc...
Note: The number of radial nodes is the total number of nodes minus the number of angular nodes.
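
This node bookkeeping is easy to mechanize; here is a tiny illustrative helper (the function and dictionary names are invented for the example):

# Node bookkeeping for one-electron orbitals: total nodes = n - 1, angular nodes = 0, 1, 2, 3, ...
# for s, p, d, f, ..., and radial nodes = (n - 1) minus the angular nodes.
ANGULAR_NODES = {"s": 0, "p": 1, "d": 2, "f": 3, "g": 4}

def nodes(n, letter):
    ang = ANGULAR_NODES[letter]
    total = n - 1
    return {"total": total, "angular": ang, "radial": total - ang}

print(nodes(2, "p"))   # {'total': 1, 'angular': 1, 'radial': 0}
print(nodes(3, "s"))   # {'total': 2, 'angular': 0, 'radial': 2}
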
PJ Brucat || University of Florida

Quantum mechanics

Intro to Quantum Mechanics

This page is intended to give an ordinary person a brief overview of the importance and wonder of quantum mechanics. Unfortunately, most people believe you need the mind of Einstein in order to understand QM, so they give up on it entirely. (Interesting side note: Einstein didn't believe QM was a correct theory!) Even some chemists fall into that category: to represent physical chemistry, our departmental T-shirts have a picture of the atom below, which is almost a century out of date. <Sigh>
So please read on, and take a dip in an ocean of information that I find completely invigorating!

[Image: Old atom]

If the above picture is your idea of an atom, with electrons looping around the nucleus, you are about 70 years out of date. It's time to open your eyes to the modern world of quantum mechanics! The picture below shows some plots of where you would most likely find an electron in a hydrogen atom (the nucleus is at the center of each plot).

[Image: Hydrogen electron orbitals]

What is quantum mechanics?

Simply put, quantum mechanics is the study of matter and radiation at an atomic level.

Why was quantum mechanics developed?

In the early 20th century some experiments produced results which could not be explained by classical physics (the science developed by Galileo Galilei, Isaac Newton, etc.). For instance, it was well known that electrons orbited the nucleus of an atom. However, if they did so in a manner which resembled the planets orbiting the sun, classical physics predicted that the electrons would spiral in and crash into the nucleus within a fraction of a second. Obviously that doesn't happen, or life as we know it would not exist. (Chemistry depends upon the interaction of the electrons in atoms, and life depends upon chemistry). That incorrect prediction, along with some other experiments that classical physics could not explain, showed scientists that something new was needed to explain science at the atomic level.

If classical physics is wrong, why do we still use it?

Classical physics is a flawed theory, but it is only dramatically flawed when dealing with the very small (atomic size, where quantum mechanics is used) or the very fast (near the speed of light, where relativity takes over). For everyday things, which are much larger than atoms and much slower than the speed of light, classical physics does an excellent job. Plus, it is much easier to use than either quantum mechanics or relativity (each of which requires an extensive amount of math).

What is the importance of quantum mechanics?

The following are among the most important things which quantum mechanics can describe while classical physics cannot:

  • Discreteness of energy
  • The wave-particle duality of light and matter
  • Quantum tunneling
  • The Heisenberg uncertainty principle
  • Spin of a particle

Discreteness of energy

If you look at the spectrum of light emitted by energetic atoms (such as the orange-yellow light from sodium vapor street lights, or the blue-white light from mercury vapor lamps) you will notice that it is composed of individual lines of different colors. These lines represent the discrete energy levels of the electrons in those excited atoms. When an electron in a high energy state jumps down to a lower one, the atom emits a photon of light which corresponds to the exact energy difference of those two levels (conservation of energy). The bigger the energy difference, the more energetic the photon will be, and the closer its color will be to the violet end of the spectrum. If electrons were not restricted to discrete energy levels, the spectrum from an excited atom would be a continuous spread of colors from red to violet with no individual lines.

[Image: Emission spectra]

The concept of discrete energy levels can be demonstrated with a 3-way light bulb. A 40/75/115 watt bulb can only shine light at those three wattages, and when you switch from one setting to the next, the power immediately jumps to the new setting instead of just gradually increasing.

It is the fact that electrons can only exist at discrete energy levels which prevents them from spiraling into the nucleus, as classical physics predicts. And it is this quantization of energy, along with some other atomic properties that are quantized, which gives quantum mechanics its name.

The wave-particle duality of light and matter

In 1690 Christiaan Huygens theorized that light was composed of waves, while in 1704 Isaac Newton explained that light was made of tiny particles. Experiments supported each of their theories. However, neither a completely-particle theory nor a completely-wave theory could explain all of the phenomena associated with light! So scientists began to think of light as both a particle and a wave. In 1923 Louis de Broglie hypothesized that a material particle could also exhibit wavelike properties, and in 1927 it was shown (by Davisson and Germer) that electrons can indeed behave like waves.

How can something be both a particle and a wave at the same time? For one thing, it is incorrect to think of light as a stream of particles moving up and down in a wavelike manner. Actually, light and matter exist as particles; what behaves like a wave is the probability of where that particle will be. The reason light sometimes appears to act as a wave is because we are noticing the accumulation of many of the light particles distributed over the probabilities of where each particle could be.

For instance, suppose we had a dart-throwing machine that had a 5% chance of hitting the bulls-eye and a 95% chance of hitting the outer ring and no chance of hitting any other place on the dart board. Now, suppose we let the machine throw 100 darts, keeping all of them stuck in the board. We can see each individual dart (so we know they behave like a particle) but we can also see a pattern on the board of a large ring of darts surrounding a small cluster in the middle. This pattern is the accumulation of the individual darts over the probabilities of where each dart could have landed, and represents the 'wavelike' behavior of the darts. Get it?

Quantum tunneling

This is one of the most interesting phenomena to arise from quantum mechanics; without it computer chips would not exist, and a 'personal' computer would probably take up an entire room. As stated above, a wave determines the probability of where a particle will be. When that probability wave encounters an energy barrier most of the wave will be reflected back, but a small portion of it will 'leak' into the barrier. If the barrier is small enough, the wave that leaked through will continue on the other side of it. Even though the particle doesn't have enough energy to get over the barrier, there is still a small probability that it can 'tunnel' through it!

Let's say you are throwing a rubber ball against a wall. You know you don't have enough energy to throw it through the wall, so you always expect it to bounce back. Quantum mechanics, however, says that there is a small probability that the ball could go right through the wall (without damaging the wall) and continue its flight on the other side! With something as large as a rubber ball, though, that probability is so small that you could throw the ball for billions of years and never see it go through the wall. But with something as tiny as an electron, tunneling is an everyday occurrence.

On the flip side of tunneling, when a particle encounters a drop in energy there is a small probability that it will be reflected. In other words, if you were rolling a marble off a flat level table, there is a small chance that when the marble reached the edge it would bounce back instead of dropping to the floor! Again, for something as large as a marble you'll probably never see something like that happen, but for photons (the massless particles of light) it is a very real occurrence.
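
To get a feel for the numbers, here is a rough thick-barrier estimate of the tunneling probability, T ≈ exp(-2 κ L) with κ = sqrt(2 m (V0 - E)) / ħ; the particle energy, barrier height, and width below are invented for illustration:

import math

hbar = 1.0546e-34      # J s
m_e = 9.109e-31        # kg, electron mass
eV = 1.602e-19         # J per electronvolt

E  = 1.0 * eV          # electron energy
V0 = 2.0 * eV          # barrier height (greater than E, so classically forbidden)
L  = 1.0e-9            # barrier width: 1 nm

kappa = math.sqrt(2.0 * m_e * (V0 - E)) / hbar   # decay constant inside the barrier
T = math.exp(-2.0 * kappa * L)                   # thick-barrier estimate of the tunneling probability
print(T)                                         # small (~4e-5) but decidedly nonzero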

The Heisenberg uncertainty principle

People are familiar with measuring things in the macroscopic world around them. Someone pulls out a tape measure and determines the length of a table. A state trooper aims his radar gun at a car and knows what direction the car is traveling, as well as how fast. They get the information they want and don't worry whether the measurement itself has changed what they were measuring. After all, what would be the sense in determining that a table is 80 cm long if the very act of measuring it changed its length!

At the atomic scale of quantum mechanics, however, measurement becomes a very delicate process. Let's say you want to find out where an electron is and where it is going (that trooper has a feeling that any electron he catches will be going faster than the local speed limit). How would you do it? Get a super high powered magnifier and look for it? The very act of looking depends upon light, which is made of photons, and these photons could have enough momentum that once they hit the electron they would change its course! It's like rolling the cue ball across a billiard table and trying to discover where it is going by bouncing the 8-ball off of it; by making the measurement with the 8-ball you have certainly altered the course of the cue ball. You may have discovered where the cue ball was, but now have no idea of where it is going (because you were measuring with the 8-ball instead of actually looking at the table).

Werner Heisenberg was the first to realize that certain pairs of measurements have an intrinsic uncertainty associated with them. For instance, if you have a very good idea of where something is located, then, to a certain degree, you must have a poor idea of how fast it is moving or in what direction. We don't notice this in everyday life because any inherent uncertainty from Heisenberg's principle is well within the acceptable accuracy we desire. For example, you may see a parked car and think you know exactly where it is and exactly how fast it is moving. But would you really know those things exactly? If you were to measure the position of the car to an accuracy of a billionth of a billionth of a centimeter, you would be trying to measure the positions of the individual atoms which make up the car, and those atoms would be jiggling around just because the temperature of the car was above absolute zero!

Heisenberg's uncertainty principle completely flies in the face of classical physics. After all, the very foundation of science is the ability to measure things accurately, and now quantum mechanics is saying that it's impossible to get those measurements exact! But the Heisenberg uncertainty principle is a fact of nature, and it would be impossible to build a measuring device which could get around it.
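
A back-of-the-envelope illustration of the relation Δx Δp ≥ ħ/2, for an electron whose position is pinned down to about one atomic radius (the numbers are chosen only for illustration):

hbar = 1.0546e-34      # J s
m_e = 9.109e-31        # kg, electron mass

dx = 1.0e-10                  # position known to about one atomic radius (0.1 nm)
dp_min = hbar / (2.0 * dx)    # minimum momentum spread from  dx * dp >= hbar / 2
dv_min = dp_min / m_e         # corresponding spread in velocity

print(dp_min, dv_min)         # ~5e-25 kg m/s, i.e. a velocity spread of ~6e5 m/s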

Spin of a particle

In 1922 Otto Stern and Walther Gerlach performed an experiment whose results could not be explained by classical physics. Their experiment indicated that atomic particles possess an intrinsic angular momentum, or spin, and that this spin is quantized (that is, it can only have certain discrete values). Spin is a completely quantum mechanical property of a particle and cannot be explained in any way by classical physics.

It is important to realize that the spin of an atomic particle is not a measure of how it is spinning! In fact, it is impossible to tell whether something as small as an electron is spinning at all! The word 'spin' is just a convenient way of talking about the intrinsic angular momentum of a particle.

Magnetic resonance imaging (MRI) uses the fact that under certain conditions the spin of hydrogen nuclei can be 'flipped' from one state to another. By measuring the location of these flips, a picture can be formed of where the hydrogen atoms (mainly as a part of water) are in a body. Since tumors tend to have a different water concentration from the surrounding tissue, they would stand out in such a picture.

What is the Schrödinger equation?

Every quantum particle is characterized by a wave function. In 1925 Erwin Schrödinger developed the differential equation which describes the evolution of those wave functions. By using Schrödinger's equation scientists can find the wave function which solves a particular problem in quantum mechanics. Unfortunately, it is usually impossible to find an exact solution to the equation, so certain assumptions are used in order to obtain an approximate answer for the particular problem.

[Image: Schrödinger equation]

What is a wave packet?

As mentioned earlier, the Schrödinger equation for a particular problem cannot always be solved exactly. However, when there is no force acting upon a particle its potential energy is zero and the Schrödinger equation for the particle can be exactly solved. The solution to this 'free' particle is something known as a wave packet (which initially looks just like a Gaussian bell curve). Wave packets, therefore, can provide a useful way to find approximate solutions to problems which otherwise could not be easily solved.

First, a wave packet is assumed to initially describe the particle under study. Then, when the particle encounters a force (so its potential energy is no longer zero), that force modifies the wave packet. The trick, of course, is to find accurate (and quick!) ways to 'propagate' the wave packet so that it still represents the particle at a later point in time. Finding such propagation techniques, and applying them to useful problems, is the topic of my research.
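
As a small illustration of how a free wave packet behaves (using the standard result for the width of a spreading Gaussian packet, not one of the propagation techniques mentioned above), the width grows with time like this:

import math

hbar = 1.0546e-34      # J s
m_e = 9.109e-31        # kg, electron mass

def sigma(t, sigma0, m):
    # Width of a free Gaussian wave packet: sigma(t) = sigma0 * sqrt(1 + (hbar t / (2 m sigma0^2))^2)
    return sigma0 * math.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2))**2)

sigma0 = 1.0e-10                            # an electron initially localized to ~1 angstrom
for t in (0.0, 1.0e-16, 1.0e-15):
    print(t, sigma(t, sigma0, m_e))         # the packet spreads rapidly as time goes on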






Written by Todd Stedl (tstedl@quantumintro.com).
