- Historical overview
- Observations and experimental limits
- What did we learn?
- Quantum mechanics
- The standard model of particle physics
- Conclusions
- References and further reading

Since the early days of mankind, philosophers have speculated about the smallest components that matter may be made of. They intuitively assumed that this question makes sense, i.e. that there *is* such a thing as the “smallest scale”. However, answering this question is beyond the scope of science itself and rather belongs to the realm of philosophy, so let us not touch on it again in this introduction.

The idea of tiny, indivisible units dates back to the ancient Greeks, who coined the word “atom” (= indivisible unit). The first known proponent of this idea was Leucippus (5th century BC). Democritus (also 5th century BC) refined the theory by attributing different properties to atoms, and Epicurus (3rd century BC) later followed with an atom-based theory of creation.

In medieval times the atomic theory played no role, but with the advent of the natural sciences interest in these ideas was revived. Descartes (1596-1650) formulated a corpuscular model to explain propagation in the vacuum. Newton (1643-1727) came up with a different idea: that atoms might be carriers of interactions. Atoms finally had their breakthrough when they were used to describe observable phenomena: in the kinetic theory of gases by D. Bernoulli in 1738, in chemistry by Proust, Dalton, Avogadro and Berzelius, and in electrodynamics by Maxwell, Lenard, Thomson and Rutherford during the 19th century.

According to our modern view, ordinary matter is indeed built up of atoms. These atoms, however, are not truly elementary, but are themselves built out of many different particles on a smaller scale. The particles on the smallest scale we are aware of today do not seem to exhibit any substructure and are thus assumed to be point-like.

However, although the goal of the modern sciences is not to give an explanation of the building blocks of matter (as ancient philosophy did) but rather a description, there are applications of theories of particle physics to philosophy. When discussing these topics we again have to remember that such considerations cannot be settled by scientific means, and we shall not discuss them in this context.

In order to examine the structure of matter we have to use technical instruments that go beyond the limits of our senses. The first such invention was the optical microscope, which uses light to probe the structure of matter. However, it only works for structures larger than the wavelength of the light. In order to use light for magnifying objects we furthermore have to find a way to focus the beam. This works in the optical and ultraviolet range, but is not really competitive for shorter wavelengths (like X-rays or even γ-radiation).

To overcome the limits of photons as probes of structure, one instead uses other particles like electrons, whose wavelength can be reduced in a controlled way by accelerating them. This principle is used in electron microscopes. Together with the scanning tunneling microscope, these are the methods of choice for examining structures on the atomic and molecular level.
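The connection between acceleration and resolving power is de Broglie's relation λ = h/p. As a minimal sketch (my own illustration, not from the original text; constants rounded, non-relativistic kinematics assumed):

```python
# De Broglie wavelength of an electron accelerated through a voltage U:
# lambda = h / p, with kinetic energy e*U = p^2 / (2*m_e)  (non-relativistic)
import math

H = 6.626e-34      # Planck constant in J*s
M_E = 9.109e-31    # electron mass in kg
E = 1.602e-19      # elementary charge in C

def de_broglie_wavelength(voltage):
    """Wavelength in meters of an electron accelerated through `voltage` volts."""
    momentum = math.sqrt(2.0 * M_E * E * voltage)
    return H / momentum

# At only 100 V the wavelength is already around 0.12 nm -- atomic dimensions,
# far below the ~400 nm lower limit of visible light.
print(de_broglie_wavelength(100.0))
```

Raising the voltage shrinks the wavelength further, which is why higher energies probe smaller structures.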

If one wants to go deeper inside matter and examine even smaller structures, one has to accelerate the probing particles even further. This is achieved in a particle collider; the most modern and advanced machines today have diameters of several miles.

There are different types of colliders, but the basic mechanism is the same: you crash two sorts of particles into each other and look at what comes out afterward. In most cases nothing will happen at all, but in a few cases you manage to see the substructure of the particles. In the early days of particle physics, examining these experiments was the job of diploma and PhD students and was not truly a fun exercise. Luckily, today we have computers which simplify the task significantly.

One lesson we learned early in the 20th century (from Schrödinger, Heisenberg et al.) was that physics on atomic scales works differently from physics on scales accessible to our ordinary senses.

One important experiment was the double-slit experiment, which works by sending a collinear beam of particles onto a double slit (in fact it works for any type of particle if we adjust the width of the slits and the distance between them to a critical, particle-type-dependent length scale). In the single-slit experiment we observe the following pattern:

Now let us open a second slit (and move the center of the drawing between the two slits). Then we observe instead:

This behavior would rather be expected for waves than for particles, since the wavy pattern looks like the interference of two waves. The lesson physicists learned from experiments of this type was that conventional mechanics could not account for these phenomena and that they had to come up with something different.

What I skipped (and hoped nobody would notice) is the fact that the patterns also depend on the width of the slit(s) and, in the second case, on the distance between them. This alters the shapes of the above figures, but does not change the conclusions.
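The dependence on slit width and spacing can be made quantitative with the standard textbook far-field formulas; a sketch (added here as an illustration, not part of the original text):

```python
# Far-field intensity patterns for single and double slit (Fraunhofer limit).
# The single-slit envelope is a squared sinc; opening a second slit modulates
# it with an interference factor cos^2 that the single slit does not have.
import math

def sinc2(x):
    """Squared sinc: (sin x / x)^2, with the limit 1 at x = 0."""
    return 1.0 if x == 0 else (math.sin(x) / x) ** 2

def single_slit(theta, a, lam):
    """Diffraction envelope of one slit of width a at angle theta."""
    return sinc2(math.pi * a * math.sin(theta) / lam)

def double_slit(theta, a, d, lam):
    """Two slits of width a whose centers are a distance d apart."""
    beta = math.pi * d * math.sin(theta) / lam
    return single_slit(theta, a, lam) * math.cos(beta) ** 2

# Interference minima appear where the path difference is half a wavelength,
# i.e. sin(theta) = lam / (2 d) -- a feature absent from the single slit.
lam, a, d = 1.0, 2.0, 10.0           # arbitrary units, d and a in units of lam
theta_min = math.asin(lam / (2.0 * d))
print(double_slit(theta_min, a, d, lam))   # essentially zero: destructive
print(single_slit(theta_min, a, lam))      # still bright for one slit
```

At that angle the single-slit envelope is near its maximum, while the two-slit pattern vanishes: exactly the extra "wavy" structure described above.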

First hints towards this theory were found by Planck in 1899, although nobody (not even Planck himself) took seriously the idea that the fundamental objects of radiation could be discrete entities. The first to take this idea seriously was Bohr in 1913, when he formulated his atomic model. However, the first complete formulation of the mathematical apparatus of quantum mechanics as we know it today was given in 1925/1926 by Schrödinger and Heisenberg.

Although this formulation was comparatively easy to use for actual calculations, it was conceptually extremely difficult to understand. In 1948 Feynman suggested a different formulation of quantum mechanics which was conceptually simpler, but more difficult to do calculations with. In principle both formulations are equivalent, and the practitioner will usually choose whichever is simpler for a given problem.

Here I am deliberately not doing justice to other quantization prescriptions which are also in use today. This is simply due to my own ignorance of the subject. The best I can do is ask you to refer to the literature if you are interested in these topics.

Over the years, more and more sophisticated models in quantum theory have been developed. Some have applications in industry, some have so far been used only in fundamental research, and some are purely mathematical objects living in their own ivory tower.

Basically what one has to do to formulate a quantum theory is the following:

- Start with a classical theory which might apply to the problem you want to solve (or which would apply on scales much larger than atomic scales)
- Apply a procedure called “quantization” (using either Schrödinger's or Feynman's approach)
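As a minimal concrete example of what quantization does (my own toy illustration, not from the original text): for the harmonic oscillator, the classical theory allows any energy E ≥ 0, while canonical quantization replaces this continuum by the discrete levels E_n = ħω(n + 1/2).

```python
# Energy levels of the canonically quantized harmonic oscillator:
# classically any energy E >= 0 is allowed; the quantum theory only
# permits the discrete spectrum E_n = hbar * omega * (n + 1/2).
HBAR = 1.054e-34   # reduced Planck constant in J*s (rounded)

def energy_level(n, omega):
    """Energy in joules of the n-th quantum state of an oscillator
    with angular frequency omega (n = 0, 1, 2, ...)."""
    if n < 0:
        raise ValueError("quantum number must be non-negative")
    return HBAR * omega * (n + 0.5)

# The spacing between neighboring levels is constant: hbar * omega.
omega = 1.0e15  # rad/s, a typical molecular vibration frequency
gap = energy_level(1, omega) - energy_level(0, omega)
print(gap)
```

Even the lowest state has non-zero energy (the "zero-point energy"), another feature with no classical counterpart.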

At this point one usually encounters a problem: the quantum theory which corresponds to a classical theory is not unique, i.e. for a given classical theory you may find many different quantum theories. Just remember the drawing above: opening the second slit caused interference between the two slits. This would not happen in a classical theory. So we find that the quantum theory contains *more* information than the classical theory, namely the correlations between different quantities, and we need to obtain this additional information somehow.

To find the “correct” quantum theory we therefore need additional input, such as experimental data. Often, however, one also resorts to general features which one believes a “good” model should have (especially if no experimental data are available yet).

However, there are also cases where this procedure fails. If the classical theory we have chosen as a starting point has a feature which is not compatible with our quantization prescription, we typically run into problems. A famous case where this happens is the attempt to quantize gravity. There is a lot of activity in this field, but at the time of this writing the problems have not been solved.

Another interesting question is how it is possible that a quantum theory exhibits such weird behavior at small scales, while we see no traces of this weirdness at larger scales.

In fact, one also has to prove that a given quantum theory reduces to a classical theory in a certain limit. This is rather complicated and requires the use of quantum mechanical objects called *coherent states*, together with certain mathematical expansions. I cannot explain them here, but again ask the interested reader to consult the literature.

With today's experimental methods and some neat quantum mechanical models, physicists have built a framework that explains almost all phenomena at length scales within experimental reach.

As already hinted above, the model which governs most fields of solid state physics, chemistry and thermodynamics is the atomic theory. However, once we move to smaller scales, we soon discover different physics with new interactions and new particles. Physicists have invented a classification scheme for these particles and a model which describes their behavior. They also invented beautiful names such as “charmonium”, “quarks”, “beauty meson” and “kaon”. This model is called the “Standard Model of particle physics” and was proposed by Glashow, Salam and Weinberg at the beginning of the 1970s. So far it has been able to describe all experimental data within the errors. However, recently some new findings have been reported which may hint that we still haven't understood everything.

However, there is no model which explains physics at *all* accessible scales. Rather, physicists use different models for different classes of applications. Although one has a rough idea of how to go from one regime to the other, few rigorous mathematical proofs are available, and ultimately we do not really know what happens if we go to length scales where the discrepancy between quantum mechanics and gravity may become important.

We have seen that physics has developed significantly over the last 100 years. We have had to revise our models of matter several times. We have been able to construct a highly successful model of particles which is accurate in almost all experiments performed so far. However, there is no reason to believe that our quest to find the “smallest” structure will ever succeed, since we have no way to make sure that there is not a yet smaller substructure which is simply too small for current experiments to resolve.

But as an old piece of wisdom teaches us: even if we cannot solve a problem completely, we may still learn so much along the way that it was worth it nonetheless!

For the historical overview I consulted:

**Meyers Neues Lexikon, Bibliographisches Institut Mannheim/Wien/Zürich 1978**

For an introduction to quantum mechanics the book to read is: **R.P. Feynman, QED – The Strange Theory of Light and Matter, Princeton Univ. Press 1985**

The German translation is: **R.P. Feynman, QED – Die seltsame Theorie des Lichts und der Materie, R. Piper GmbH & Co KG, München 1988**

A well-written and interesting book by Christine Sutton can be read at any level: **C. Sutton, Spaceship Neutrino, Cambridge Univ. Press 1992**

The German translation is: **C. Sutton, Raumschiff Neutrino. Die Geschichte eines seltsamen Elementarteilchens, Birkhäuser Verlag, Basel 1994**

If you want to go deeper into the matter and are looking for a mathematical and technical book instead, you should have a look at: **A. Peres, Quantum Theory: Concepts & Methods, Kluwer Academic Publishers 1993**

One of the classics is: **Messiah, Quantenmechanik I, Walter de Gruyter 1976**

I don't know about an English translation though.

There is information of general interest aimed at beginning physics students in: **M. Bormann, Experimentalphysik Bd. 3 (Optik, Atomphysik), Universitätsverlag Dr. Brockmeyer 1991**

If you are a physicist with interest in mathematics or a mathematician with interest in physics, you shouldn't miss: **J. Baez, This Week's Finds in Mathematical Physics, http://math.ucr.edu/home/baez/twfshort.html**

Imprint / Impressum

© 1997-2013 Dr. Wolfram Schroers. This site is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.

Additional permissions available at http://www.field-theory.org/editorial/index.html.

See the Imprint or Impressum (in German) for further information.
