Over the past decades, physics, astronomy, and cosmology have seen amazing advances driven by improvements in modeling, simulation, and data analysis algorithms.
The enormous and complex data sets generated by modern sky surveys and international physics experiments like the Large Hadron Collider are sifted, analyzed, stored, and fed into simulations that have become virtual experimental laboratories themselves.
In the safe confines of a simulation, it’s possible to explode enormous supernovae, slow down and observe the dance of subatomic particles, or even to time travel, rewinding and replaying the evolution of our universe from the Big Bang through today and into the distant future.
Simulations also offer the only way for humans to perceive the stuff that makes up most of the universe: dark matter.
Thanks to a high-speed data pipeline, image analysis software, and machine learning algorithms, the Palomar Transient Factory identified two supernova firsts in a single year: an early-stage supernova and a “progenitor” star system that later became a supernova.
Both finds let astronomers train their telescopes on supernovae as they unfolded, rather than catching them only at peak brightness, or missing them entirely.
A new visualization technique offers a stunningly detailed look at the distribution of dark matter in the universe.
Dark matter makes up about 85 percent of the matter in our universe, but because it doesn’t interact with light, we can’t see it. Computer simulations and visualizations help humans envision this important but invisible stuff. Using a new technique developed at Stanford University’s Kavli Institute for Particle Astrophysics and Cosmology (KIPAC) and the SLAC National Accelerator Laboratory, Ralf Kaehler, Oliver Hahn, and Tom Abel created this model to more accurately represent how dark matter forms diaphanous sheets and concentrated clumps.