• 0 Posts
  • 50 Comments
Joined 1 year ago
Cake day: June 19th, 2023








  • how can people who are very knowledgeable about the current paradigm not see (as has happened most of the time, historically) that a paradigm shift is about to happen?

    I’m not sure I’d agree with that assessment. Generally a new model or understanding of physics arises because of known shortcomings in the current model. Quantum physics is the classic example that resolved a number of open problems at the time: the ultraviolet catastrophe in black body radiation, the photoelectric effect, and the interference pattern of the double slit experiment, among others. In the years leading up to the development of quantum theory, it was clear to everyone active in physics that something was missing from the current understanding of Newtonian/classical physics. Obviously it wasn’t clear what the solution was until it came about, but it was obvious that a shift was coming.

    The same thing happened again with electroweak unification and the standard model of particle physics. There were known problems with the previous standard model Lagrangian, but it took a unique mathematical approach to resolve many of them.

    Generally research focuses on things that are unknown or can’t be explained by our current understanding of physics. The review article you linked, for example, details open questions and contradictory observations/predictions in the state of the art.




  • There isn’t a link in your post, but it looks like you’re referring to this preprint. The article has also been published in a peer-reviewed journal (paywall warning).

    This is a review article, so it isn’t proposing anything new and is instead giving a summary of the current state of the field. These sorts of articles are typically written by someone who is deeply familiar with the subject. They’re also super useful if you’re learning about a new area - think of them as a short, relatively up-to-date textbook.

    I’m not sure how you’re interpreting this review as an alternative to the standard model of cosmology and the Big Bang. Everything is pretty standard quantum field theory. The only mention of the CMB is in regards to the possibility that gravitons in the early universe would leave detectable signatures (anisotropies and polarization). They aren’t proposing an alternative production mechanism for the CMB.






  • First a caveat: An object with mass can’t move at the speed of light, but it could move at speeds arbitrarily close to that.

    The most successful model of gravity is General Relativity, which treats gravity as a curvature of 4-dimensional spacetime. Gravity’s influence travels at the speed of light. There’s a classic thought experiment that sort of answers your question: what would happen if the sun was teleported away? The answer is that the Earth would continue to orbit the spot where the sun was for about 8 minutes, and we would continue to see sunlight for that same amount of time, since that’s how long it takes light to travel that distance. Then, after 8 minutes, the sun would disappear and the first “lack of gravity” would reach us, and things would be bad for Earth :(
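    The 8-minute figure is just the light travel time over the Earth-Sun distance, which is easy to check with standard values for the astronomical unit and the speed of light:

    ```python
    # Back-of-the-envelope check of the "8 minutes" figure: the time for
    # light (and changes in gravity) to cross the Earth-Sun distance.
    AU = 1.495978707e11  # mean Earth-Sun distance in meters
    c = 2.99792458e8     # speed of light in m/s

    travel_time_s = AU / c
    print(f"{travel_time_s:.0f} s = {travel_time_s / 60:.1f} minutes")  # ~8.3 minutes
    ```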

    The fact that gravity travels at the speed of light actually leads to an interesting phenomenon: gravitational waves. If a massive object rapidly accelerates (or decelerates), for example a star-sized mass moving quickly and then coming to an abrupt stop, it will emit a ripple in spacetime called a gravitational wave, which travels outward at the speed of light.

    It was big news about a decade ago when gravitational waves were first detected by LIGO, a pair of large interferometers that look for tiny expansions/contractions in spacetime. Their first detection was the collision of two black holes: as the black holes spiral around each other and eventually merge, they emit oscillating waves of increasing frequency. They made a cool video showing how the frequency increases by converting it to sound.

    Since then, LIGO and Virgo (a similar European collaboration) have detected multiple gravitational waves from collisions of black holes and neutron stars. So not only are gravitational waves a neat validation of general relativity, they’re actually being used to do astronomy.





    The x-axis ranges span essentially the same region of “photon energy” space in both plots. The data starts at about 280 nm in the first plot, which corresponds to roughly 1070 THz, right around the 1000 THz maximum of the second plot.

    The stretching effect caused by working in different x-axis units is because the units don’t map linearly, but are inversely proportional. A 1 nm wide histogram bin at 1000 nm will contain the histogram counts corresponding to a 0.3 THz wide region at 300 THz in the frequency plot. Another 1 nm wide bin at 200 nm will correspond to a 7.5 THz wide region located at 1500 THz in the frequency plot.
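    That inverse mapping is just the derivative of ν = c/λ: a wavelength bin of width Δλ at wavelength λ corresponds to a frequency bin of width Δν ≈ (c/λ²)Δλ. A quick sketch reproducing the two examples above (`freq_bin` is a made-up helper name, not anything from the plots):

    ```python
    # Wavelength-to-frequency bin-width mapping: a bin of width dlam at
    # wavelength lam maps to a frequency bin of width (c / lam**2) * dlam,
    # centered at nu = c / lam.
    C_NM_THZ = 299_792.458  # speed of light in nm * THz, so nu[THz] = C / lam[nm]

    def freq_bin(lam_nm, dlam_nm):
        """Return (center frequency in THz, bin width in THz) for a wavelength bin."""
        nu = C_NM_THZ / lam_nm
        dnu = C_NM_THZ / lam_nm**2 * dlam_nm
        return nu, dnu

    print(freq_bin(1000, 1))  # roughly (300 THz, 0.3 THz), as in the example above
    print(freq_bin(200, 1))   # roughly (1500 THz, 7.5 THz)
    ```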

    You can get a sense of how this works just by looking at how much space the colorful visible light portion of the spectrum takes up on each plot. In the wavelength plot, by eye I’d say visible light corresponds to about 1/6 the horizontal axis scale. In the frequency plot, it’s more like 1/4.

    That normalization is necessary because otherwise exactly how you bin the data would change the vertical scale, even if you used the same units. For example, consider the first plot. Let’s assume the histogram bins are uniformly 1 nm wide. Now imagine rebinning the data into 2 nm wide bins. You would effectively take the contents of 2 bins and combine them into one, so the vertical scale would roughly double. The two plots would contain the same data but look vastly different in magnitude. But if in both cases you divide by the bin width (1 nm or 2 nm, depending), the histogram magnitudes would be equal again. So that’s why the units have to be given “per nm” or “per THz”.
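    Here’s a toy demonstration of that rebinning argument with NumPy, using synthetic (made-up) data rather than the actual spectrum: raw counts roughly double when you double the bin width, but dividing by bin width gives matching densities.

    ```python
    import numpy as np

    # Synthetic flat "spectrum": 100,000 samples uniform over 400-700 nm.
    rng = np.random.default_rng(0)
    wavelengths = rng.uniform(400, 700, size=100_000)

    # Same data, histogrammed two ways: 1 nm bins and 2 nm bins.
    counts1, _ = np.histogram(wavelengths, bins=np.arange(400, 701, 1))
    counts2, _ = np.histogram(wavelengths, bins=np.arange(400, 701, 2))

    print(counts1.mean())  # ~333 counts per 1 nm bin
    print(counts2.mean())  # ~667 counts per 2 nm bin: roughly double
    # Dividing each by its bin width recovers the same density, ~333 per nm:
    print((counts1 / 1.0).mean(), (counts2 / 2.0).mean())
    ```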