Post by mrsonde on Mar 30, 2015 6:26:32 GMT 1
Two illustrations of how the physics community, and cosmologists in particular, have become completely in thrall to mathematics. It is not so much that the mathematical content of theories is preferred to empirical observations - the maths dictates the observations (gives their theoretical interpretation) to such a complete degree of elision these days that the belief amounts to, and is unashamedly called, knowledge: the interpretation of the theory is what is observed.
One - a highly theoretical premise (so solid it amounts to dogma), derived from very rough and ready mathematical approximations, dictates that all supernovae explode when they have the same mass. These are the new standard candles in astronomy - if this mathematical theory is correct, the brightness of a supernova is an absolute measure of its distance. This is where the theory that the universal expansion is accelerating comes from, and it won its "observer" the Nobel Prize.
Two - from the same dogma, by simple trig, it is held to be possible to measure the curvature of the universal geometry: if you know the distances of two widely separated supernovae, you should, if c is constant, be able to calculate the paths their light has travelled. This "measurement" - entirely deduced from highly mathematical theories that have no independent observational support apart from these theories themselves - has been used to declare that we now know the universe is flat (Euclidean) and is therefore infinite in extent. This has been "observed".
Now, should a supernova ever be observed exploding when an accurate, genuinely empirical measurement of its mass has been made, and it is found to be even a tiny bit different (smaller or larger) from the very precise figure used to define this standard candle, both these "observations" fly out of the window. They'd be completely wrong. Moreover, they couldn't be "corrected", made more precise through better calibration - the wrongness would be irredeemable.
So - who's willing to bet we know enough about stars and their life cycles, and exactly how much mass they have when they explode? When the nature of the one nearest to us is still very much an utter mystery, and new empirical discoveries about its nature completely dumbfound astrophysicists every year, every time we devise a new method to actually look at the thing?
Post by mrsonde on Mar 30, 2015 19:11:44 GMT 1
What - no one? Okay, I'll give it five years from this date, at odds of...ermmm...five to one, before the supernova super candle is shown to be no reliable measure at all, and the conclusions drawn from it therefore totally fallacious. Nobel Prizes can't be returned, of course, but possibly an awareness of the theory-dependence of such "observations" might return, at last.
These are better odds than my Mayweather offer, which on reflection I'll raise to the same odds. Better than you'll find in any bookies in the world, so I'll put a limit of £100 on it. Come on now, gentlemen.
Post by fascinating on Apr 1, 2015 7:59:39 GMT 1
You are saying that, contrary to my statement, people became aware of the quantum mechanical world simply by making mathematical calculations? Is that right?
Post by fascinating on Apr 1, 2015 8:19:52 GMT 1
It's not ALL supernovae, it's just the Type 1a ones - those that accumulate their mass from an associated star and explode when they reach a certain mass limit. They can be identified by a characteristic profile in their spectrum.
I suppose it is possible that they have got the exact mass wrong in such situations. You'd have to trawl through the relevant papers showing the calculation of the mass and the quoted range of uncertainty. First we need to establish why they are so certain of the figures they give.
Post by mrsonde on Apr 6, 2015 5:40:19 GMT 1
You are saying that, contrary to my statement, people became aware of the quantum mechanical world simply by making mathematical calculations? Is that right?

The "quantum mechanical world"? Given that such a "world" is a collection of inter-related mathematical techniques and formulae, with consequent philosophical interpretations of what they might mean in the "real" world, yes. Heisenberg's and Schrodinger's alternative "mechanics" were techniques for arriving at approximately correct outputs from empirically measured inputs, but the interpretation of those formulae into a "world" was not empirical in the least. Consequently, further elaborations constitutive of quantum physics were interpolations of those formulae, or measurements arrived at by employing them. Pauli and Feynman had some empirical experience, but none directly relevant to their mathematical theories, as with Born, Bohr, and Dirac. Later technological developments supposedly exploiting that "quantum mechanical world" - transistors, lasers, masers, MRI and so forth - were actually employing those mathematical theories and, again, using them to interpret empirical discoveries. The complexity comes in with this mysterious ability of such theories to correctly predict real-world behaviour but, as should have been clear shortly after Copernicus and Kepler, if not Newton and Einstein, such predictive power is not in itself evidence for the truth of any such structural interpretations of the maths.
Post by mrsonde on Apr 6, 2015 5:49:00 GMT 1
It's not ALL supernovae, it's just the Type 1a ones - those that accumulate their mass from an associated star and explode when they reach a certain mass limit. They can be identified by a characteristic profile in their spectrum.

Entirely theoretical. How do you think this calculation is arrived at? From inductive generalisation of measuring the mass of x number of exemplars? And how would such measurements have been made? It's all done from the maths. As for the range of uncertainty - there isn't one. That's my point: purely and simply, that's what the maths say. They're "certain" solely due to a faith in the truth of the mathematical description - they believe it's knowledge of the way the world is.
Post by fascinating on Apr 6, 2015 15:55:52 GMT 1
There were several papers produced relating to this, but I think the most relevant one is arxiv.org/abs/astro-ph/9609059. They used a sample of 29 Type 1a supernovae, 9 of which were in nearby galaxies whose distances could be measured by other methods, such as Cepheid variables. Their observations showed that there was a linear relationship between the decline rate of the supernovae's brightness and their absolute magnitude, and that Type 1a supernovae can be used as distance indicators with a precision of 7-10%.
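The quoted 7-10% precision can be connected to the magnitude scale with a short sketch (Python; the scatter values below are illustrative round numbers, not figures taken from the paper):

```python
import math

def distance_pc(m_apparent, M_absolute):
    """Distance in parsecs from the distance modulus: m - M = 5*log10(d) - 5."""
    return 10 ** ((m_apparent - M_absolute + 5) / 5)

def fractional_distance_error(delta_M):
    """Since d scales as 10^((m - M)/5), a scatter delta_M (in magnitudes)
    in the calibrated absolute magnitude propagates to a fractional
    distance error of (ln 10 / 5) * delta_M, about 46% per magnitude."""
    return math.log(10) / 5 * delta_M

# A calibration scatter of roughly 0.15-0.2 mag corresponds to
# the quoted 7-10% distance precision:
print(round(fractional_distance_error(0.15), 3))  # 0.069
print(round(fractional_distance_error(0.20), 3))  # 0.092
```

This is just the standard error-propagation arithmetic for the magnitude scale; the paper's actual scatter estimate comes from its sample, not from these numbers.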
Post by mrsonde on Apr 7, 2015 5:50:14 GMT 1
How do you think this calculation is arrived at? From inductive generalisation of measuring the mass of x number of exemplars? And how would such measurements have been made?
Post by fascinating on Apr 7, 2015 8:25:43 GMT 1
Sorry, but I thought I had answered these questions in my last reply. Remember, the issue here is whether Type 1a supernovae can reliably be used as standard candles. So the issue is ultimately to do with the absolute brightness, not specifically the mass, of this type of object. So:
If we are talking of the calculation of the absolute brightness: from observations of nearby galaxies, containing Type 1a supernovae, whose distances have been determined by other methods. Knowing the distances allows them to easily calculate the absolute brightness from the combination of apparent brightness and known distance (apparent brightness decreases as the inverse square of the distance).
The brightness is determined from inductive generalisation of exemplars, where x = 9.
By use of the panoply of sensitive observational instruments that astronomers use, such as photomultipliers, spectrometers, etc.
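The calibration step described above can be sketched in a few lines (Python; the supernova and distance figures are made up for illustration). On the astronomical magnitude scale, flux falls as 1/d^2 and magnitude is -2.5*log10(flux), so the absolute magnitude follows directly from the apparent magnitude plus a distance known by other means:

```python
import math

def absolute_magnitude(m_apparent, d_parsec):
    # Inverse-square law in magnitude form: M = m - 5*log10(d_pc) + 5,
    # i.e. the brightness the object would show at the standard 10 pc.
    return m_apparent - 5 * math.log10(d_parsec) + 5

# Hypothetical Type 1a supernova of apparent magnitude 13.2 in a host
# galaxy whose Cepheid-based distance is 20 Mpc (2e7 parsecs):
M = absolute_magnitude(13.2, 20e6)
print(round(M, 2))  # -18.31
```

Repeating this for each of the nine calibrating supernovae is what the "inductive generalisation of exemplars, x = 9" amounts to in practice.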
Post by fascinating on Apr 14, 2015 7:51:38 GMT 1
Post by mrsonde on May 14, 2015 2:33:19 GMT 1
How do you think this calculation is arrived at? From inductive generalisation of measuring the mass of x number of exemplars? And how would such measurements have been made?
Post by mrsonde on May 14, 2015 2:36:22 GMT 1
It doesn't matter. Anyone with even the slimmest knowledge of the history of astronomy would not need any additional doubt cast on this question. That's my point.
Post by fascinating on May 27, 2015 22:12:38 GMT 1
mrsonde, what you said a few posts ago was: "a highly theoretical premise (so solid it amounts to dogma) derived from very rough and ready mathematical approximations dictates that all supernovae explode when they have the same mass. These are the new standard candles in astronomy..." As I said, it is not ALL supernovae; it's the Type 1a ones that you are referring to. As you say, they are used as standard candles - in other words, if a supernova is observed, and it is identified from its spectrum as being of this type (1a), then the absolute brightness is easily determined (from the rate of the decline in the luminosity), and thus it can be used as a standard to determine its distance.
As far as I can see, the mass of the supernova is not directly relevant here.
The issue is whether or not Type 1a supernovae can reliably be used as standard candles.
I think you will have heard of Cepheid variables and RR Lyrae variables.
Distances of galaxies are not determined using parallax. The main methods are using certain types of variables.
MY data?
What formula?
The exemplars themselves are classified according to an inductive generalisation. In my grandfather's time galaxies were thought to be stars. Never mind about your grandfather - what inductive generalisation are you talking about?
I was talking about the instruments measuring brightness; like I said, the mass is not directly relevant here.
Post by mrsonde on May 28, 2015 19:39:36 GMT 1
That typology is itself derived from the theoretical basis that we're - I'm - questioning.
According to the theory.
Of course it is. If it's much smaller or larger than the theory predetermines, its brightness is caused by different factors - the obvious one being its distance.
Exactly - that's what I said. The issue is: what is the distance of these bright objects? To determine that you need to know the mechanism which is producing the brightness (you call it "absolute", going along with the theory - Hubble's - that is now three quarters of a century out of date, and that no one would dream of subscribing to; the correct term is "apparent").
You need to ask yourself what you mean by this term.
Their distances have not been determined by other methods. The theory-derived distances of Cepheid variables have changed several times in my lifetime - I can't be bothered to work out the percentages, but they're not negligible. Something like 70%. This isn't a question of greater and greater resolution. It's a question of theory shift.
I said even using parallax is theoretically based. If this straightforward geometrical method cannot be relied upon observationally, then don't pretend that the highly abstract, theoretically derived interpretation of what "variables" mean is any more reliable.
Jeez, it's like talking to a child! No, not your data - you don't have any data; you don't even understand Newtonian mechanics. The data: how bright they look. You can call it "absolute" in the misapprehension that this makes them somehow solidly reliable - it's an illusion. Apparent brightness is the term that is appropriate. The rest is theoretical interpretation.
Hubble's formula, much modified over the years: the Omega formula. It all changes once the theory changes.
At the simplest level, that light detected is of this or that type. That this or that spectrum recorded implies this or that object. Then you need to process that object's behaviour into your theoretical structure - this or that type of object is composed of this or that, and behaves in such and such a way, and therefore must be like the other objects we think we know the distance of.
So you think that if you see a light out at sea it's not relevant to you whether it might be a lighthouse warning you of rocks ten miles ahead or a raft with a candle ten yards ahead? You wouldn't be able to tell the difference unless you knew the size of the source of the illumination, or until you rammed a life raft or crashed onto the rocks. Have you never seen the Father Ted skit with the cows?
Post by fascinating on May 29, 2015 22:21:29 GMT 1
I do take your main point about theories of cosmology being something of a house of cards with each questionable theory depending upon the somewhat wobbly ones beneath. I think there is sufficient firmness in the "cards" to make some legitimate generalisations, say for example about the sizes and distances of the nearer (less than 500 million light years) galaxies. You could argue about how accurate the distance measure using the Cepheids is, but I think it is probably of the right order.
Let's say that a substantial number of supernovae are observed in a range of relatively near galaxies whose distances are known by other methods. Taking into account the distances of the galaxies, the ACTUAL brightness of them can be determined - perhaps I should have used the term "absolute magnitude", which is how bright the objects would appear if they were all placed at the same standard distance (10 parsecs). Then it is seen that, in certain supernovae, there is a characteristic form of the brightness trace in the time after the explosion, and that the form of the trace is related to the absolute brightness. You don't need to know exactly how the explosion was caused, or the mass of the object involved.
A strange analogy. I don't need to know the mass of the lighthouse, nor the mass of the raft. You need to know the distance of the light source and its nature. The lighthouse might well have identifying flashes, and it could show different colours to mark dangerous and safe areas. The candle might look more yellow and flutter in the breeze. Hence the navigator may be able to make an assessment of the likely actual brightness, and therefore distance, of the light sources.
Thank you for acknowledging that it is incorrect to say that it is my data.
Why do you say that?