|
Post by StuartG on Jun 18, 2011 23:47:41 GMT 1
"Noun: computer simulation 1. (computing) the technique of representing the real world by a computer program "a computer simulation should imitate the internal processes and not merely the results of the thing being simulated"; " www.wordwebonline.com/en/COMPUTERSIMULATION---- "Which scenarios should be considered most realistic is currently uncertain, as the projections of future CO2 (and sulphate) emission are themselves uncertain." "Future scenarios do not include unknowable events - for example, volcanic eruptions or changes in solar forcing. These effects are believed to be small in comparison to GHG forcing in the long term, but large volcanic eruptions, for example, are known to exert a temporary cooling effect." en.wikipedia.org/wiki/Global_climate_model#Projections_of_future_climate_changeStuartG ps. no text from me - not needed
|
|
|
Post by StuartG on Jun 19, 2011 10:57:52 GMT 1
|
|
|
Post by marchesarosa on Jun 19, 2011 15:01:03 GMT 1
Ah, yes, louise, the alarmists certainly do pay attention to "internal forcings" when it suits them, don't they?
Remember this from Prof Richard Lindzen?
“For small changes in climate associated with tenths of a degree, there is no need for any external cause. The earth is never exactly in equilibrium. The motions of the massive oceans where heat is moved between deep layers and the surface provides variability on time scales from years to centuries. Recent work (Tsonis et al, 2007), suggests that this variability is enough to account for allclimate change since the 19th Century.”
I buy that!
|
|
|
Post by marchesarosa on Jun 20, 2011 21:04:49 GMT 1
Geophysical Research Abstracts, Vol. 6, 05276, 2004
MULTI-SATELLITE ALTIMETRIC SEA LEVEL CHANGE 1992-2003: WHAT DO WE KNOW AND WHAT NOT?
R. Scharroo and L. Miller National Oceanic and Atmospheric Administration, Laboratory for Satellite Altimetry (remko.scharroo@noaa.gov)
The TOPEX/Poseidon mission is nearing the completion of its twelfth year. The remarkable length of the record implies that the global rate of sea level change can be estimated from this single altimeter with striking reliability. The currently accepted value is 2.5 ±0.5 mm/year.
However, every few years we learn about mishaps or drifts in the altimeter instruments, errors in the data processing or instabilities in the ancillary data that result in rates of change that easily exceed the formal error estimate, if not the rate estimate itself. In all these cases the intercomparison with external sources, mainly contemporary altimeter satellites like ERS-1 and ERS-2, was pivotal to the uncovering and correction of the problems. With the missions of Jason-1 and Envisat now under way for a few years, more differences between the missions pop up. Neither of these missions currently fits the established rates. It seems that the more missions are added to the melting pot, the more uncertain the altimetric sea level change results become.
This presentation highlights a number of issues in altimeter data and their processing that have an impact on the global rates of sea level change, such as: geographical sampling, temporal sampling, various forms of altimeter drift, wet tropospheric correction, and other models. Ultimately, this exercise will tell us more about what we know about altimetric sea level change, and what not.
-------
Unusual honesty!
|
|
|
Post by marchesarosa on Jun 23, 2011 12:32:21 GMT 1
Willis Eschenbach critiques the latest Mann/Rahmstorf (and don't forget young Kemp!) attempt to keep the hockeystick alive - this time with sea level. wattsupwiththat.com/2011/06/23/reduce-your-co2-footprint-by-recycling-past-errors/#more-42111
........
Anthony has pointed out the further inanities of that well-known vanity press, the Proceedings of the National Academy of Sciences. This time it is Michael Mann (of Hockeystick fame) and company claiming an increase in the rate of sea level rise (complete paper here, by Kemp et al., hereinafter Kemp 2011). judithcurry.com/2011/06/22/sea-level-hockey-stick/
A number of commenters have pointed out significant shortcomings in the paper. AMac has noted at ClimateAudit climateaudit.org/2011/06/21/amac-upside-down-mann-lives-onin-kemp-et-al-2011/ that Mann’s oft-noted mistake of the upside-down Tiljander series lives on in Kemp 2011, thus presumably saving the CO2 required to generate new and unique errors. Steve McIntyre has pointed out that, as is all too common with the mainstream AGW folks and particularly true of anything touched by Michael Mann, the information provided is far, far, far from enough to reproduce their results. Judith Curry is also hosting a discussion of the issues. judithcurry.com/2011/06/22/sea-level-hockey-stick/
I was interested in a couple of problems that haven’t been touched on by other researchers. The first is that you can put together your whiz-bang model that uses a transfer function to relate the “foraminiferal assemblages” to “paleomarsh elevation” (PME) and then subtract the PME from measured sample altitudes to estimate sea levels, as they say they have done. But how do you then verify whether your magic math is any good? The paper claims that:
“Agreement of geological records with trends in regional and global tide-gauge data (Figs. 2B and 3) validates the salt-marsh proxy approach and justifies its application to older sediments.”
“Despite differences in accumulation history and being more than 100 km apart, Sand Point and Tump Point recorded near identical RSL variations.”
Hmmm, sez I … so I digitized the recent data in their Figure 2B. This was hard to do, because the authors have hidden part of the data in their graph through their use of solid blocks to indicate errors, rather than the whiskers that are commonly used. This makes it hard to see what they actually found. However, their results can be determined by careful measurement and digitization. Figure 1 shows those results, along with observations from the two nearest long-term tidal gauges and the TOPEX satellite record for the area.
Figure 1. The sea-level results from Kemp 2011, along with the nearest long-term tide gauge records (Wilmington and Hampton Roads) and the TOPEX sea level records for that area. Blue and orange transparent bands indicate the uncertainties in the Kemp 2011 results. Their uncertainties are shown for both the sea level and the year. SOURCES: Wilmington, Hampton Roads, TOPEX
My conclusions from this are a bit different from theirs. The first conclusion is that, as is not uncommon with sea level records, nearby tide gauges give very different changes in sea level. In this case, the Wilmington rise is 2.0 mm per year, while the Hampton Roads rise is more than twice that, 4.5 mm per year. In addition, the much shorter satellite records show only half a mm per year average rise for the last twenty years. As a result, the claim that the “agreement” of the two Kemp 2011 reconstructions is “validated” by the tidal records is meaningless, because we don’t have observations accurate enough to validate anything. We don’t have good observations to compare with their results, so virtually any reconstruction could be claimed to be “validated” by the nearby tidal gauges. In addition, since the Tump Point sea level rise is nearly 50% larger than the Sand Point rise, how can the two be described as “near identical”?
As I mentioned above, there is a second issue with the paper that has received little attention. This is the nature of the area where the study was done. It is all flatland river delta, with rivers that have created low-lying sedimentary islands and constantly changing border islands, and swirling currents and variable conditions. Figure 2 shows what the turf looks like from the seaward side:
Figure 2. Location of the study areas (Tump Point and Sand Point, purple) for the Kemp 2011 sea level study. Location of the nearest long-term tidal gauges (Wilmington and Hampton Roads) are shown by yellow pushpins.
Why is this important? It is critical because these kinds of river mouth areas are never stable. Islands change, rivers cut new channels, currents shift their locations, sand bars are created and eaten away. Figure 3 shows the currents near Tump Point:
Figure 3. Eddying currents around Tump Point. Note how they are currently eroding the island, leading to channels eaten back into the land.
Now, given the obviously sedimentary nature of the Tump Point area, and the changing, swirling nature of the currents … what are the odds that the ocean conditions (average temperature, salinity, sedimentation rate, turbidity, etc.) are the same now at Tump Point as they were a thousand years ago? And since the temperature and salinity and turbidity and mineral content a thousand years ago may very well have been significantly different from their current values, wouldn’t the “foraminiferal assemblages” have also been different then, regardless of any changes in sea level? For the foraminifera proxy to be valid over time, we have to be able to say that the only change that might affect the “foraminiferal assemblages” is the sea level … and given the geology of the study area, we can almost guarantee that is not true.
So those are my issues with the paper: that there are no accurate observations to compare with their reconstruction, and that important local marine variables undoubtedly have changed in the last thousand years. Of course, those are in addition to the problems discussed by others, involving the irreproducibility due to the lack of data and code … and the use of the Tiljander upside-down datasets … and the claim that we can tell the global sea level rise from a reconstruction in one solitary location … and the shabby pal-review by PNAS … and the use of the Mann 2008 temperature reconstruction … and …
In short, I fear all we have is another pathetic attempt by Michael Mann, Stefan Rahmstorf, and others to shore up their pathetic claims, even to the point of repeating their exact same previous pathetic mistakes … and folks wonder why we don’t trust mainstream AGW scientists? Because they keep trying, over and over, to pass off this kind of high-school-level investigation as though it were real science.
My advice to the authors? Same advice my high school science teacher drilled into our heads: show your work. PUBLISH YOUR CODE AND DATA, FOOLS! Have you been asleep for the last couple of years? These days nobody will believe you unless your work is replicable, and you just look stupid for trying this same ‘I won’t mention the code and data, maybe nobody will notice’ trick again and again. You can do all the hand-waving you want about your “extended semiempirical modeling approach”, but until you publish the data and the code for that approach and for the other parts of your method, along with the observational data used to validate your approach, your credibility will be zero and folks will just point and laugh.
..........
Tee hee!
|
|
|
Post by speakertoanimals on Jun 23, 2011 13:43:40 GMT 1
Go read the papers? I'm sorry, but this is pretty feeble, implying that the people designing and running these experiments are too daft to get even the basics of what they can measure right.
If we can measure the rate of recession of the moon to be 2cm a year, the rate of opening of the Atlantic to be 2.5cm a year, and the LIGO chaps can measure the change in length of a 4km interferometer arm to less than a thousandth the diameter of a proton, I don't find oceanographers' claims of a fraction of a millimeter per year to be that incredible.
Kiddie level answer:
Wrong. You are confusing the error on a single measurement with the error on an average (or, in this case, with the error on the slope of a line fitted to a whole series of measurements).
It's the whole 'scientists are unbelievably stupid' argument all over again............................
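The single-measurement-versus-trend distinction can be sketched numerically. This is a minimal illustration with made-up numbers (the 2.5 mm/yr trend and the ±10 mm per-reading noise are assumptions for the sketch, not values from any real altimetry dataset); it shows how the standard error of a fitted slope ends up far smaller than the error on any one reading:

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 years of monthly "sea level" readings (mm): true trend 2.5 mm/yr,
# each individual reading noisy to about +/- 10 mm (far worse than the trend).
t = np.arange(240) / 12.0            # time in years
true_trend = 2.5                     # mm/yr (assumed for the sketch)
single_meas_sigma = 10.0             # mm, error on ONE measurement
y = true_trend * t + rng.normal(0.0, single_meas_sigma, t.size)

# Least-squares fit of a straight line, plus the standard error of its slope
A = np.vstack([t, np.ones_like(t)]).T
coef, residuals, _, _ = np.linalg.lstsq(A, y, rcond=None)
slope = coef[0]
resid_var = residuals[0] / (t.size - 2)
slope_se = np.sqrt(resid_var / np.sum((t - t.mean()) ** 2))

print(f"fitted trend: {slope:.2f} +/- {slope_se:.2f} mm/yr")
```

The slope uncertainty falls roughly as 1/(spread of times × √N), so two decades of monthly readings pin the trend down to about a tenth of a millimetre per year even though no single reading is good to better than a centimetre.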
|
|
|
Post by speakertoanimals on Jun 23, 2011 14:40:57 GMT 1
And I DO wish that M would refrain from just posting entire webpages from elsewhere -- a LINK would do the job, surely. And then perhaps some comments of her own, with relevant quotes in boxes.........................
I could, after all, just cut and paste every paper I could find about a topic, but that would hardly encourage debate (or even reading). Or perhaps that is the aim? A single article on a skeptic website travels a long way, as we have already seen with Casey and that daft stuff about isotopic analysis. BECAUSE no one actually read it properly, the SAME document and the SAME error get repeated ad infinitum, and referenced as if it were an actual proper scientific study.
And they try to criticise climate scientists for being sloppy................................
|
|
|
Post by marchesarosa on Jun 23, 2011 14:45:03 GMT 1
PNAS Reviews: Preferential Standards for Kemp (Mann) et al
Steven McIntyre, Jun 22, 2011 at 1:50 PM climateaudit.org/2011/06/22/pnas-reviews-preferential-standards-for-kemp-mann-et-al/#more-13934
About 10 days ago, we discussed the PNAS reviews of the recent submission by Richard Lindzen, a member of the National Academy of Sciences with a distinguished publication record. A few days ago, PNAS published Kemp et al 2011, a submission by one of Mann’s graduate students. While, in this case, we do not have access to the reviews, it is possible to draw conclusions about the review process of the Mann article both from the limited information in the article and on the basis of the article itself. The Kemp article states in its masthead:
“Edited* by Anny Cazenave, Center National d’Etudes Spatiales (CNES), Toulouse Cedex 9, France, and approved March 25, 2011 (received for review October 29, 2010)”
The asterisk says: “*This Direct Submission article had a prearranged editor.”
It was certainly generous of PNAS to give a “prearranged editor” to a submission by a graduate student at Penn State. I’m sure that Lindzen, an actual NAS member, would have appreciated a similar courtesy. It was particularly nice of PNAS to allow the Team to “prearrange” an editor who had been a collaborator with a coauthor within the past 4 years – Cazenave was coauthor with Rahmstorf in Rahmstorf et al (Science 2007), Recent climate observations compared to projections (accepted Jan 25, 2007; published Feb 1, 2007). In contrast, PNAS objected to Lindzen’s submission being reviewed by Chou, who had co-authored with him in 2001.
In the previous discussion of the Lindzen reviews, some defenders of the PNAS reviews argued that the comments were justified. My own issue with PNAS and other review processes is not that any given criticism cannot be justified, but the hypocrisy of seemingly inconsistent standards for Team critics and Team members.
This hypocrisy is nicely illustrated by the contrasting standards for replicability required for Lindzen and for Mann et al. For example, Reviewer 2 of Lindzen’s submission stated:
“The description of the procedures is long on philosophical discussion, but rather too spare in describing exactly what was done. Sufficient description is necessary so that another experimenter could reproduce the analysis exactly. I don’t think I could reproduce the analysis based on the description given. For example, exactly how were the intervals chosen? Was there any subjectivity introduced?”
If this criticism of Lindzen’s submission is valid, I, of all people, can hardly take issue with it. Lindzen contradicted this criticism in his reply, arguing that the results were replicable. I’m not familiar enough with the data to have my own opinion on who’s right or not. For present purposes, the point is that the PNAS reviewer applied this standard to Lindzen. If PNAS is to be consistent, then the same standard should apply to Kemp et al. However, if you examine statements in the article itself, the reviewers have clearly not paid any attention to replication or subjectivity in a Team submission. Jeff Id almost immediately noticed unsupported statements in the article and SI, in particular mentioning their “discussion” of weights. I urge readers to search both the article and the SI for the word “weight”, excerpts of which are provided below. The statements highlighted below do not represent any sort of fine tooth comb. Rather, they stick out like a sore thumb – to mix metaphors. The term “weight” is used only once in the main article, in the caption to Figure 4, which states:
“Fig. 4. Salt-marsh proxy data used in Bayesian update were down-weighted by a factor of 10 and used only after AD 1000.”
Given that the article was said to “present new sea-level reconstructions for the past 2100 y based on salt-marsh sedimentary sequences from the US Atlantic coast”, it is puzzling, to say the least, that the actual salt-marsh proxy data in Figure 4 was downweighted by a factor of 10 and not used before AD 1000. These puzzling and shall-we-say “subjective” decisions were not discussed in the article itself.....
If PNAS review standards condemn the supposed “subjectivity” of the Lindzen submission, how could PNAS permit an author to simply assert that “an appropriate choice for this [downweighting] factor would be 10”? This far exceeds the alleged “subjectivity” of the Lindzen submission. Why didn’t PNAS reviewers and editors object? And where were the reviewers when the authors perpetuated the known problem of upside-down and contaminated (Tiljander) sediments? Lindzen’s reviewer 2 also emphasized replicability. Again, what was sauce for the goose wasn’t sauce for the gander. PNAS reviewers didn’t pay even the slightest lip-service to ensuring that the underlying data for Kemp et al was available.
more.........
------------
Only for amateurs of Steve McIntyre's forensic attention to detail and scientific audit! There will undoubtedly be more revelations when he gets his teeth into their data, IF he is permitted access to the data and computer code used.
|
|
|
Post by speakertoanimals on Jun 23, 2011 14:57:57 GMT 1
And the point of this is? A comparison is daft, given that the two different papers were probably being reviewed by two different sets of reviewers anyway.
Snide comments about pre-arranged editors are also daft, given that Lindzen got to propose his own reviewers in the first place (just that PNAS rejected them, and he didn't like the reviewers they did suggest).
And any comparison is totally useless given that we don't have the reviews anyway.
And yet another cut-and-paste from M. If we WANTED to read the whole web, we'd go there; we don't need you to paste here everything that you think we should read.
Cut and paste is NOT discussion!
|
|
|
Post by marchesarosa on Jun 23, 2011 15:01:15 GMT 1
Stop whining STA. My posts are for presenting info for people who don't necessarily follow up links. It's a useful archive.
And don't you other folks agree a picture is worth a thousand words? Did the authors include photographs of the geographical location of their research into GLOBAL sea level? Would it have helped if they had? Yes, and they would have been drummed out of town!
|
|
|
Post by StuartG on Jun 23, 2011 17:38:27 GMT 1
Reply to: « Reply #66 Today at 12:43 » radio4scienceboards.proboards.com/index.cgi?action=gotopost&board=witter&thread=116&post=12382
"If we can measure the rate of recession of the moon to be 2cm a year"
Yes, I can believe that. The Moon is well ordered, and is governed by forces that we know.
"the rate of opening of the Atlantic to be 2.5cm a year"
I didn't know what was meant here, but if the reference is to the Mid-Atlantic Ridge, then yes, it is understood that the average of measurements taken is that value, because again, the movement is reasonably well ordered - except when it's not, and places like Christchurch and Tokyo experience problems. In the latter two, presumably their rate of opening has gone for re-assessment.
"the LIGO chaps can measure the change in length of a 4km interferometer arm of less than a thousandth the diameter of a proton"
I marvel a bit at that, but nevertheless I can accept it; remember, though, that they are in their own controlled environment - and no, that's not slagging them off, but it would be difficult to do it in an open field [say].
"I don't find oceanographers' claims of a fraction of a millimeter per year to be that incredible."
Well, that's where we disagree. Hence my posting of the video "Ships In Storms!! High sea!" (+/- 0.3mm) www.youtube.com/watch?v=nvh2hCxUvJA&feature=fvwrel in order to highlight the unpredictability of the ocean. The oceans are not level, nor flat, but most of all they are chaotic and not at all ordered; in fact the chaos must be divided into separate 'pools of chaos', so much so that the chaos is disorganised! The seas cannot have an accuracy attributed to them that is not intrinsically theirs. What is being said is that the measurements, averages, "the error on the slope of a line fitted to a whole series of measurements", are a 'paper attribute' given to the calculations involved, with the effect that if you change one place of measurement [say] the whole calculation will have to be re-assessed.
The seas are not aware of this and carry on their chaotic ways. Here's the address of 'How Stuff Works': science.howstuffworks.com/environmental/earth/oceanography/question356.htm It was read, and the first sentence states:
"An accurate measurement of sea level is very hard to pin down. But it is an important measurement for two main reasons:"
That suggests a level of uncertainty at least. It continues:
"The concern is that global warming and other weather changes caused by man might be leading to an overall rise in sea level."
Sounds reasonable enough. Later it says:
"The problem with measuring the sea level is that there are so many things that perturb it. If you could take planet Earth and move it out into deep space so that the sun, moons and other planets did not affect it and there were no temperature variations worldwide, then everything would settle down like a still pond. Rain and wind would stop, and so would the rivers. Then you could measure sea level accurately"
OK, third paragraph:
"You can see that getting an accurate reading (for example, down to the millimeter level) is extremely difficult. Satellites are now used as well, but they suffer from many of the same problems. Scientists do the best they can, using extremely long time spans, to try to figure out what the sea level is and whether or not it is rising. The general consensus seems to be that the oceans rise about 2 millimeters per year (although the last link below has an interesting discussion on that consensus...)."
So it's a consensus on the 2 mm; then a +/- 0.3 mm is not realistic, except on paper, not in the real world.
"Its the whole 'scientists are unbelievably stupid' argument all over again"
No, that's never been said; just that practicality should be included.
"Scientists do the best they can..."
StuartG
|
|
|
Post by principled on Jun 23, 2011 17:49:26 GMT 1
STA: From your link: So getting a measurement change to an accuracy of 10 times this level must be even more than a tad difficult. Ever tried to measure 0.3mm in a measuring tube without the meniscus getting in the way? Still, we can always rely on satellite measurement, can't we? Well, they seem to be problematic as well (from your link). And as for bringing in: that's a bit like saying that because I can measure the height of a solid item - say a steel bar - to the accuracy of a micron, then I can get the same level of accuracy measuring something like a jelly. Methinks not. P
|
|
|
Post by eamonnshute on Jun 23, 2011 18:11:28 GMT 1
To illustrate how you can extract a trend from very noisy data, look at this: www.skyandtelescope.com/community/skyblog/newsblog/121249239.html The light curve of the star 55 Cancri is very noisy, with no obvious pattern, but there is a planet orbiting the star, and every 17h 41m it transits, producing a small dip in brightness. These dips are not apparent from the light curve, but by averaging over many orbits we can make the transits visible. The signal is much less than the noise, but we can still extract useful information. It is the same with sea level data.
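The averaging trick described above can be demonstrated with simulated data. This is a minimal sketch: the 0.04% transit depth, the per-point noise five times larger, and the point counts are all invented for illustration, not the actual 55 Cnc e photometry. Phase-folding on the known period and averaging many points per phase bin shrinks the noise by roughly the square root of the number of points, so a dip invisible in any single measurement emerges from the average:

```python
import numpy as np

rng = np.random.default_rng(1)

period_hours = 17.683        # assumed orbital period (~17 h 41 m)
transit_depth = 0.0004       # fractional dip, buried in the noise
noise_sigma = 0.002          # per-point noise, 5x the transit depth

# Simulated brightness measurements at random times over many orbits
t = rng.uniform(0, 200 * period_hours, 50_000)
phase = (t % period_hours) / period_hours
flux = 1.0 + rng.normal(0, noise_sigma, t.size)
in_transit = np.abs(phase - 0.5) < 0.02      # transit centred at phase 0.5
flux[in_transit] -= transit_depth

# Phase-fold and bin: averaging ~1000 points per bin shrinks the noise
# by ~sqrt(1000), letting the dip emerge where it was invisible.
bins = np.linspace(0, 1, 51)
idx = np.digitize(phase, bins) - 1
binned = np.array([flux[idx == k].mean() for k in range(50)])

dip_bin = binned[25]                          # bin inside the transit
baseline = np.median(binned)
print(f"recovered dip: {baseline - dip_bin:.5f} "
      f"(per-point noise was {noise_sigma})")
```

The recovered dip comes out close to the injected 0.0004 even though each individual point scatters by 0.002: the same principle behind fitting a trend line through decades of noisy tide-gauge readings.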
|
|
|
Post by speakertoanimals on Jun 23, 2011 18:56:57 GMT 1
Tump Point 3.6, Sand Point 2.7. 50% larger than Sand Point would be 2.7 + (2.7/2) = 2.7 + 1.35 = 4.05, which doesn't make it nearly 50% larger. In fact, since 2.7 = 3 x 0.9 and 3.6 = 4 x 0.9 (your times tables DO come in useful sometimes), that makes Tump Point exactly 33% larger. Which isn't NEARLY 50%, unless you are incapable of basic arithmetic. Frankly, 2.7 versus 3.6 is pretty close in magnitude. They are both just linear fits to data anyway; the point being, what is the error on these slopes in the first place?
What was called 'near identical' in the original paper? It was the actual plots, NOT some straight-line fit, and if you look at the actual tide gauge records that are being compared (red and green lines in the inset to Figure 2B), then I'd quite happily agree that these two plots ARE near identical! What is going on? The solid blocks are not for the actual tide-gauge data, but for the proxy reconstructions! There is a jolly good reason WHY the errors on the reconstructions are presented as solid blocks: you couldn't see the difference between Tump and Sand Point if whiskers had been used.
I don't understand what is being shown in the attempted criticism, because the straight lines seem to be fitted to the centres of the BLOCKS, but that is the reconstruction, NOT the actual tide gauge plots! And the tide gauge plots in the original figure ARE actually near identical! When it comes to the reconstructions, it is quite clear from the inset (and the version below) that they aren't quite the same. It's also clear, if you look at the WHOLE reconstruction from 1000AD to 2000AD, that they AREN'T straight lines anyway!
So, if 'near identical' referred to tide gauge data, that's true. The straight-line plots refer to the reconstruction. Plus, fitting straight lines to reconstructions that are very clearly NOT straight lines is wrong. As is the author's attempt at simple arithmetic. In short, the supposed criticism is just TOTAL BOLLOCKS.
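The percentage arithmetic above can be checked directly, using only the two quoted slope values:

```python
# Quoted linear-fit slopes (mm/yr) for the two reconstructions
sand_point = 2.7
tump_point = 3.6

# Relative increase of Tump Point over Sand Point
ratio_increase = (tump_point - sand_point) / sand_point
print(f"Tump Point is {ratio_increase:.1%} larger")   # → 33.3%, one third

# A genuine "50% larger" figure would instead have required:
print(f"{sand_point * 1.5:.2f} mm/yr")                # → 4.05 mm/yr
```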
Other points -- the point wasn't that DIFFERENT tide gauges at different locations agree, but that the reconstructions agree with the tide gauges (the actual tide gauge data is NOT what has been digitized below). If they ARE claiming the reconstructions are near identical, then what should be compared is the WHOLE of the overlapping reconstruction, from 1000AD to 2000AD, NOT just the section from 1850 onwards, which is what has been digitised! And if you look at that plot, I'd quite happily agree that over this longer time-scale they are pretty bloody similar! In short, this supposed criticism is BOLLOCKS, and quite clearly so if you go and look at the actual paper, not just the hatchet job that somebody has made from what was only ever an INSET to the full figure. If you're lucky (it's supposed to be open access), you can find the original paper at: www.pnas.org/content/early/2011/06/13/1015619108.full.pdf
|
|
|
Post by StuartG on Jun 23, 2011 19:09:46 GMT 1
"with no obvious pattern" From that it can be said that there is one, just that because of the noise it's masked. The site here starts an explanation... www.bores.com/courses/intro/time/2_auto.htm in short Digital signal Processing. Work with an old Marconi Spectrum analyser and it's possible to see signals in the noise by eye, the clever bit is extracting them agreed. "The planet is 55 Cancri e, the closest of the star’s five known worlds. All were discovered by the radial-velocity wobbles that they induce in the star. The wobbling due to planet e was teased out of complex radial-velocity patterns in 2004. Its strength tells that planet e has only 8.6 ± 0.6 Earth masses — a super-Earth." and it later continues... "Astronomers had assumed that 55 Cnc e revolves around the star every 2.8 days. But in late 2010 a paper by Rebekah Dawson, a graduate student at Harvard, and her colleague Dan Fabrycky, now at the University of California in Santa Cruz, analyzed the data differently. They proposed that its orbital period is four times faster: that it circles the star in 17 hours 41 minutes" Well done Rebekah and Dan! They extracted that information from a 'sea of noise'. That is fixed information, G8 and its planet's movements characterised out of the noise. The Earth oceans have no fixed level to be characterised by a +/- 0.3 mm tolerance, if You like they are the noise, and we can plot a submarine under the surface by it's perturbations. There is also another lesson there, "analyzed the data differently. " StuartG
|
|