The New Global Cooling Thread
Bahamut.Kara said: »
Anyone who has handled raw data should wash their hands thoroughly, and also wash any and all surfaces that may have come into contact with the raw data, to prevent cross-contamination.

Asura.Kingnobody said: »
None. So, tell me, why was additional "scrubbing" of the data done after the obvious errors were taken out? Why were there so many errors to begin with? One would expect the unadjusted and adjusted data to be roughly the same, but that is not the case here. What additional "errors" were found that are not reported, and are we sure that the "errors" found and removed were the actual "errors" listed on Tables 1 and 2? Altima couldn't seem to find the words, so you'll have to take his place as someone who obviously knows what they're talking about when it comes to this analysis. So please, fill me in.

Cerberus.Pleebo said: »
Asura.Kingnobody said: »
None. So, tell me, why was additional "scrubbing" of the data done after the obvious errors were taken out? Why were there so many errors to begin with? One would expect the unadjusted and adjusted data to be roughly the same, but that is not the case here. What additional "errors" were found that are not reported, and are we sure that the "errors" found and removed were the actual "errors" listed on Tables 1 and 2? Altima couldn't seem to find the words, so you'll have to take his place as someone who obviously knows what they're talking about when it comes to this analysis. So please, fill me in.

I would love to explain, but I'm afraid that it might be a little too much for you, and I don't want to waste either my time or yours.

HAHAHAHHAHAHAHAHAHAHAHHAHAHHAHAHHAHAHA
Excellent cop out, my good man. You got a very audible guffaw out of me. Now if we could only get Bill Herp The Science Derp to stop making threads about things he doesn't understand, we'd be making progress.

Asura.Kingnobody said: »
Cerberus.Pleebo said: »
Asura.Kingnobody said: »
None. So, tell me, why was additional "scrubbing" of the data done after the obvious errors were taken out? Why were there so many errors to begin with? One would expect the unadjusted and adjusted data to be roughly the same, but that is not the case here. What additional "errors" were found that are not reported, and are we sure that the "errors" found and removed were the actual "errors" listed on Tables 1 and 2? Altima couldn't seem to find the words, so you'll have to take his place as someone who obviously knows what they're talking about when it comes to this analysis. So please, fill me in.

I would love to explain, but I'm afraid that it might be a little too much for you, and I don't want to waste either my time or yours.

Cerberus.Pleebo said: »
HAHAHAHHAHAHAHAHAHAHAHHAHAHHAHAHHAHAHA
Excellent cop out, my good man. You got a very audible guffaw out of me.

There is not enough time in the world to educate you in the methods and meanings of analytic reporting, especially in regard to this issue. I mean, it would take you years to understand why too much data scrubbing is not good.

Bahamut.Milamber said: »
Because it isn't a perfect world, and there are quite a number of non-obvious ways for sensors to either fail or report non-true data, and no one so far has spent the money to be properly redundant at any given site (by general appearances)? I wouldn't necessarily expect raw/corrected data to appear the same; it really, really depends on potential error sources. Real data is not necessarily easy or cheap to acquire in large, distributed systems.
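One common substitute for the per-site redundancy Milamber mentions is a consistency check against neighboring stations. A minimal sketch of that idea; the station IDs, readings, and the 2.0-degree threshold below are all invented for illustration:

```python
# Minimal sketch of a neighbor-consistency check: flag a reading as suspect
# when it departs from the median of nearby stations by more than a threshold.
# Station IDs, readings, and the 2.0-degree threshold are all hypothetical.
from statistics import median

neighbors = {"KJFK": 21.1, "KLGA": 21.4, "KEWR": 20.9}  # hypothetical nearby readings (deg C)
candidate = ("KTEB", 27.8)                              # hypothetical suspect station

def is_suspect(value, neighbor_values, threshold=2.0):
    """True if the reading disagrees with the local median by more than threshold."""
    return abs(value - median(neighbor_values)) > threshold

name, value = candidate
print(name, "suspect:", is_suspect(value, list(neighbors.values())))  # -> KTEB suspect: True
```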
I said pick one. Any one of them. If you want to critique methodology, then do it.

Cerberus.Pleebo said: »
I said pick one. Any one of them. If you want to critique methodology, then do it.

I don't have an issue with the methodology. I have an issue with the overall product. The whole purpose of adjusting the data to take out errors is to give better, usable data. But if the data was so horrible that, after the alterations, it produces an entirely different set of data, then what was the point of the original data in the first place? That signifies that the method of obtaining the data is flawed, and that the data itself should never be used, altered or raw, because it is inherently incorrect.

"I can't find any reason to disagree besides not liking the narrative"
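A toy illustration of the point in dispute: removing a documented bias legitimately produces an adjusted series that differs from the raw one, without the raw data ever having been worthless. The station history, bias value, and readings below are invented:

```python
# Toy example: a hypothetical station carries a constant +0.5 degree instrument
# bias before a documented sensor replacement. Removing that bias changes every
# early value, so the adjusted series differs from the raw one even though the
# raw data was never "garbage" -- its error was systematic and recoverable.

raw = [15.5, 15.7, 15.6, 15.8,   # before the (hypothetical) sensor swap: biased +0.5
       15.2, 15.3, 15.1, 15.4]   # after the swap: unbiased

SWAP_INDEX = 4                   # invented date of the sensor replacement
KNOWN_BIAS = 0.5                 # invented, documented instrument offset (deg C)

adjusted = [t - KNOWN_BIAS if i < SWAP_INDEX else t for i, t in enumerate(raw)]

print("raw mean:     ", round(sum(raw) / len(raw), 3))            # 15.45
print("adjusted mean:", round(sum(adjusted) / len(adjusted), 3))  # 15.2
```

The two means differ, yet the adjusted series is the more accurate one; a mismatch between raw and corrected data is not, by itself, evidence that the raw data was unusable.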
Asura.Kingnobody said: »
Bahamut.Milamber said: »
Because it isn't a perfect world, and there are quite a number of non-obvious ways for sensors to either fail or report non-true data, and no one so far has spent the money to be properly redundant at any given site (by general appearances)? I wouldn't necessarily expect raw/corrected data to appear the same; it really, really depends on potential error sources. Real data is not necessarily easy or cheap to acquire in large, distributed systems. Physical systems can have constant, semi-constant, or otherwise compensatable errors. What is bad on its own can be corrected for based on the type of error (if known); however, some errors cannot be recovered or compensated for, and those samples cannot be used.

Odin.Jassik said: »
"I can't find any reason to disagree besides not liking the narrative"

Asura.Kingnobody said: »
Odin.Jassik said: »
"I can't find any reason to disagree besides not liking the narrative"

I think it's not OK to claim credibility in matters far outside your capabilities. It's not honest to expect the world to slow down for you because you don't like reality.

Bahamut.Milamber said: »
Asura.Kingnobody said: »
Bahamut.Milamber said: »
Because it isn't a perfect world, and there are quite a number of non-obvious ways for sensors to either fail or report non-true data, and no one so far has spent the money to be properly redundant at any given site (by general appearances)? I wouldn't necessarily expect raw/corrected data to appear the same; it really, really depends on potential error sources. Real data is not necessarily easy or cheap to acquire in large, distributed systems. Physical systems can have constant, semi-constant, or otherwise compensatable errors. What is bad on its own can be corrected for based on the type of error (if known); however, some errors cannot be recovered or compensated for, and those samples cannot be used.

If it was decades before the errors in the record-keeping were discovered, then the real issue is the data prior to the correction of the record-keeping methods. To use those errors knowingly not only taints the actual study being performed, but also commits "intellectual dishonesty" or "intellectual fraud," which in this case seems to have been committed by the recording/reporting agencies (not the scientists themselves, unless they knew the data was incorrect and used it anyway). Ever heard of the concept "garbage in, garbage out"?
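To make Milamber's taxonomy concrete: a constant, known offset can be subtracted out, while a sample with an unknown failure mode can only be dropped. A minimal sketch; the station names, offsets, and readings are all hypothetical:

```python
# Sketch of the error taxonomy above: a documented constant offset is
# compensable, while a sample whose failure mode is unknown can only be
# discarded. Station names, offsets, and readings are all hypothetical.

readings = [("A", 20.8), ("A", 21.0), ("B", 19.5), ("B", None), ("C", 23.9)]

KNOWN_OFFSETS = {"A": 0.8}   # assumption: station A reads 0.8 degrees high
UNRECOVERABLE = {"C"}        # assumption: station C's failure mode is unknown

cleaned = []
for station, value in readings:
    if value is None or station in UNRECOVERABLE:
        continue             # cannot be recovered or compensated for; drop the sample
    cleaned.append(round(value - KNOWN_OFFSETS.get(station, 0.0), 2))

print(cleaned)  # [20.0, 20.2, 19.5]
```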
Asura.Kingnobody said: »
I don't have an issue with the methodology. I have an issue with the overall product. The whole purpose of adjusting the data to take out errors is to give better, usable data. But if the data was so horrible that, after the alterations, it produces an entirely different set of data, then what was the point of the original data in the first place? That signifies that the method of obtaining the data is flawed, and that the data itself should never be used, altered or raw, because it is inherently incorrect.

Odin.Jassik said: »
Asura.Kingnobody said: »
Odin.Jassik said: »
"I can't find any reason to disagree besides not liking the narrative"

I think it's not OK to claim credibility in matters far outside your capabilities. It's not honest to expect the world to slow down for you because you don't like reality.

Asura.Kingnobody said: »
Ever heard of the concept "garbage in, garbage out"?

Ever hear of the concept "talking out your ***"?

Cerberus.Pleebo said: »
Asura.Kingnobody said: »
I don't have an issue with the methodology. I have an issue with the overall product. The whole purpose of adjusting the data to take out errors is to give better, usable data. But if the data was so horrible that, after the alterations, it produces an entirely different set of data, then what was the point of the original data in the first place? That signifies that the method of obtaining the data is flawed, and that the data itself should never be used, altered or raw, because it is inherently incorrect.

I think that qualifies as a lack of reading comprehension, which is actually common with you. So, thank you for proving your idiocy once again.

Asura.Kingnobody said: »
Your posts are the internet forum equivalent of "I know you are but what am I?" Learn some humility; you aren't in any way qualified to question the findings of experts, and even less qualified to participate in a scholarly debate.

No, you gave me the Sarah Palin answer of "Uh... all of them." I want a specific answer. This is how scientists do it. We're all waiting.
Asura.Kingnobody said: »
Bahamut.Milamber said: »
Asura.Kingnobody said: »
Bahamut.Milamber said: »
Because it isn't a perfect world, and there are quite a number of non-obvious ways for sensors to either fail or report non-true data, and no one so far has spent the money to be properly redundant at any given site (by general appearances)? I wouldn't necessarily expect raw/corrected data to appear the same; it really, really depends on potential error sources. Real data is not necessarily easy or cheap to acquire in large, distributed systems. Physical systems can have constant, semi-constant, or otherwise compensatable errors. What is bad on its own can be corrected for based on the type of error (if known); however, some errors cannot be recovered or compensated for, and those samples cannot be used.

If it was decades before the errors in the record-keeping were discovered, then the real issue is the data prior to the correction of the record-keeping methods. To use those errors knowingly not only taints the actual study being performed, but also commits "intellectual dishonesty" or "intellectual fraud," which in this case seems to have been committed by the recording/reporting agencies (not the scientists themselves, unless they knew the data was incorrect and used it anyway). Ever heard of the concept "garbage in, garbage out"?

Yes, I am extremely familiar with GIGO. That's generally why you perform this exercise in the first place: to determine whether you can draw any conclusions from the data at all, and to what extent.

Cerberus.Pleebo said: »
We're all waiting.

I'm not. I lost interest a long time ago. But we can scrub the data so it still looks like everyone is interested!

Bahamut.Milamber said: »
You can correct different sections of data in different ways, for different reasons; that is essential when you have different sources of different types. You are generally required to take the worst error margin of sampling, but that can differ based on how much that dataset's error contributes to the overall sample. Yes, I am extremely familiar with GIGO. That's generally why you perform this exercise in the first place: to determine whether you can draw any conclusions from the data at all, and to what extent.

How much data would have to be altered before the data set is considered useless? I think that a 5% error rate would signify issues with the record-keeping of the data, and anything above that would throw the feasibility of the entire set into question. A 6% error rate should automatically disqualify the data from use. Of course, the data we are talking about here has an error rate greater than 6%. What are your thoughts on that?
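Milamber's point about taking the worst error margin versus weighting by each source's contribution can be shown in a few lines. The source names, sample counts, and margins below are invented for illustration:

```python
# Sketch of the margin-weighting idea: a conservative merge takes the worst
# per-source error margin; a weighted merge scales each source's margin by its
# share of the combined sample. Source names, counts, and margins are invented.

sources = [
    {"name": "airport stations", "n": 900, "margin": 0.2},
    {"name": "ship logs",        "n": 100, "margin": 1.5},
]

total = sum(s["n"] for s in sources)
worst = max(s["margin"] for s in sources)
weighted = sum(s["margin"] * s["n"] / total for s in sources)

print(f"worst-case margin: {worst:.2f}")   # 1.50
print(f"weighted margin:   {weighted:.2f}")  # 0.33
```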
Asura.Kingnobody said: »
Bahamut.Milamber said: »
You can correct different sections of data in different ways, for different reasons; that is essential when you have different sources of different types. You are generally required to take the worst error margin of sampling, but that can differ based on how much that dataset's error contributes to the overall sample. Yes, I am extremely familiar with GIGO. That's generally why you perform this exercise in the first place: to determine whether you can draw any conclusions from the data at all, and to what extent.

How much data would have to be altered before the data set is considered useless? I think that a 5% error rate would signify issues with the record-keeping of the data, and anything above that would throw the feasibility of the entire set into question. A 6% error rate should automatically disqualify the data from use. Of course, the data we are talking about here has an error rate greater than 6%. What are your thoughts on that?

You seem to have unrealistic expectations of data accuracy spanning the types and timeframes that climate data does. Also very little understanding of how data is analyzed.

Odin.Jassik said: »
You seem to have unrealistic expectations of data accuracy spanning the types and timeframes that climate data does.

You know, something that is recorded at every airport in the country on a daily basis (at the very least). Oh, I forgot, there's "climate speech" included in that data that only "climate scientists" would know about.

Odin.Jassik said: »
Also very little understanding of how data is analyzed.

The simple answer to that question is yes. Yes, it is more complex than a weather station at the *** airport. Now stop trying to steer the conversation away from my questions.
Idc what it's called, IM TIRED OF THESE STORMS!!!!
Ragnarok.Yatenkou said: »
Idc what it's called, IM TIRED OF THESE STORMS!!!!

Shame on you!

Ragnarok.Yatenkou said: »
Idc what it's called, IM TIRED OF THESE STORMS!!!!

These storms are at least cooling things down a bit, so I don't mind them so much (except when they kill my internet connection). I'm just tired of this insane heat.