
cabrerad

BB Participant
  • Posts

    221
  • Joined

  • Last visited

Everything posted by cabrerad

  1. Yes, the NAS found some flaws in the study; that is the progression of science. They did, however, back up the main point of the study, which is why I quoted the summary findings not once, but twice. Well, the statement is the leading paragraph from which you drew your quote, and it is pretty important in that it contradicts what you are trying to say about the NAS conclusions. I submit this statement yet again as the findings of the NAS, and your quotation needs to be taken within this context for objectivity. I am addressing the science here and explaining "the flaws". If you want to keep calling the study flawed and not actually look into it, fine. But then you are not discussing the science, nor have you attempted to address my commentary on the issue other than saying it is dismissive. Well, that's funny, I have a life too... something we have in common. Actually, I suggested actually reading the studies (several of which I have) because a lot of them use much better proxies (the 98 proxies are almost 10 years old) and better models. That's a far cry from ignoring things. In fact, what you call ignoring is my refusal to just assume all those studies are automatically flawed.
  2. Wegman's main point was that MBH98 used a non-center-weighted PCA. What you call a quick blanket dismissal is an analysis of PCA, an explanation of centering conventions, and why MBH98 used the one they did. For everyone's benefit, here are the relevant excerpts from my posts. First, post 52 (I left out the explanation of PCA that immediately precedes this statement, for clarity): "You could also look at an integer or interval of a priori interest to your research to see if there is deviation from that value. This is what MBH did in setting the mean over the 20th century to zero (to catch deviations from this period). Here is where MM had a bone to pick. They argued that the entire period should be set with mean 0, not just the 20th century. In the MM analysis, the "hockey stick" component came up as PC1 in the MBH analysis and PC4 in the MM analysis. MM decided to exclude PC4 (and 3 and 5) and called them artifacts. This was a mistake. Subsequent analysis using MM's methodology has found all three excluded components to be statistically significant, so they should not have thrown them out. In addition, the MM analysis does not fit the entire data set when only the first two PCs are considered. When PCs 3, 4, and 5 are included, it matches both the analysis of MBH and the data set. The only way the MM analysis fits the data set is to remove two of the tree ring data sets. You cannot omit data sets from an analysis without darn good reason to do so (statistical outliers are an example). This is an important check on the fit of the data, and MM fall short here also." If you are familiar with Wegman's testimony, you know that this was also a criticism leveled by Wegman. Further down post 52: "Now on to Wegman: I finally got a hold of what his critique of the MBH analysis was. It also focused on the PCA analysis, specifically on centering of the PCA analysis. He was basically asked if the MM criticism had statistical merit. He stated that it did.
That's not really much of a revelation; no one actually contests that centering affects a PCA analysis. Here is a key point: as I described above, the method of centering the data did not affect the shape or statistical significance of the hockey stick in explaining the data. The only thing it did was switch around the principal components (all of which were statistically significant anyway). So centering had no effect on the end result. Moreover, the centering chosen by MBH had relevance to what they wanted to test. Wegman's criticism therefore seems to indicate a lack of understanding of the hypothesis being tested. Wegman made the point that there should be more talk between statisticians and climate scientists due to these questions, and yes, there probably should be. The key is working together, because a statistician alone is blind to what may be an important variable or a priori assumption in a given field." So that's my first comment that addresses Wegman. Second, post 66: "Please refer back to post 52 where I address their critique of the statistical methods and explain what PCA is and why the MM accusation falls short. In addition to my own explanation, I also provide a peer reviewed publication that addresses the criticism." (referring to your statement about flawed science by MBH) "As this statement is only backed up by one paper (addressed by the literature and which I expounded upon) and a very narrow interpretation of Wegman's comments, I counter there is not sufficient evidence for any such claim." Again, this addresses Wegman as well. "To counter the MM claim that using non-centered PCA is bad math, you can reference Principal Component Analysis by Ian Jolliffe. Principal Component Analysis, I.T. Jolliffe" And yet again, this applies to Wegman. Here I cite a widely used textbook on PCA. Post 81: "Wegman was only asked a very narrow question. Basically he was asked if the MM critique had statistical merit. He agreed it did.
Yes, centering conventions affect the ordering of PCs; no one actually disputes that. As I have outlined previously, however, MBH had a specific reason for using their non-centered PCA (that is, to specifically compare the past 100 years to the current late 20th century trend). It would have been a mistake if they had just used a non-centered approach with no a priori reason, but they did have one." I counter that it is not accurate to say I made a quick blanket dismissal. I actually took the time to explain PCA and what the controversy was about. Where is your evidence supporting Wegman's conclusion? Citations? Analysis? Again, I am going to try to steer this conversation back to a discussion of the evidence. If you have a problem with center-weighted vs. non-center-weighted PCA, for example, let's discuss it.
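To make the centering point concrete, here is a small toy sketch (my own illustration in Python, not the MBH98 or MM code; the data set and the late-period "signal" are made up) showing how the choice of centering period redistributes variance among the PCs without changing the data the components describe:

```python
# Toy demonstration: PCA with full-period centering vs. centering on a
# sub-period of interest. Hypothetical data, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
# Fake "proxy" matrix: 100 time steps x 5 series, with an upward trend
# added to series 0 over the last 20 steps (a hockey-stick-like signal).
X = rng.normal(size=(100, 5))
X[80:, 0] += np.linspace(0.0, 3.0, 20)

def pca_singular_values(data, center_rows):
    """PCA after subtracting the mean computed over center_rows only."""
    centered = data - data[center_rows].mean(axis=0)
    # Singular values (squared) give the variance carried by each component.
    return np.linalg.svd(centered, compute_uv=False)

s_full = pca_singular_values(X, slice(None))      # conventional centering
s_late = pca_singular_values(X, slice(80, None))  # center on late period only

# Both conventions describe the same data; only how the variance is
# apportioned among (and therefore how we order) the PCs differs.
print(np.round(s_full, 2))
print(np.round(s_late, 2))
```

This is only a cartoon of the issue, of course; the real MBH98/MM dispute involves far larger proxy networks and additional reconstruction steps.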
  3. The NAS panel did not lop off 600 years; they said there was less confidence in that period. Your paraphrasing is not accurate. Again, the paraphrasing is not accurate here. Phrases such as "categorically disagreed" do not mesh with what the NAS reported. If you disagree with me, by all means provide supporting information. Again, you are asking questions I have already addressed. The important assertion here is the one I quoted in my last post from the summary of the NAS statement. It also directly contradicts your "categorically disagreed" statement. I'll ask again: any reason you left this paragraph, which immediately preceded the one you quoted, out of your quote? You are casting aspersions here. Please note I have refrained from doing so myself. I have listed a lot of papers, scientific explanations, and interpretations, the vast majority of which you have not addressed. Your commentary here is veering from the objective discussion of the science involved and is provided with no citations, documents, or papers to back up your claims. You did say you wanted to stick to the science.
  4. First, I imagine you would agree that Michael Mann, Raymond Bradley and Malcolm Hughes should have the right to defend their own work. As I have mentioned previously, these other papers should be judged based on the strength of the research, not discounted simply because of who wrote them. I could have simply discounted McIntyre's paper because I thought it was "curious" that he is a mining executive, but I didn't. I took the time to address his conclusions and critiques of the MBH98 paper. Second, it is not true that all the papers validating MBH98 have Mann, Bradley, Hughes or any combination of the three as authors. Here's an example by Wahl and Ammann that specifically addresses MBH98. Similarly, the Technical Commentary on von Storch et al. was not authored by Mann, Bradley or Hughes. As far as I am aware it is a relative term and as such (in this case) needs to be taken within the context of the evidence it is associated with. You are overreaching here, taking a whole suite of statements made by the panel and saying they all agree with McIntyre and McKitrick's assertions. Here is exactly what the panel said in their summary, which I hyperlinked to in post 72 (I suggest paying particular attention to point 3):
  5. I think it relevant to point out that von Storch and Stehr acknowledge the following: Do some media outlets and groups play up the effects of global warming? Probably. But there are clearly also other groups and media outlets on the other end of the spectrum. But that's another discussion; back to the science. Let's continue with von Storch and his science: I finally read the paper, and two technical comments that resulted from its publication (both in Science). In a nutshell, von Storch et al. set out to test the robustness of the MBH98 model. They created a simulation based on MBH that introduced artificial white noise at select points into the generated temperature histories (to simulate non-climatic events). They then tested whether the introduction of this noise added more variability to the MBH model. They conclude that their addition of this white noise to the record results in a model with more variability than that presented by MBH. The inference is that the MBH model underestimates centennial and multi-decadal variations. Here is a link to the von Storch article. http://stephenschneider.stanford.edu/Publi...rchEtAl2004.pdf This publication was followed by a technical comment in Science by Wahl et al. (the summary of which is excerpted here): Wahl ER; Ritson DM; Ammann CM Environmental Studies and Geology Division, Science Center, Alfred University, Alfred, NY 14802, USA. wahle@alfred.edu So basically, von Storch et al. did not construct the MBH model correctly. Not only does this application not actually simulate the MBH model (which is the whole point of the paper), it is not even a proper one for climate field reconstructions because it inherently introduces error. In a response in Science, von Storch et al.
briefly admitted to the above, but now argue that the implementation of the reconstruction procedure does not affect their conclusions when they introduce red noise (basically noise where the value of one year is correlated with the noise from an adjacent year). Response by von Storch et al. http://www.sciencemag.org/cgi/reprint/sci;312/5773/529c.pdf Now von Storch is comparing apples to oranges in going from introducing white noise to introducing red. This is pretty misleading. You can
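For anyone unfamiliar with the white vs. red noise distinction, here is a small illustrative sketch (my own, in Python; the length and correlation parameter are arbitrary). Red noise is commonly modeled as an AR(1) process, in which each year's noise carries over a fraction of the previous year's:

```python
# White noise vs. red (AR(1)) noise: the latter has year-to-year memory.
import numpy as np

rng = np.random.default_rng(1)
n, phi = 5000, 0.7                     # phi: year-to-year persistence

white = rng.normal(size=n)             # independent draws each "year"
red = np.empty(n)
red[0] = white[0]
for t in range(1, n):
    red[t] = phi * red[t - 1] + white[t]   # AR(1) recursion

def lag1_corr(x):
    """Correlation between consecutive values."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# White noise has near-zero lag-1 correlation; red noise sits near phi.
print(round(lag1_corr(white), 2), round(lag1_corr(red), 2))
```

This is just the definitional difference; which noise model better represents proxy error is exactly what the technical comments argue about.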
  6. MBH98 actually describes this in detail. The determination of which PCA components to retain in the MBH analysis was accomplished by applying Preisendorfer's Rule N (Preisendorfer et al. 1981). Basically, each PC from the MBH data set is compared to PCs drawn from multiple randomly generated data sets (that contain the same properties as the actual data set). This is called a Monte Carlo simulation (due to the random nature of the data). If the PCs for the real data set explain more of the variation than 99% of the corresponding randomly generated PCs, they are retained. MBH obtained 2 PC
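The Rule N selection described above can be sketched in a few lines (my own toy illustration in Python, not the MBH98 implementation; the data set and the details of the retention criterion are simplified):

```python
# Toy Rule N: keep a PC only if its eigenvalue beats the 99th percentile
# of eigenvalues from random surrogate data of the same shape.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_var, n_trials = 200, 8, 500

# Fake data with one genuine shared signal plus independent noise.
signal = rng.normal(size=n_obs)
X = np.outer(signal, rng.normal(size=n_var)) + rng.normal(size=(n_obs, n_var))

def eigvals(data):
    """Descending eigenvalues (squared singular values) of centered data."""
    c = data - data.mean(axis=0)
    return np.sort(np.linalg.svd(c, compute_uv=False))[::-1] ** 2

real = eigvals(X)
surrogate = np.array([eigvals(rng.normal(size=(n_obs, n_var)))
                      for _ in range(n_trials)])
threshold = np.percentile(surrogate, 99, axis=0)   # 99th percentile per rank

keep = real > threshold    # boolean mask: which PCs survive Rule N
print(keep)                # the leading PC should clear the bar here
```

With data constructed this way, only the components carrying real shared structure should exceed the random-data threshold, which is the point of the rule.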
  7. No....because you kept leaving the goal open in your quest for glory!
  8. I don't think this was mentioned previously, but these guys are serious jumpers. Hopefully you have a hood or something that can prevent a mishap. I lost my only Yellow Head that way when I had my first tank (it was covered with a glass canopy, and the fish managed to jump out from the space between the glass and the hang-on power filter). They are really cool fish... I love watching them while diving in the Caribbean.
  9. Nope, that's new (pretty sure, anyway). Some of the assertions in the text are contrary to a lot of research studies out there, such as "Observational evidence does not support today's computer climate models", "If, back in the mid-1990s, we knew what we know today about climate, Kyoto would almost certainly not exist, because we would have concluded it was not necessary" and "Global climate changes all the time due to natural causes and the human impact still remains impossible to distinguish from this natural "noise."" I gave it a read and looked down the list of scientists. The only ones I recognized were Fred Singer (he is also critical of the relationship between CFCs and ozone, UV light and skin cancer, and tobacco and lung cancer; one heck of a 'denier', this guy) and Richard Lindzen (a legitimate researcher in the field), who have been discussed previously here. Here is my initial opinion: there are enough potential problems with aspects of the Kyoto Protocol to discuss without having to resort to commentary that can't be supported.
  10. Not a problem. I'll give Von Storch a read soon.. but it's soccer tonight...gotta love those 10:40 pm indoor games...
  11. No, I didn't forget; I was addressing MM first and want to read the paper by von Storch before replying. I'm sorry, that just isn't true and is simply conjecture on your part. The journals take exceptional care to keep submitting authors who also happen to be editors of a journal out of the review process. In fact, it would most likely be more difficult for an editor to get a paper into the journal, due to the other editors wanting to avoid a perception of bias. Reviewers are selected from a group of the author's peers (i.e. there are lots and lots of reviewers for every journal) by the editor (which in this case is clearly not Mann). It is far from easy to have a paper accepted by Nature. Also, I can't find any reference to Michael Mann ever being an editor for Nature in the magazine itself (I'm looking at the editorial table of the current issue right now) or online. Also, please note that the National Academy of Sciences had this to say about MBH: opening statement an excerpt:
  12. This article also mentions the effects of CO2 and the critiques of a few scientists. And it is quite a stretch to say global warming has finally been explained due to one paper coming out when there are a heck of a lot of other papers out there (hint.. hint.. read the previous threads). And this article is a pretty strong opinion piece with an inflammatory title and should be taken as such. The first sentence alone indicates an elementary lack of understanding of global climate change (unless the author knows this and just wants to fan the flames, so to speak). With that said, current climate models do take into account solar activity
  13. Please refer back to post 52 where I address their critique of the statistical methods and explain what PCA is and why the MM accusation falls short. In addition to my own explanation, I also provide a peer reviewed publication that addresses the criticism. Please note also that there seems to be only one peer reviewed publication from MM (I am assuming peer reviewed, as I have honestly never heard of Energy and Environment). Part of their critique is also based on another manuscript that was submitted as a Brief Communication to Nature. The manuscript was sent out to reviewers, who recommended it be rejected. MM claim it was rejected due to lack of space, which is not even listed as a reason for rejection by Nature. On the other hand, Mann has several peer reviewed articles about this issue. This leads one to question: if these critiques are so valid, why have MM not been able to publish them? As for the claim that they lacked proper data archiving to comply with publication standards, this is untrue, as evidenced by the fact that they published in many journals (therefore meeting their standards). Please reference: Corrigendum: Global-scale temperature patterns and climate forcing over the past six centuries, Michael E. Mann, Raymond S. Bradley and Malcolm K. Hughes, Nature 430, 105 (1 July 2004) doi:10.1038/nature02478, for the most updated information. Not true. It shows several different studies have indicated anthropogenic effects on global temperature in the late 20th century. This is very cogent to the discussion at hand. As this statement is only backed up by one paper (addressed by the literature and which I expounded upon) and a very narrow interpretation of Wegman's comments, I counter there is not sufficient evidence for any such claim. Again, an interpretation that lacks sufficient support and is also not accurate.
The MM critique of the centering of the PCA analysis only provides an alternate interpretation of the centering that does not take into account the data being looked at, and it also neglects to address selection criteria for the principal components. To counter the MM claim that using non-centered PCA is bad math, you can reference Principal Component Analysis by Ian Jolliffe. Principal Component Analysis, I.T. Jolliffe. I'm not sure MM can claim to have published a textbook on the statistics. Agreed, and ground surface temperatures, continental borehole data, hemispheric surface air temperature trends, ice cores, corals, etc. These many studies can't simply be discounted by saying they use the same proxies that MBH used.
  14. The citation was intended to show a strong rebuttal to the critiques of MM, and the authors (not surprisingly) include Mann, Bradley and Hughes [side note: just because a given author is on a paper is not sufficient reason to doubt the science contained within]. Here are some additional peer reviewed papers that yielded results supporting anthropogenic effects on increased temperature in the late 20th century (many of which do not include Mann, Bradley or Hughes):
Bauer, E., M. Claussen, and V. Brovkin, Assessing climate forcings of the earth system for the past millennium, Geophys. Res. Lett., 30(6), doi:10.1029/2002GL016639, 2003.
Bertrand, C., M.F. Loutre, M. Crucifix, and A. Berger, Climate of the last millennium: a sensitivity study, Tellus, 54(A), 221-244, 2002.
Briffa, K.R., T.J. Osborn, F.H. Schweingruber, I.C. Harris, P.D. Jones, S.G. Shiyatov and E.A. Vaganov, Low-frequency temperature variations from a northern tree-ring density network, J. Geophys. Res., 106, 2929-2941, 2001.
Brovkin, V., A. Ganopolski, M. Claussen, C. Kubatzki, and V. Petoukhov, Modelling climate response to changes in land use, Global Ecology and Biogeography, 8(6), 509-517, 1999.
Crowley, T.J., Causes of climate change over the past 1000 years, Science, 289, 270-277, 2000.
Crowley, T.J., and T. Lowery, How warm was the Medieval Warm Period?, Ambio, 29, 51-54, 2000.
Esper, J., E.R. Cook and F.H. Schweingruber, Low-frequency signals in long tree-line chronologies for reconstructing past temperature variability, Science, 295, 2250-2253, 2002.
Gerber, S., F. Joos, P. Br
  15. #2 Part two of deniers: "Warming is real -- and has benefits -- The Deniers Part II" This part of the article profiles another "denier" (Dr. Richard Tol), who released a critique of the Stern Review of the Economics of Climate Change (Stern et al., 2006). Here is a link to the Stern Report: Stern Report. Here is a link to Dr. Richard Tol's critique: Tol essay. From my read of this essay, Dr. Tol contends with two points made in the Stern essay:
  16. Sigh... in case it was not noticed, post 52 is a response to some of the literature out there...
  17. Good enough. I understand the difficulty in finding public copies of some of these papers. Article link-Canada, National Post Some extra commentary on point #1 ("Statistics Needed") raised in the series quoted above. The original paper (published in the social science journal Energy and Environment) by McIntyre and McKitrick (let's call them MM), critical of the "hockey stick" graph in a Nature paper by Mann, Bradley and Hughes (let's call them MBH), questions the analysis used to come up with said graph. The analysis in question is called a Principal Components Analysis (PCA). In a nutshell, a PCA is a data transformation that reduces multidimensional data sets into components (the principal components) that describe the variance in the data. It is useful in that it allows you to focus on only those variables (principal components, PC1 through whatever) that explain most of the variance in the data (say you pick PC1-5 because PC6-12 are not significant and don't explain the data well). To assist in visualizing this analysis (this gets a wee bit technical, but I am including it for completeness), think of a bunch of points on a scatter (XY, both variables random) graph. Say you need to draw a line that best fits the points on the graph. Now how do you describe your line and the confidence (think error bars) in this line? You can't do it with error bars/confidence intervals, because both variables are random and this violates the assumptions of linear regression (it would also be difficult to decide which variable to regress on the other). What you can do is define a confidence region, which is described by an ellipse. This ellipse describes the variation in both variables (unlike a regression) and has both a major axis (the line) and a minor axis (perpendicular to that line) that best fit the data (these are called eigenvectors).
The shape of the ellipse describes the variation about the line and is represented by the ratio of the lengths of the long and short axes (the axis lengths correspond to eigenvalues, which represent variance). A good correlation has a really long principal axis and a short minor axis. A bad correlation would have axes close in length (basically a circle) and would not explain the variability in the data well at all. OK, now with that done, imagine a data set in multidimensional space (more than two variables) and you are looking at PCA. Now, however, you are calculating a principal axis through a hyperellipsoid (PC1). Then you look for the second major axis through that hyperellipsoid and you have PC2, and so on and so forth. Basically, in PCA you take your data set and normalize the mean of the data set to a given value, say 0. The analysis looks for deviations from this mean, and the principal components are variables in the data set that best explain this deviation. You could also look at an integer or interval of a priori interest to your research to see if there is deviation from that value. This is what MBH did in setting the mean over the 20th century to zero (to catch deviations from this period). Here is where MM had a bone to pick. They argued that the entire period should be set with mean 0, not just the 20th century. In the MM analysis, the "hockey stick" component came up as PC1 in the MBH analysis and PC4 in the MM analysis. MM decided to exclude PC4 (and 3 and 5) and called them artifacts. This was a mistake. Subsequent analysis using MM's methodology has found all three excluded components to be statistically significant, so they should not have thrown them out. In addition, the MM analysis does not fit the entire data set when only the first two PCs are considered. When PCs 3, 4, and 5 are included, it matches both the analysis of MBH and the data set. The only way the MM analysis fits the data set is to remove two of the tree ring data sets.
You cannot omit data sets from an analysis without darn good reason to do so (statistical outliers are an example). This is an important check on the fit of the data, and MM fall short here also. Here is another paper, more recent than those of MM, that yielded results similar to the MBH paper while accounting for some of the criticisms by MM and using a second type of analysis that does not rely on PCA of the data sets in question: Rutherford et al. 2005. Pages 13 and 32 discuss some of the claims made by MM. Now on to Wegman: I finally got a hold of what his critique of the MBH analysis was. It also focused on the PCA analysis, specifically on centering of the PCA analysis. He was basically asked if the MM criticism had statistical merit. He stated that it did. That's not really much of a revelation; no one actually contests that centering affects a PCA analysis. Here is a key point: as I described above, the method of centering the data did not affect the shape or statistical significance of the hockey stick in explaining the data. The only thing it did was switch around the principal components (all of which were statistically significant anyway). So centering had no effect on the end result. Moreover, the centering chosen by MBH had relevance to what they wanted to test. Wegman's criticism therefore seems to indicate a lack of understanding of the hypothesis being tested. Wegman made the point that there should be more talk between statisticians and climate scientists due to these questions, and yes, there probably should be. The key is working together, because a statistician alone is blind to what may be an important variable or a priori assumption in a given field.
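The ellipse picture above has a direct numerical counterpart: the eigenvectors of the covariance matrix are the ellipse axes, and the eigenvalues are the variances along them. A minimal two-variable sketch (my own, with made-up data):

```python
# Two correlated variables: the covariance matrix's eigen-decomposition
# gives the confidence-ellipse axes (eigenvectors) and their variances
# (eigenvalues). Illustrative data only.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=1000)
y = 0.9 * x + 0.3 * rng.normal(size=1000)   # strongly correlated pair

cov = np.cov(np.stack([x, y]))
evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
ratio = evals[-1] / evals[0]                # long-axis vs. short-axis variance

# A good correlation means a long, thin ellipse: a large eigenvalue ratio.
print(round(float(ratio), 1))
```

With weakly correlated data the two eigenvalues would be comparable and the ratio near 1, which is the "basically a circle" case described above.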
  18. A quick note for this discussion. This is not a comment about the research described above, but a commentary on abstracts. Abstracts are interesting in that they provide insight into the newest, breaking research. They should not, however, be given the same weight as journal articles, as they are "works in progress" and are almost never subject to peer review. The research behind many abstracts never ends up getting published.
  19. There is a lot of commentary on this thread now. I'll only address the above article in this particular reply and comment on other things in subsequent responses. My read on the first few of the 10 points made in the article: 1. The first "denier" is statistician Edward Wegman, who convened a three person panel at the behest of US Energy and Commerce Committee chair Joe Barton. Wegman analyzed the statistics used by Mann et al. (the group that first came up with the "hockey stick" graph of increasing temperature). To really get at what he and his panel concluded, I had to look at some outside sources as well as the article cited. The article is a very mixed combination of paraphrasing and actual quotes, which makes it difficult to tell what Wegman actually said and what is the interpretation of the author. With that said, what I can glean is that Wegman's panel concluded that Mann et al. made an error in the analysis of the data resulting in the graph (and therefore in the interpretation that the 1990s and 1998 were the hottest decade and year of the past millennium). Not mentioned by the article is the fact that another group (the National Research Council) also reviewed Mann's work. This group, led by Gerald North, said the data could not support the claim that a single year or decade showed increased warming, but did support the claim that the past few decades were the warmest in the past millennium (the Wegman panel did not address this point). The hockey stick has since been superseded by other more robust models, so the claim that the statistics were flawed does not in itself discount all research providing supporting evidence for global warming. The extended claims made in the article suggesting that most climate science is questionable because the authors don't include members of the American Statistical Association are a bit silly. Almost all fields of experimental science include statistics in the analysis of the data.
Training in statistics is included in one's education leading up to and including graduate study. There are many biologists in the fields I have been involved with who are extremely well versed in statistics as a necessity of the fields in which they did their research (fisheries, oceanography and ecology, for example). You don't need to have a card-carrying statistician on your team to publish a paper. With that said, it is also not a bad idea to pay very close attention to your statistics and consider suggestions made about the statistical analyses, especially in those fields where they may be more complicated. There are also many other fields where the statistics used are sometimes not the most appropriate ones or errors are made in their use (I have seen very basic violations of parametric analysis in certain studies). This is not something unique to climate science. In addition, there is often debate about certain statistical methods even among statisticians, with each technique having its strengths and weaknesses. What I have noticed is that in those fields that need more complex statistics, you are going to find researchers well versed in statistics by necessity. In my opinion, it is a good thing this analysis happened, and it should be used to improve the science practiced. II. I
  20. Regarding Chris Landsea, this does seem to me to be an unfortunate situation resulting from the commentary made by one scientist associated with the IPCC and the fact that these comments were attributed to the entire body. Clearly, Dr. Trenberth overstepped the current state of the science in suggesting that the 2004 hurricane season may be a "harbinger of future activity". It seems that the IPCC may need to rein in the statements made by those scientists who take part in non-IPCC events (so that their statements are not taken as an official IPCC position). Nevertheless, I don't think the comments made by one person negate the body of the work of the IPCC. Also, Landsea (from what I have been able to find) has a record of questioning publications indicating increased hurricane intensity since 1950 (including a critique of a paper by Emanuel et al., Nature, Aug 4, 2005). The fact that he was asked to serve on the IPCC contradicts the notion of some that it loads itself with sycophants. See a summary of this below. link So, IMO, while Trenberth's statements overstepped the science (and were ill conceived given his IPCC membership), he also did not pull them out of thin air; he just did not accurately represent the current "state of the art". It is really unfortunate that, due to this incident and its aftermath, Dr. Landsea chose to withdraw from the current IPCC, as his contribution to the field seems great. The WMO and IPCC statements made about hurricanes reflect the uncertainty in the science connecting global warming to hurricanes, so the concern that Trenberth's influence would result in an inaccurate document is not borne out. WMO statement (note the multiple citations of Landsea's papers) WMO statement 2006
  21. Tri Bui, you should be OK. The itching is a normal response and will eventually go away. You should keep an eye out for any inflammation or redness that does not go away, as this could signal an infection (in which case you should consult a doctor). I have attached a link with suggested treatment for the itching. fireworm contact treatment David
  22. VicSkimmer, analysis of ice cores and sediment cores, among other things, can provide data from extremely distant time periods. I have seen many of these papers in various journals, including Science and Nature. Also, there are obviously various scales of climate change and flux. The shorter time periods are also harder to predict. There are now some very robust models that have accurately modeled climate variation when provided the raw data. While there is some uncertainty in predicting short scale temperature changes, there is a large majority consensus that the models predicting temperature increase over the next century are accurate. How much this change will be is indeed uncertain. As for predictions of hurricanes, I believe the general idea is that global warming results in higher surface water temperature in regions prone to hurricanes. This in turn provides more energy for the formation of more intense hurricanes. As for the apparent hyperbole about 2006, I don
  23. When looking at these point-counterpoint comparisons, I think it is useful to keep in mind the source of the information. This is definitely not an even matchup. Intergovernmental Panel on Climate Change: a panel made up of hundreds of scientists, government officials and other experts that assesses peer reviewed literature (i.e. research that has been completed, submitted to a journal and accepted for publication based on its scientific merit) on climate change, as well as industry reports and traditional practices. IPCC Timothy Ball: he has a PhD and taught at the university level. Dr. Ball states he has "an extensive background in climatology, especially the reconstruction of past climates and the impact of climate change on human history and the human condition." I did some searches on him, and he appears to have only 5 scientific publications, two of which seem to be his master's thesis and doctoral dissertation. That leaves only three peer reviewed scientific articles, published in 1984, 1986 and 1994 (and this last paper is in an economics and policy journal, not actual climate change research). His papers have been cited a whopping 4 times. This is an exceptionally poor research record considering his varying claims to have been a professor at Winnipeg for 20+ years. Also, there has been a heck of a lot of research on climate change since 1994. He is far from credible. Also, Dr. Ball needs to go back to school, as indicated by his statement: "A scientist makes certain assumptions and then produces a theory which is only as valid as the assumptions." Actually, scientists propose hypotheses to test in their research. A theory (in scientific usage) is a far more rigorous term describing a framework supported by a large amount of research by many different scientists. Bandit: US thinktank offering cash to dispute UN climate panel: report VicSkimmr: current climate science research and modeling takes into account natural cyclical variation in temperature.
The current temperature changes and projections cannot be explained by this natural variation. The degree of anthropogenic impact is, however, not completely clear. I wish it were true that we could not have such a great impact on the earth, but several examples demonstrate that we can, ranging from the extinction of the passenger pigeon (it was not thought possible to eradicate them, as there were once 5 billion) to the hole in the ozone layer (Dr. Ball apparently also says CFCs don't do anything to ozone, though it is a well described chemical process). As for what to do about it, this is indeed a complicated question where the short term economic cost of dealing with the problem now should be compared/contrasted with the cost of dealing with this problem later, when the costs are projected to be far higher... David
  24. Grav, I'll frag some zoos for ya. The birdsnest sounds great. Quazi, I have a pavona for you. David