A hat tip to David Zaring at The Conglomerate for this post, which covers some interesting ground regarding "risk" issues, including regulatory issues as well as issues regarding product liability and other tort claims. One part of the post also covers a new book on the safety – or lack of safety – of imports into the US.

Of perhaps greatest interest, the post alerted me to an upcoming (Dec. 6-9) seminar in Baltimore by The Society for Risk Analysis. The conference agenda is here. If you see an interesting item on the agenda, you can click through links on the left side to see detailed abstracts of the presentations. Some are of potential global note, as they address issues regarding the use of "sponsored" research, the risks of nanoparticles (some are said to be more toxic than asbestos fibers in some settings), and whether formaldehyde is a carcinogen. To whet your interest, pasted below is the text of one abstract regarding sponsored research:

“M2-E 10:30 AM-Noon Research Funding and Scientific Integrity: Conflicts and Criteria

M2-E.1 10:30 Proposed consensus criteria for assessing the reliability of scientific work. Conrad, Jr. JW*; Conrad Law & Policy Counsel jamie@conradcounsel.com

Abstract: Ultimately, the merits of scientific research findings are judged by the extent to which they are reproduced by other scientists. Such replication can take years, and what constitutes replication in a given case may be disputable for some time. Consequently, the scientific community has developed a variety of shorter-term approaches for assessing scientific work. Some of these approaches are designed to evaluate the validity and significance of the work, particularly in comparison to other studies addressing the same question. (These approaches are frequently termed “weight of evidence” approaches.) Other approaches are addressed to the more limited, but still vitally important, task of evaluating the reliability of the work against concerns that the results may be the product of error or may have been consciously or unconsciously influenced by conflicting interests or biases of the investigator. Some of these latter approaches have become well-established (e.g., peer review, disclosure of competing interests); others are not yet widely accepted (e.g., public registries of proposed research, free access to underlying data). This presentation will survey the approaches being suggested and will propose a set of criteria that, if they became conventionally accepted, would allow all concerned to have confidence in the reliability of scientific work regardless of who conducted or funded it.”