-
Mashup Score: 2
Now here’s a tour de force for ya | Statistical Modeling, Causal Inference, and Social Science - 20 day(s) ago
In social science, we’ll study some topic, then move on to the next thing. For example, Yotam and I did this project on social penumbras and political attitudes, we designed a study, collected data, analyzed the data, wrote it up, eventually it was published — the whole thing took years! and we were very happy with the results — and then we moved on. The idea is that other people will pick up the string. There were lots of little concerns, issues of measurement, causal identification, generalization, …
Source: statmodeling.stat.columbia.edu
-
Mashup Score: 1
Intelligence is whatever machines cannot (yet) do | Statistical Modeling, Causal Inference, and Social Science - 24 day(s) ago
I had dinner a few nights ago with Andrew’s former postdoc Aleks Jakulin, who left the green fields of academia for entrepreneurship ages ago. Aleks was telling me he was impressed by the new LLMs, but then asserted that they’re clearly not intelligent. This reminded me of the old saw in AI that “AI is whatever a machine can’t do.” In the end, the definition of “intelligent” is a matter of semantics. Semantics is defined by conventional usage, not by fiat (the exception seems to be an astronomical …
Source: statmodeling.stat.columbia.edu
-
Mashup Score: 0
This letter by Thorlund et al. published in the New England Journal of Medicine is rather amusing. It’s unclear to me what their point is, other than the fact that they find the published results for the new COVID drug molnupiravir “statistically implausible.” Background: The pharma company Merck got very promising results for molnupiravir at their interim analysis (~50% reduction in hospitalisation/death) but less promising results at their final analysis (30% reduction). Thorlund et al. were surprised …
Source: statmodeling.stat.columbia.edu
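A note on the arithmetic behind the “statistically implausible” claim in the molnupiravir post above: if the interim analysis showed roughly a 50% reduction and the full trial only 30%, then the patients enrolled after the interim must have shown little or no benefit, or even harm. The sketch below works through that implication with made-up event counts; the numbers are hypothetical, chosen only to roughly match the ~50% and ~30% figures quoted in the excerpt, not Merck’s actual data.

```python
# Hypothetical illustration of the interim-vs-final arithmetic.
# Counts below are invented for the sketch, not the trial's data.

def relative_risk(events_trt, n_trt, events_ctl, n_ctl):
    """Risk in the treatment arm divided by risk in the control arm."""
    return (events_trt / n_trt) / (events_ctl / n_ctl)

# Interim analysis (hypothetical): ~50% relative risk reduction
interim = dict(events_trt=28, n_trt=385, events_ctl=53, n_ctl=385)

# Final analysis (hypothetical): ~30% relative risk reduction overall
final = dict(events_trt=48, n_trt=710, events_ctl=68, n_ctl=710)

# Second enrollment period = final minus interim, arm by arm
second = dict(
    events_trt=final["events_trt"] - interim["events_trt"],
    n_trt=final["n_trt"] - interim["n_trt"],
    events_ctl=final["events_ctl"] - interim["events_ctl"],
    n_ctl=final["n_ctl"] - interim["n_ctl"],
)

for label, d in [("interim", interim), ("final", final), ("post-interim", second)]:
    rr = relative_risk(**d)
    print(f"{label:12s} RR = {rr:.2f}  (risk reduction = {1 - rr:+.0%})")
```

With counts like these, the post-interim patients show a relative risk above 1, which is the kind of reversal that makes readers ask questions about the headline interim result.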
-
Mashup Score: 1
Kevin Lewis points us to this article by Joachim Vosgerau, Uri Simonsohn, Leif Nelson, and Joseph Simmons, which begins: Several researchers have relied on, or advocated for, internal meta-analysis, which involves statistically aggregating multiple studies in a paper . . . Here we show that the validity of internal meta-analysis rests on the assumption that no studies or analyses were selectively reported. That is, the technique is only valid if (a) all conducted studies were included (i.e., an empty …
Source: statmodeling.stat.columbia.edu
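To see why the internal-meta-analysis excerpt above stresses the no-selective-reporting assumption, here is a minimal simulation under my own assumptions rather than the paper’s exact setup: when every conducted study is pooled, the false-positive rate stays at its nominal level, but quietly dropping even one unfavorable study before pooling inflates it.

```python
# Minimal sketch (assumptions mine): under a true null effect, pooling all
# conducted studies keeps the false-positive rate near its nominal level,
# while file-drawering the least favorable study before pooling inflates it.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def pooled_z(effects, se):
    """Fixed-effect meta-analytic z statistic (inverse-variance weights)."""
    w = 1 / se**2
    est = np.sum(w * effects) / np.sum(w)
    return est / np.sqrt(1 / np.sum(w))

def sim(n_sims=10_000, k=4, n_per_arm=50, drop_worst=False):
    false_pos = 0
    se = np.sqrt(2 / n_per_arm)            # rough SE of a standardized mean difference
    for _ in range(n_sims):
        d = rng.normal(0, se, size=k)      # k study estimates under a true effect of zero
        if drop_worst:
            d = np.delete(d, np.argmin(d)) # file-drawer the least favorable study
        z = pooled_z(d, np.full(d.size, se))
        false_pos += stats.norm.sf(z) < 0.025   # one-sided 2.5% (two-sided 5%)
    return false_pos / n_sims

print("all studies reported :", sim(drop_worst=False))   # close to 0.025
print("worst study dropped  :", sim(drop_worst=True))    # well above 0.025
```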
-
Mashup Score: 3
Following our recent post on the latest Dishonestygate scandal, we got into a discussion of the challenges of simulating fake data and performing a pre-analysis before conducting an experiment. You can see it all in the comments to that post — but not everybody reads the comments, so I wanted to repeat our discussion here. Especially the last line, which I’ve used as the title of this post. Do you mean to create a dummy dataset and then run the preregistered analysis? I like the idea, and I do it …
Source: statmodeling.stat.columbia.edu
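For readers wondering what “create a dummy dataset and then run the preregistered analysis” looks like in practice, here is one minimal way to do it. The setup is entirely hypothetical (a two-arm randomized design with one baseline covariate, effect sizes assumed by me); the point is only the workflow: simulate fake data under assumptions you are willing to state, then run the planned analysis on it before any real data are collected.

```python
# Sketch of a fake-data pre-analysis (hypothetical design and effect sizes).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Assumptions for the simulation (all hypothetical)
n = 200
true_effect = 0.3        # assumed treatment effect, in outcome SD units
baseline_slope = 0.5     # assumed effect of a pre-treatment covariate

fake = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),   # randomized 0/1 assignment
    "baseline":  rng.normal(0, 1, n),     # pre-treatment covariate
})
fake["outcome"] = (true_effect * fake["treatment"]
                   + baseline_slope * fake["baseline"]
                   + rng.normal(0, 1, n))

# The planned (to-be-preregistered) analysis, run on the fake data
fit = smf.ols("outcome ~ treatment + baseline", data=fake).fit()
print(fit.summary().tables[1])
```

Repeating the simulation over many seeds, and checking how often the planned analysis recovers an effect of plausible size, doubles as a simple design/power check before the real study starts.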
-
Mashup Score: 0
When rereading this post the other day, I noticed the post that came immediately before. I followed the link and came across the delightful story of a researcher who, after one of his papers was criticized, replied, “We can keep debating this after 11 years, but I’m sure we all have much more pressing things to do (grants? papers? family time? attacking 11-year-old papers by former classmates? guitar practice?)” One of the critics responded with appropriate disdain, writing: This comment exemplifies the …
Source: statmodeling.stat.columbia.edu
-
Mashup Score: 4
“Whistleblowers always get punished” | Statistical Modeling, Causal Inference, and Social Science - 2 month(s) ago
The corollary to all this, and closely related to Javert’s paradox, is the social law: Whistleblowers always get punished. The Javert paradox, as regular readers will recall, goes like this: Suppose you find a problem with published work. If you just point it out once or twice, the authors of the work are likely to do nothing. But if you really pursue the problem, then you look like a Javert, that is, like an obsessive, a “hater,” someone who needs to “get a life.” It’s complicated, because some critics …
Source: statmodeling.stat.columbia.edu
-
Mashup Score: 1
I wanted to draw your attention to a paper that I’ve just published as a preprint: On the uses and abuses of regression models: a call for reform of statistical practice and teaching (pending publication, I hope, in a biostat journal). You and I have discussed how to teach regression on a few occasions over the years, but I think with the help of my brilliant colleague Margarita Moreno-Betancur I have finally figured out where the main problems lie – and why a radical rethink is needed. Here is the abstract: …
Source: statmodeling.stat.columbia.edu
-
Mashup Score: 13
Erik van Zwet, Sander Greenland, Guido Imbens, Simon Schwab, Steve Goodman, and I write: We have examined the primary efficacy results of 23,551 randomized clinical trials from the Cochrane Database of Systematic Reviews. We estimate that the great majority of trials have much lower statistical power for actual effects than the 80 or 90% for the stated effect sizes. Consequently, “statistically significant” estimates tend to seriously overestimate actual treatment effects, “nonsignificant” results often …
Source: statmodeling.stat.columbia.edu
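The “significant estimates seriously overestimate actual treatment effects” claim in the excerpt above is the familiar significance filter, and a few lines of simulation show the mechanism. The numbers below are mine, not from the Cochrane analysis: a trial with a fixed true effect is simulated at increasing noise levels, and the average of the estimates that happen to reach p < 0.05 is compared with the true effect.

```python
# Sketch of the significance filter (numbers are illustrative, not from the paper):
# at low power, estimates that reach p < 0.05 are, on average, much larger than
# the true effect they are estimating.

import numpy as np

rng = np.random.default_rng(1)

def exaggeration(true_effect, se, n_sims=200_000):
    """Power and mean |significant estimate| divided by the true effect."""
    est = rng.normal(true_effect, se, n_sims)   # sampling distribution of the estimate
    significant = np.abs(est) > 1.96 * se       # two-sided p < 0.05
    power = significant.mean()
    return power, np.abs(est[significant]).mean() / true_effect

for se in [0.5, 1.0, 2.0]:                      # progressively noisier trials
    power, ratio = exaggeration(true_effect=1.0, se=se)
    print(f"SE = {se:.1f}: power = {power:.2f}, "
          f"significant estimates overshoot the true effect by about {ratio:.1f}x")
```

As the standard error grows (power falls), the overshoot factor climbs well above 1, which is the pattern the post describes for trials whose real power is far below the nominal 80-90%.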