Monday, November 4, 2019

Analysis of a Media Source’s Coverage of a Research Study

The Wall Street Journal published an article regarding a recent psychopharmacology study on depression, led by Dr. Hunter, that investigated whether pre-medication brain activity corresponded with treatment outcome. The article also discusses the interesting results garnered from the placebo-versus-medication group analysis. While the news piece does a fair job of representing the study’s findings, the author does delve into extrapolations not statistically supported in the actual study. Fifty-one adults diagnosed with major depression were used in the study, and this was accurately reported by the news article (Wang, 2006). Hunter et al. investigated whether there were significant differences in “demographic characteristics, illness history, baseline illness severity, [and] final response[s],” and, finding none, pooled the subjects for analysis (2006, p. 1427). This gives the Journal, which must condense the findings for the public, fair reason to omit this detail. The study is experimental in nature, using double-blind, randomized assignment to control for confounding variables. All of the subjects were given a placebo antidepressant for a one-week lead-in; after this, half of the individuals were continued on the placebo while the others were given one of two antidepressants. Electroencephalograph (EEG) readings were taken at the time of enrollment, after the lead-in period, and several times later over an eight-week period. The Wall Street Journal condenses this explanation, and while the article abandons the jargon of an experimenter, it does give the impression of an experimental method being followed. When the news article explains how the researchers defined their variables, however, it leaves out valuable information. The author states that patients with certain brain patterns “ended up responding better to antidepressant treatment[s],” but fails to mention how this was evaluated (Wang, 2006, p. 1).
A Hamilton depression scale was administered to judge improvement, lending reliability to the study’s findings. The news piece does accurately report that EEG was also used, in an attempt to find a decrease in prefrontal lobe activity. The study uses a control group, those maintained on the placebo, and compares their EEGs to those of the medicated group, but the main focus of the research was the search for experimental evidence that the commonly used one-week lead-in can predict treatment outcome via brain imaging. The Wall Street Journal article focuses on only one facet of the study, and one for which the researchers claim to have nonsignificant support. Wang states that “patients who developed this brain-pattern change ended up responding better … than patients who didn’t,” which is misleading to an audience that has not read the actual research (2006, p. 1). While Hunter et al. do find that their EEG scans were a good indicator of treatment success, they also caution: “Although the placebo and medication group analyses yielded different brain regional predictors of outcomes, because of the absence of statistical group interaction we cannot conclude that changes in … [the differing brain regions] … differentially predicted outcomes” (2006, p. 1430). The news article wrongly insinuates that the study provided evidence for a brain pattern linked to a good treatment outcome in depression. It is certainly true that the study’s results encourage research in this direction, and the author evidently believes the EEG pattern found is “a good indicator” of success, but after reading the actual experiment, Wang seems to have inflated the findings. That said, the extrapolations made by the author do have some merit.
The researchers discovered that both the medicated and the placebo groups had a similar proportion of variance “predicted by the neurophysiological changes occurring during the placebo lead-in phase” (Hunter et al., 2006, p. 1429). They offered some possible causal factors, such as “pharmacotherapeutic alliance and pretreatment expectations.” These results seem to demonstrate a placebo-treatment effect, which offers even more reason to investigate further how a patient’s treatment induction affects his or her progress (Hunter et al., 2006, p. 1429). Though not mentioned in the Wall Street Journal item, the ethical issues surrounding this experiment are noted by Hunter et al. Providing individuals suffering from major depression with placebos for eight weeks is risky; using a double-blind procedure makes it even more dangerous. While UCLA’s IRB did require a 15–25 minute counseling session during each patient’s visit, this is a massive step down from the psychopharmacological and psychotherapeutic support offered at the recruitment site, a psychiatric outpatient hospital (Hunter et al., 2006). Conversely, though, this ethical “patch” does raise an interesting question for further research, lightly touched on by the study’s authors: if this psychotherapy (however minimal) was responsible for a pre-treatment neurophysiological shift, and the shifts indicative of better treatment outcomes could be identified, research could be done to more effectively meld psychotherapy and medical psychiatry. It is understandable why media reports often leave out details of a research study; often the conclusions and discussion by the study’s authors are of more interest to the public. However, when a media piece merely latches onto a nonsignificant observation or a suggestion for future research, the true findings of the experiment are overshadowed by the speculation of the piece’s author.
When a media source offers information about a study, it is vital to maintain a skeptical, critical mindset toward the findings until they are corroborated by the primary source. It is also important to look for information that supports the generalizability of the study’s findings. In the piece presented above, it is worth noting that the study was done on depressed individuals; other psychopathologies may bear no relationship to the results or conclusions provided. The media also commonly jumps from correlation to causation, whether directly or implicitly. While accurate scientific information is the goal of research, some degree of sensationalization by the media will usually occur.
