Current treatment for erectile dysfunction: an umbrella review of systematic reviews and meta-analyses
Journal: The Aging Male | Published: 2026-03-06 (epub) | Type: Umbrella review of systematic reviews and meta-analyses | PMID: 41792626 | DOI: 10.1080/13685538.2026.2640765
Authors: Ma J, Wei J, Li J, Yu M, Lu S, Zeng H, Xu L, Dong Y, Ma Z, Zhang P — all affiliated with Chengdu University of Traditional Chinese Medicine and related institutions in Sichuan, China.
Funding/COI: Not listed on PubMed.
Summary
This is not a trial of any ED treatment. It is an umbrella review — a review of reviews — that pulled together 23 published meta-analyses and graded how trustworthy they are. The useful finding is not "which treatment won." It is that the evidence base behind most current ED treatments is garbage: that blunt gloss is ours, but the authors' own, more measured wording is that the evidence is "predominantly of low or very low quality" (Ma et al., 2026).
Claims
The authors searched PubMed, Web of Science, the Cochrane Library, and Embase for studies published through October 2025, according to Ma et al., 2026.
From 1,191 studies screened, they included 23 published meta-analyses covering 36 different interventions for erectile dysfunction, according to Ma et al., 2026.
Those 23 meta-analyses produced 42 summary effects across four subjective outcome measures: IIEF (20 effects), IIEF-5 (9), IIEF-EF (6), and EHS (7), according to Ma et al., 2026.
AMSTAR-2 grading of the 23 meta-analyses: 13 (56.5%) critically low quality, 7 (30.5%) low quality, 2 (8.7%) moderate quality, 1 (4.3%) high quality. That means 87% were rated low or critically low, according to Ma et al., 2026.
The 42 summary effects fared even worse: 20 (47.6%) were rated low quality and 20 (47.6%) very low quality, leaving exactly two summary effects at moderate or high quality. That is 95.2% low or very low, according to Ma et al., 2026.
Both pharmacological and nonpharmacological interventions showed statistically significant improvements, but the authors themselves say those results need caution because patient numbers were limited and the endpoints are subjective, according to Ma et al., 2026.
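The headline percentages in the claims above follow directly from the raw counts. A quick sanity check (counts taken from the claims; this is our arithmetic, not the paper's code):

```python
# Reproduce the headline percentages from the counts reported in the claims above.
# Counts per Ma et al., 2026.

amstar2 = {"critically low": 13, "low": 7, "moderate": 2, "high": 1}  # 23 meta-analyses
effects = {"very low": 20, "low": 20, "moderate or high": 2}          # 42 summary effects

total_reviews = sum(amstar2.values())
total_effects = sum(effects.values())
assert total_reviews == 23 and total_effects == 42

# Share of meta-analyses rated low or critically low on AMSTAR-2: 20/23
low_share = (amstar2["critically low"] + amstar2["low"]) / total_reviews
print(f"{low_share:.1%}")  # 87.0%

# Share of summary effects rated low or very low: 40/42
weak_share = (effects["low"] + effects["very low"]) / total_effects
print(f"{weak_share:.1%}")  # 95.2%
```

The numbers are internally consistent: the per-grade counts sum to the stated totals, and the "87%" and "95.2%" headline figures both check out.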
Study Quality
For a high-level evidence paper, this is refreshingly honest. The authors used an umbrella review design, searched four major databases, and applied AMSTAR-2 to grade methodology instead of treating every meta-analysis as automatically credible. The conclusion matches the numbers rather than overselling them: they explicitly state the evidence base is "predominantly of low or very low quality," according to Ma et al., 2026.
Red Flags
The abstract does not report the total number of patients behind the 23 included meta-analyses. Without that number, the scale of the evidence is hard to judge from the PubMed record alone.
All outcomes are questionnaire-based (IIEF variants) or hardness scores (EHS) — subjective self-report measures, not objective physiological endpoints. The authors acknowledge this limitation (Ma et al., 2026).
Covering 36 different interventions in one review is broad, which means the paper describes a messy evidence pile rather than resolving which specific treatment claims survive scrutiny.
No funding source or conflict-of-interest statement appears on the PubMed listing.
All authors are affiliated with Traditional Chinese Medicine institutions. Not inherently a problem, but worth noting given the inclusion of nonpharmacological interventions.
Strengths
An umbrella review is the right format for the question "how solid is the ED treatment literature overall?"
Used AMSTAR-2 to grade methodological quality rather than just counting positive results, according to Ma et al., 2026.
The conclusion does not oversell: the authors explicitly say the reliability of the evidence is poor and call for better-designed studies, according to Ma et al., 2026.
Verdict
A useful reality check on the state of ED research, not a guide to what works. The main finding is that 95.2% of the summary effects across 36 ED interventions were graded low or very low quality. The paper's value is in quantifying how weak the evidence stack is, not in endorsing any particular treatment.