From a Culture of Recklessness to a Culture of Precaution

Published by Catherine L’Ecuyer in El País

France now seeks to shield infants from mobiles, tablets and televisions with warning labels on packaging, alerting caregivers to potential health risks posed by exposure to screens. This draft law—already approved in the Senate but awaiting a vote in the National Assembly—would compel manufacturers to include cautionary notices about developmental hazards for young children (0–3 years).

Is this measure excessive? Ample evidence suggests that screen exposure in early childhood can impair attention, increase impulsivity, diminish vocabulary, and more. In light of such studies, prominent paediatric associations advise against exposing children under two to screens, and recommend limiting screen time to one hour per day for those aged 2–5. But are these studies definitive? Are the recommendations overly cautious? Does occasional or even regular exposure guarantee harm for every child? If we speak only of risk, are those sufficient grounds to regulate, or even ban, the companies producing these devices?

In truth, the issue transcends the narrow question of what will definitely happen if my child uses a device. It is really about understanding the contrast between a culture of recklessness and a culture of precaution.

In 1986, the Challenger Space Shuttle exploded just 73 seconds after liftoff, watched in horror by millions of Americans who had tuned in live. It was the gravest disaster in the annals of space exploration, claiming the lives of seven crew members—among them Christa McAuliffe, a schoolteacher selected to broadcast her lessons from orbit as part of an educational outreach initiative.

Subsequently, a presidential commission was convened to investigate the tragedy. Its members were largely aligned with or beholden to NASA and the US government—except for one: Richard Feynman, Nobel Prize–winning physicist, the sole independent voice. The commission’s report was later denounced as overly deferential to those who had appointed it. Engineers who had raised earlier concerns about faulty O-rings—rubber seals vulnerable to freezing temperatures—were silenced; some were dismissed. Only in later years did they speak out in investigative reports, far from the spotlight. Feynman’s dissenting views were buried in Appendix F of the report. But what truly transpired before the launch?

Engineers had warned that the O-rings risked failing in the anticipated freezing early-morning chill and urged a delay until a solution was found. NASA’s senior leaders challenged them: “Prove beyond doubt the shuttle will explode,” they demanded. NASA refused to delay unless the risk was shown to be 100%. In effect, they shifted the burden of proof. Why?

Political pressure had been mounting: NASA had not achieved a successful launch in years, and the media was critical. Christa’s lessons from space were scheduled for weekdays; any delay would push them to the weekend. The O-ring contractor risked reputational harm; NASA risked political embarrassment; the government risked public backlash. Collective expectations were immense, and risk seemed an inconvenient obstacle. So reversing the burden of proof appeared to be the only way out.

In his report, Feynman went beyond recounting events, highlighting NASA’s systematic tendency to downplay risks, its frivolous inclusion of a schoolteacher in an experimental rather than commercial mission, and the stark contrast between NASA leaders’ optimistic risk estimate (1 in 100,000) and that of the engineers (1 in 200). This divergence reflected a culture that welcomed only good news—news that bolstered reputation—while ignoring inconvenient truths. He wrote:

“Let us make recommendations to ensure that NASA officials deal in a world of reality in understanding technological weaknesses and imperfections well enough to be actively trying to eliminate them. They must live in reality in comparing the costs and utility of the Shuttle to other methods of entering space. And they must be realistic in making contracts, in estimating costs, and the difficulty of the projects. Only realistic flight schedules should be proposed, schedules that have a reasonable chance of being met. If in this way the government would not support them, then so be it. NASA owes it to the citizens from whom it asks support to be frank, honest, and informative, so that these citizens can make the wisest decisions for the use of their limited resources.”

This story resonates, not because smartphones will eventually “explode” in children’s hands, but because it epitomises the reversal of the burden of proof in society’s large-scale experiment. We do not demand that tech companies prove their products are educationally beneficial or harmless to our children; instead, we label cautious voices “technophobes” and put the onus on them to provide proof of harm. Meanwhile, rigorous scientific proof remains slow and costly, failing to keep pace with rapid technological obsolescence. By the time the evidence finally emerges, the devices are outdated and new ones have taken their place. The damage has already been done.

Moreover, as both subject and object of this experiment, we are the judges and the judged. If we study ourselves, might we lack the objectivity needed to carry out rigorous research and heed its outcomes? Perhaps this paradox explains our reluctance to embrace healthy scepticism, and our discomfort at confronting evidence that might undermine cherished habits or beliefs. What do we call those who act without inquiry? Reckless. And when economic interests are involved, this recklessness becomes murky indeed.

Feynman succinctly pinpointed the disconnect between empirical data and the decisions of those managing public resources. His concluding appeal rings as relevant today as ever: “For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”