In 2020, I ran my MSc research expecting to find that heavy social media users would show attentional bias for social media-related words. I used a Stroop task, the MTUAS and SUQ questionnaires, and collected actual iPhone Screen Time data from 82 participants. I was ready to prove the effect was real.
What I found
The Stroop task showed no attentional bias: F(2, 76) = 0.82, p = .44. Adding smartphone use as a covariate made no difference, and substituting actual iPhone Screen Time data for self-reported use still showed nothing. The standard prediction in the field, that intense social media users would show selective attention effects, was not supported.
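For readers who want to see the shape of that analysis, here is a minimal sketch of a one-way ANOVA comparing reaction times across word categories. Everything here is illustrative: the data are simulated (with no true difference between groups, mirroring a null result), and the variable names are my own, not the study's.

```python
# Hypothetical sketch of a Stroop-style analysis: one-way ANOVA on
# mean reaction times across three word categories. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40  # participants per condition (illustrative)

# Simulated reaction times in ms, drawn from the same distribution,
# i.e. no true attentional-bias effect.
neutral = rng.normal(620, 80, n)
social_media = rng.normal(620, 80, n)
control = rng.normal(620, 80, n)

f_stat, p_value = stats.f_oneway(neutral, social_media, control)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

In the actual study a covariate (smartphone use) was also entered, which this sketch omits; a tool like statsmodels' OLS with the covariate as a regressor would be the usual way to do that.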
I was confused. The headline finding was a null result.
But something else emerged from the data. The MTUAS Social Media subscale correlated significantly with actual iPhone Screen Time (r = .322, p < .01), pickups (r = .233, p < .05), and notifications (r = .243, p < .05): people could report their social media use fairly accurately. The MTUAS Smartphone subscale, however, showed no significant correlation with actual iPhone data (r = .198, n.s.). One of the most cited instruments in the field failed to predict actual device behaviour.
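As a sketch of how these validity checks work, the snippet below runs a Pearson correlation between a logged measure and a noisy self-report. The data are simulated and the effect size is arbitrary; none of the numbers correspond to the study's results.

```python
# Illustrative validity check: how well does a self-report scale track
# a logged behavioural measure? All data below are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 82  # sample size chosen to match the study's, values are not

logged_minutes = rng.normal(240, 60, n)  # e.g. Screen Time minutes/day
# Self-report built as a weak linear function of behaviour plus noise.
self_report = 0.3 * logged_minutes + rng.normal(0, 60, n)

r, p = stats.pearsonr(logged_minutes, self_report)
print(f"r = {r:.3f}, p = {p:.3f}")
```

A subscale that "works" shows a significant positive r against the logged measure; one that fails, like the Smartphone subscale here, does not.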
What I learned since
The measurement validity finding turned out to be more important than the null Stroop result. Researchers like Andrews et al. (2015) and Ellis et al. (2019) had already shown that self-reported device use often doesn't match reality. My data added a finer-grained point: even within the same study, one subscale tracked real behaviour and another didn't.
If the instruments we use to measure technology use can't reliably predict actual behaviour, then any research built on those instruments, including studies of attention, wellbeing, and cognitive impact, is compromised from the start.
In 2023, researchers validated something different: the Digital Flourishing Scale. Instead of measuring pathology, it measures whether technology supports meaningful relationships, authentic self-expression, and sense of purpose. The Self-Control subscale specifically asks whether you feel in control of your digital life.
The question shifted from "what's wrong with you?" to "is technology helping you live well?"
Why this changes beò
My 2020 self would have built another shame-based app showing scary screen time numbers.
Instead, beò starts from measurement validity. We use instruments that have been validated against actual behaviour. We ask: do you feel like technology is serving your life? Can simple microhabits (breathing, movement, nature, connection) shift that feeling?
I'm not measuring screen time reduction (research shows "digital detox" apps struggle with this anyway). I'm measuring whether you feel more in control. Whether technology feels like it's working for you.
The honest bit
I'm using the latest validated measures (2023, peer-reviewed, published in Journal of Happiness Studies). My 2020 research taught me that the choice of instrument is a consequential methodological decision. When better tools exist, use them.
We're pre-registering our hypotheses, acknowledging our limitations (self-selected sample, no control group, 4-week duration), and sharing what we find, including what doesn't work.
That's the kind of research I want to do.
Read the full 2020 study: How does social media use affect attentional bias? (PsyArXiv preprint)
Help us find out
Join our study and help test whether evidence-based microhabits can help people feel more in control of their relationship with technology.
Join the research