I have a briefcase. Inside the briefcase is a letter that declares how much money you will get from me. The amount of money is a random number between -$100 (you pay me) and $1,000,000 (I pay you).
Do you open the briefcase?
Of course you open the briefcase. That upper end of the interval is life altering and the other end is manageable for almost everyone.
But common statistical practice would guide most people to never open the briefcase, because the briefcase might contain zero dollars. That doesn’t make sense, yet we do it all the time.
Scientists commonly reject a finding and decide that it isn’t important because p > 0.05, which is the same as saying the 95% confidence interval for that finding includes zero. Treating the briefcase as a rough example, they would say something like: on average, people who opened briefcases received $500,000, but the effect was not statistically significant (p > 0.05). Or: the effect size for opening a briefcase was $500,000; however, the confidence interval contained zero dollars, so the effect was dismissed as not significant.
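The logic above can be sketched with a toy calculation (the payout numbers are invented for illustration): a small sample whose average payout is life-changing can still have a 95% confidence interval that crosses zero, which a go/no-go test reads as "not significant."

```python
import statistics

# Hypothetical data (made up for illustration): four people open
# briefcases that pay between -$100 and $1,000,000.
payouts = [900_000, 50_000, 700_000, -80]

mean = statistics.mean(payouts)
# Standard error of the mean from the sample standard deviation.
sem = statistics.stdev(payouts) / len(payouts) ** 0.5

# Rough 95% CI using the t critical value for 3 degrees of freedom (~3.182).
t_crit = 3.182
lo, hi = mean - t_crit * sem, mean + t_crit * sem

print(f"mean payout: ${mean:,.0f}")          # about $412,000
print(f"95% CI: (${lo:,.0f}, ${hi:,.0f})")   # the interval includes $0
```

The point estimate is around $412,000, yet because only four briefcases were opened the interval stretches well below zero, so a significance filter would throw the finding away.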
Confidence intervals convey how precise a measurement is, how much uncertainty it contains. Go/no-go significance tests take that uncertainty and collapse it into a confident claim that a result is or is not significant. I used money in this example because it is inherently practical and contextual: I know how important $100 and $1 million are to me; I can imagine their impact on my life. It takes far more work to give a difference of 0.1 standard deviations on some exam scores the same context and interpretation as 500 grand. How many $500,000 findings have scientists dismissed as not significant?