I used to work with a content production team. Every day they would write and source articles, and every day they would publish a good deal of material. Then once a week the team would review their latest viewing figures: how many users had read their content, how long readers had spent on it, and so on. The idea was that this would let them learn what was going well and what wasn’t.
In that weekly review, if the numbers were good the team would congratulate themselves on a job well done. But if the numbers weren’t good, they always attributed it to factors outside their control: “Well, there wasn’t much going on this week for us to write about,” or “Well, that just happens sometimes,” or “Well, it was very sunny this week, so people spent less time looking at their screens.”
I found this very frustrating. Their appraisal wasn’t honest from week to week: they took responsibility for the good results but not the bad ones. It also meant they missed the opportunity to learn and to make better decisions.
It’s for situations like this that I always try to set success criteria in advance of the event we’ll have to make a decision about. That might mean timeboxing a piece of work (we’ll continue only if we’ve reached a small milestone by this point in time), or setting a tripwire (we’ll change course if this event ever happens), or simply agreeing up front what a measurement will have to be for us to classify it “good”, “great”, “okay”, or “disappointing”.
This all allows much more objective decision-making, and it makes us far less likely to be steered by subjective, after-the-fact explanations.
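To make that concrete, here is a minimal sketch of what “deciding the criteria up front” can look like in practice. The metric, the threshold values, and the tripwire condition are all made up for illustration; the point is only that they are written down before the results come in, so the review can’t quietly move the goalposts afterwards.

```python
# Hypothetical pre-agreed criteria, written down *before* the week starts.
# The metric name and threshold numbers are illustrative, not real figures.
WEEKLY_READ_HOURS_BANDS = [
    (5000, "great"),
    (3000, "good"),
    (1500, "okay"),
]


def classify_week(read_hours: float) -> str:
    """Classify a week's total reading hours against the pre-set bands."""
    for threshold, label in WEEKLY_READ_HOURS_BANDS:
        if read_hours >= threshold:
            return label
    return "disappointing"


def tripwire_hit(days_since_last_publish: int, limit: int = 3) -> bool:
    """A tripwire: change course if we ever go `limit` days without publishing."""
    return days_since_last_publish >= limit


if __name__ == "__main__":
    print(classify_week(3200))   # -> "good"
    print(tripwire_hit(4))       # -> True: time to change course
```

The specifics barely matter; what matters is that the bands and the tripwire were agreed in advance, so a sunny week can’t retroactively redefine what counts as a disappointing one.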