Well done, Tristan! You have clearly done significant due diligence. I particularly love how you've organized this by subject and likelihood. During this exercise, were you ever "on the fence" about the likelihood or certainty of events? I have to admit that as a Data Scientist, I always try to find a quantitative means of arriving at a conclusion (e.g., linear regression, regression trees, etc.) when assessing likelihood. But Futures tends to be a very qualitative practice by nature. So when I do exercises like this, I'm admittedly never sure I trust my assessments because I don't have formal data to corroborate my conclusions. What are your thoughts on this?
Thanks Tom. I absolutely struggled several times with classification, though most of the time the question was whether the statement was meaningfully about the future, rather than whether it carried high or low certainty. I think the fuzziness of the exercise is unavoidable - all data is generated/constructed and is an imperfect image of reality, and using human language to describe abstractions of intent is an extreme case of this.
Many of these predictions would make good questions on forecasting platforms like Metaculus.