Can't Read the Label From Inside the Jar

July 2, 2018 · 2 minute read

By definition, it is impossible for any of us to know what we don't know.

- We could be in the top 5% of our domain and better off spending time scaling the machine rather than perfecting it.
- We could be missing something major and be unable to see it.
- We could be in a very high-leverage position where even a small improvement could have a huge impact.
- We could be missing so much context for the model that we have difficulty seeing how new information fits in, thereby limiting our Situational Awareness.