A wonderful essay on what is “obvious” to humans, and on how the fallacy that “obviousness is driven by human bias”, itself an error-prone assumption, can lead to ungrounded, optimistic euphoria, especially around AI.
Knowing what to observe, what might be relevant, and what data to gather in the first place is not a computational task – it’s a human one. The present AI orthodoxy neglects the question- and theory-driven nature of observation and perception.