Discussion about this post

Matt Smith

Extremely smart article, love this! Great insights.

Patrick Senti

I guess this all depends on how judgment is defined (and whether it is definable at all), and how it is measured (and whether it is measurable).

For any definition and measurement that is essentially a function of some data (i.e. context), subject to some objective measurement (i.e. a minimized loss), there will eventually be a machine that is as good as humans, given the same data.
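Roughly, in symbols (a rough formalization of the claim above, not a formal result): if a judgment can be written as the function that minimizes an expected loss over context-outcome pairs,

$$f^* = \arg\min_{f} \; \mathbb{E}_{(x,y)\sim\mathcal{D}}\left[\ell(f(x),\, y)\right],$$

where $x$ is the context, $y$ the outcome being judged, and $\ell$ the objective, then a learner given enough samples from the same distribution $\mathcal{D}$ can in principle approximate $f^*$.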

The uniquely human trait, however, will remain that of accountability, at least for a long time to come.

As long as judgment is linked to accountability, AI may be a helpful tool, but it can never take the final decision.

That is especially true for the LLM variant of AI, where errors are essentially unbounded because an LLM's input is unbounded. As a consequence, the risk of failure if the AI takes the final decision is too high for many applications. At a minimum, human review, and thus human judgment, is needed.

This is less of a concern for more tractable, 'classic' machine learning, where error rates can be evaluated over a limited feature space, so the model can be evaluated efficiently ahead of deployment. The risk can then be calculated effectively, and if it is acceptable (another judgment with accountability attached), the decision can be delegated to the AI.
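A minimal sketch of that pre-deployment gate, assuming a scikit-learn classifier on a bounded feature space; the dataset and the 5% risk threshold are illustrative placeholders, not anything from the article:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Fixed, bounded feature space -- unlike free-form LLM input,
# the model can be evaluated exhaustively on held-out data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Error rate measured ahead of deployment, on data the model never saw.
error_rate = 1.0 - model.score(X_test, y_test)

ACCEPTABLE_RISK = 0.05  # set by an accountable human, not by the model
if error_rate <= ACCEPTABLE_RISK:
    print(f"error {error_rate:.3f} <= {ACCEPTABLE_RISK}: delegate to model")
else:
    print(f"error {error_rate:.3f} > {ACCEPTABLE_RISK}: keep human in loop")
```

The point being that the threshold, and the decision to delegate at all, stays with an accountable human; the model only acts once that judgment has been made.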

