I enjoy that so much of today's AI discourse circles around a still-fresh anxiety: forms of labour that, as recently as five years ago, were thought to be protected from automation (coding, visual art, formal and creative writing, for example) might turn out to be among the least safe.
Lots of reasons to enjoy this AI discourse. The class-inflected revaluation of these kinds of work is dizzying.
Perhaps being able to perform a cloudy repetition of existing ideas in essay or report form was never that valuable?
Perhaps paint-by-numbers screenwriting is not that "creative"?
Wouldn't it be ironic if the convergence of "StackOverflow lyfe" with "intellectual property" ends up turning most coding into supervising an automated plagiarism engine?
@attentive
Will the tests of the future rate us against the median of machine intelligence?
Reverse Turing tests; differentiating humans from computers
@attentive
I think the first version might be the most telling.
Reverse Turing tests; differentiating humans from computers
@moliver Two modes of a "reverse Turing test" are:
1. Computer judge, human or computer interlocutor. Computer attempts to classify the entrant correctly.
2. Human judge, human or computer interlocutor. The human entrant seeks to be (mis)recognised as a computer.
Could computer-administered Turing tests along the lines of (1) adversarially winnow what's distinctive about human thought?
#AI #Automation #TuringTest
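
A toy sketch of mode (1), just to make the adversarial framing concrete: a computer "judge" classifies short transcripts as human or computer. The heuristic cues, thresholds, and sample transcripts below are invented for illustration only; a real judge would presumably be a learned discriminator, not hand-written rules.

```python
# Toy sketch of mode (1): a computer judge classifies each entrant
# (human or computer) from a short transcript. The cues, thresholds,
# and sample transcripts are invented for illustration.

def judge(transcript: str) -> str:
    """Classify a transcript as 'human' or 'computer' using crude surface cues."""
    words = transcript.lower().split()
    if not words:
        return "computer"
    # Hypothetical cues: humans lean on hedges, disfluencies, and self-reference.
    hedges = {"maybe", "perhaps", "hmm", "i", "me", "honestly", "kinda"}
    hedge_rate = sum(w.strip(".,!?") in hedges for w in words) / len(words)
    type_token_ratio = len(set(words)) / len(words)
    # Arbitrary thresholds, chosen only so the toy example behaves sensibly.
    return "human" if hedge_rate > 0.05 or type_token_ratio < 0.7 else "computer"


if __name__ == "__main__":
    entrants = [
        ("human", "Hmm, maybe I just think tests like this miss what I actually do all day."),
        ("computer", "The optimal response maximizes expected utility across all enumerated scenarios."),
    ]
    for label, text in entrants:
        verdict = judge(text)
        print(f"true={label:8s} judged={verdict:8s} text={text!r}")
```

The interesting (and unsettling) part is the adversarial loop this implies: whatever cues the judge keys on, entrants on both sides can learn to game, so the features that survive repeated rounds are, by construction, whatever remains distinctive about human output.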