Abuse: the darker side of human-computer interaction
[abstract] There seems to be something innate in the human/computer relationship that
brings out the dark side of human behaviour. Anecdotal evidence suggests that
of the utterances made to chat-bots or embodied conversational agents (ECA)
in public places, 20-30% are abusive. Why is that? Is it simply that a quarter of
the human population are 'sick' and find abusing a machine to be in some way
therapeutic? If so, it says something quite disturbing about human nature that
is in need of further study. Perhaps the phenomenon is directly caused by a new
technology. In the early days of computer-mediated communication there was a
tendency for people to abuse each other, but this has become far less common.
Will the extent to which people abuse ECAs just naturally become a thing of
the past? The Turing Test has also had a considerable influence on appropriate
behaviour when talking to a computer. To what extent is abuse simply a way
people test the limits of a conversational agent? Perhaps the problem is one of
design: the aesthetics of everyday things is key to their success, and perhaps
abuse is simply one end of a continuum of the 'aesthetics' of interactive things
with human-like behaviour. The extent to which abuse occurs seems to indicate
something fundamental about the way humans interact. Is abuse simply the
most noticeable phenomenon connected to something fundamental in the way we
humans communicate?
The purpose of this workshop is to bring together engineers, artists and
scientists who have encountered this phenomenon, who may have given some
thought to why and how it happens, and who have ideas on how proactive,
agent-based interfaces, typified by chat-bots or ECAs, should respond to abuse.