Gartner’s global chief of research, Chris Howard, explores how anthropomorphising, or considering nonhuman entities as if they were human, is a theme in the heated discussions around artificial general intelligence (AGI).
He also reminds users of the limitations and distractions that arise from trying to fit AI into a human image.
“Since the introduction of generative AI technologies that we've become familiar with, it appears that these machines think, respond, empathise, and so on. But I want to remind you that they are prediction machines. They predict the order of words, which can be contextual and responsive. However, it still doesn't necessarily mean that the machine understands what it's saying, has empathy, or feels a certain way,” Howard explains.
Mutuality
Howard posits the need for mutuality as humans and machines co-evolve and technology becomes more prevalent in improving businesses, industries, and societies.
“There are many different types of intelligence, and maybe our tendency to cast human characteristics onto technology limits what it could do. Let's let machines evolve and create an intelligence that interacts with humans to solve the hardest problems we face. And then, it becomes a mutual experience,” he says.
Full video here