Strategic misrecognition and speculative rituals in generative AI
Public conversation around generative AI is saturated with the ‘realness question’: is the software really intelligent? At what point could we say it is thinking? I argue that attempts to define and measure those thresholds miss the fire for the smoke. The primary societal impact of the realness ques...
| Main Author: | Sun-ha Hong |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | DIGSUM, 2024-12-01 |
| Series: | Journal of Digital Social Research |
| Subjects: | generative AI; machine intelligence; agency; ritual; spectacle; history of AI |
| Online Access: | https://publicera.kb.se/jdsr/article/view/40474 |
| _version_ | 1846098197153316864 |
|---|---|
| author | Sun-ha Hong |
| author_facet | Sun-ha Hong |
| author_sort | Sun-ha Hong |
| collection | DOAJ |
| description |
Public conversation around generative AI is saturated with the ‘realness question’: is the software really intelligent? At what point could we say it is thinking? I argue that attempts to define and measure those thresholds miss the fire for the smoke. The primary societal impact of the realness question comes not from the constantly deferred sentient machine of the future, but from its present form as rituals of misrecognition. Persistent confusion between plausible textual output and internal cognitive processes, or the use of mystifying language like ‘learning’ and ‘hallucination’, configures public expectations around what kinds of politics and ethics of genAI are reasonable or plausible. I adapt the notion of abductive agency, originally developed by the anthropologist Alfred Gell, to explain how such misrecognition strategically defines the terms of the AI conversation.
I further argue that such strategic misrecognition is not new or accidental, but a central tradition in the social history of computing and artificial intelligence. This tradition runs through the originary deception of the Turing Test, famously never intended as a rigorous test of artificial intelligence, to the present array of drama and public spectacle in the form of competitions, demonstrations and product launches. The primary impact of this tradition is not to progressively clarify the nature of machine intelligence, but to constantly redefine values like intelligence in order to legitimise and mythologise our newest machines – and their increasingly wealthy and powerful owners.
|
| format | Article |
| id | doaj-art-7a6a8baa2b1941ecb4329d7ac16c2caf |
| institution | Kabale University |
| issn | 2003-1998 |
| language | English |
| publishDate | 2024-12-01 |
| publisher | DIGSUM |
| record_format | Article |
| series | Journal of Digital Social Research |
| spelling | doaj-art-7a6a8baa2b1941ecb4329d7ac16c2caf2025-01-02T01:40:08ZengDIGSUMJournal of Digital Social Research2003-19982024-12-016410.33621/jdsr.v6i440474Strategic misrecognition and speculative rituals in generative AI Sun-ha Hong0Simon Fraser University Public conversation around generative AI is saturated with the ‘realness question’: is the software really intelligent? At what point could we say it is thinking? I argue that attempts to define and measure those thresholds miss the fire for the smoke. The primary societal impact of the realness question comes not from the constantly deferred sentient machine of the future, but from its present form as rituals of misrecognition. Persistent confusion between plausible textual output and internal cognitive processes, or the use of mystifying language like ‘learning’ and ‘hallucination’, configures public expectations around what kinds of politics and ethics of genAI are reasonable or plausible. I adapt the notion of abductive agency, originally developed by the anthropologist Alfred Gell, to explain how such misrecognition strategically defines the terms of the AI conversation. I further argue that such strategic misrecognition is not new or accidental, but a central tradition in the social history of computing and artificial intelligence. This tradition runs through the originary deception of the Turing Test, famously never intended as a rigorous test of artificial intelligence, to the present array of drama and public spectacle in the form of competitions, demonstrations and product launches. The primary impact of this tradition is not to progressively clarify the nature of machine intelligence, but to constantly redefine values like intelligence in order to legitimise and mythologise our newest machines – and their increasingly wealthy and powerful owners. https://publicera.kb.se/jdsr/article/view/40474generative AImachine intelligenceagencyritualspectaclehistory of AI |
| spellingShingle | Sun-ha Hong Strategic misrecognition and speculative rituals in generative AI Journal of Digital Social Research generative AI machine intelligence agency ritual spectacle history of AI |
| title | Strategic misrecognition and speculative rituals in generative AI |
| title_full | Strategic misrecognition and speculative rituals in generative AI |
| title_fullStr | Strategic misrecognition and speculative rituals in generative AI |
| title_full_unstemmed | Strategic misrecognition and speculative rituals in generative AI |
| title_short | Strategic misrecognition and speculative rituals in generative AI |
| title_sort | strategic misrecognition and speculative rituals in generative ai |
| topic | generative AI machine intelligence agency ritual spectacle history of AI |
| url | https://publicera.kb.se/jdsr/article/view/40474 |
| work_keys_str_mv | AT sunhahong strategicmisrecognitionandspeculativeritualsingenerativeai |