AI or your lying eyes

  • Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions — in the form of artificially intelligent systems for detecting deepfakes — will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such technologies depends on institutional trust that is in short supply. Finally, outsourcing the discrimination between the real and the fake to automated, largely opaque systems runs the risk of undermining epistemic autonomy.

Metadata
Author:Keith Raymond Harris (ORCiD, GND)
URN:urn:nbn:de:hbz:294-109658
DOI:https://doi.org/10.1007/s13347-024-00700-8
Parent Title (English):Philosophy & technology
Subtitle (English):some shortcomings of artificially intelligent deepfake detectors
Publisher:Springer Netherlands
Place of publication:Dordrecht
Document Type:Article
Language:English
Date of Publication (online):2024/02/29
Date of first Publication:2024/01/10
Publishing Institution:Ruhr-Universität Bochum, Universitätsbibliothek
Tag:Artificial Intelligence; Deepfakes; Epistemic Autonomy; Misinformation; Self-Trust; Trust
Volume:37
Issue:Article 7
First Page:7-1
Last Page:7-19
Note:
This article is freely accessible under the DEAL–Springer agreement.
Institutes/Facilities:Institut für Philosophie II
Dewey Decimal Classification:Philosophy and Psychology / Philosophy
open_access (DINI-Set):open_access
faculties:Fakultät für Philosophie und Erziehungswissenschaft
Licence (English):Creative Commons - CC BY 4.0 - Attribution 4.0 International