Varia
No. 28 (2025): Archiving emergencies: documents in conflicts and catastrophes
The future of our past: How generative artificial intelligence challenges the stewardship of history
Law Institute of the Lithuanian Centre for Social Sciences, Vilnius, Lithuania
Secretary-General of the European Data Protection Supervisor, Brussels, Belgium
Abstract
The discourse surrounding artificial intelligence is increasingly dominated by future-oriented perspectives and forward-looking contributions. This paper proposes to shift the focus backwards and positions digitized historical information at the center of the discussion. Seeking to investigate the impact of generative artificial intelligence on how humans receive and perceive information about our past, the authors cover three main elements of the contemporary information cycle: the potential limitations of data, AI model performance, and human attitudes towards generative AI tools in acquiring information. By evaluating opinions found in the scientific literature and presenting practical cases, the authors conclude that AI systems may eventually become unavoidable intermediaries for acquiring information about historical events. Acknowledging the risks posed by generative AI hallucinations, the paper also reviews a holistic approach to navigating the age of artificial information.
References
- Baack, S. (2024). A Critical Analysis of the Largest Source for Generative AI Training Data: Common Crawl. In FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 2199-2208. https://doi.org/10.1145/3630106.3659033
- Barassi, V. (2024). Toward a Theory of AI Errors: Making Sense of Hallucinations, Catastrophic Failures, and the Fallacy of Generative AI. Harvard Data Science Review, (Special Issue 5). https://doi.org/10.1162/99608f92.ad8ebbd4
- Büthe, T., Djeffal, C., Lütge, C., Maasen, S., & Ingersleben-Seip, N. von. (2022). Governing AI – attempting to herd cats? Introduction to the special issue on the Governance of Artificial Intelligence. Journal of European Public Policy, 29(11), 1721–1752. https://doi.org/10.1080/13501763.2022.2126515
- Casey, R. D. (1944). GI Roundtable 2: What is Propaganda? https://www.historians.org/resource/gi-roundtable-2-what-is-propaganda-1944/
- Citron, D. K., & Chesney, R. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107. https://scholarship.law.bu.edu/faculty_scholarship/640/
- Cossio, M. (2025). A comprehensive taxonomy of hallucinations in Large Language Models. https://doi.org/10.48550/arXiv.2508.01781
- Kidd, C., & Birhane, A. (2023). How AI can distort human beliefs. Science, 380, 1222-1223. https://doi.org/10.1126/science.adi0248
- Makhortykh, M., Zucker, E. M., Simon, D. J., Bultmann, D., & Ulloa, R. (2023). Shall androids dream of genocides? How generative AI can change the future of memorialization of mass atrocities. Discover Artificial Intelligence, 3(1), 28. https://doi.org/10.1007/s44163-023-00072-6
- Olanipekun, S. O. (2025). Computational propaganda and misinformation: AI technologies as tools of media manipulation. World Journal of Advanced Research and Reviews, 25(01), 911-923. https://doi.org/10.30574/wjarr.2025.25.1.0131
- Sætra, H. S. (2023). Generative AI: Here to stay, but for good? Technology in Society, 75. https://doi.org/10.1016/j.techsoc.2023.102372
- Spennemann, D. H. (2025). Delving into: The quantification of AI-generated content on the internet (synthetic data). arXiv. https://doi.org/10.48550/arXiv.2504.08755
- UNESCO. (2024). AI and the Holocaust: rewriting history? The impact of artificial intelligence on understanding the Holocaust. https://doi.org/10.54675/ZHJC6844
- Villalobos, P., Ho, A., Sevilla, J., Besiroglu, T., Heim, L., & Hobbhahn, M. (2024). Will we run out of data? Limits of LLM scaling based on human-generated data. https://doi.org/10.48550/arXiv.2211.04325
- Walter, Y. (2025). Artificial influencers and the dead internet theory. AI & Society 40, 239–240. https://doi.org/10.1007/s00146-023-01857-0
- Xu, Z., Jain, S., & Kankanhalli, M. (2025). Hallucination is Inevitable: An Innate Limitation of Large Language Models. https://doi.org/10.48550/arXiv.2401.11817
- Zannettou, S., Sirivianos, M., Blackburn, J., & Kourtellis, N. (2019). The web of false information: Rumors, fake news, hoaxes, clickbait, and various other shenanigans. Journal of Data and Information Quality, 11(3), 1-37. https://doi.org/10.1145/3309699
- Zhou, J., Zhang, Y., Luo, Q., Parker, A. G., & De Choudhury, M. (2023). Synthetic Lies: Understanding AI-Generated Misinformation and Evaluating Algorithmic and Human Solutions. In CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Paper 436, pp. 1-20). https://doi.org/10.1145/3544548.3581318