The final speaker in this session at the SEASON 2025 conference is Nora Lindemann, whose focus is especially on the role of Large Language Models in the information ecosystem. It is important to note here that knowledge is embodied and situated, while information is broken-down knowledge that can be transferred through communication. Information is also relational: a relation is a connection between two entities that is constituted by their specific interaction and modulated by power.
Online information access is now changing rapidly: from conventional search results that are (opaquely) algorithmically mediated, to the intrusion of AI-generated summaries into search results pages, to the outright replacement of search by discursive interaction with an AI chatbot. Such AI-generated results may still cite sources for their responses, but users are considerably less likely to follow these.
This produces a new networked imaginary of search processes, involving several relational and related aspects. They include the narratives used to describe searcher motivations (such as individualism, objectivity, or convenience); the companies and funders that operate and compete for market share in this search ecosystem, and promote these narratives; the material infrastructures that these companies operate, including devices, data centres, and the physical network links that connect them; the technological infrastructures (including AI and search algorithms) that operate on this physical layer; the subject knowledge, embodied in people, that these algorithms depend on; the information (and misinformation) produced by these people that search algorithms draw on; the texts and other data that represent such information within the ecosystem's datastores, and that are produced by the ecosystem in response to user queries; and finally the meaning-making processes that users apply to these outputs as they encounter them.
The introduction of LLMs into this environment is problematic: they may draw on and reproduce misinformation and hegemonic knowledges; they may enter into feedback loops where synthetically generated texts become input to further synthetic text generation; they change the affordances and subjectivities of users, foregrounding convenience over engagement with information; they disembody and desituate knowledge from people; they pollute the physical, environmental ecosystem; and they concentrate power amongst the operators of search engines and GenAI platforms. We must urgently confront these challenges.