Removing Silent Barriers by Addressing Language Complexity in Search Results

Snurb — Wednesday 24 September 2025 23:26
Search Engines | SEASON 2025 | Liveblog

The final session on this first day of the SEASON 2025 conference starts with Jennifer Gnyp, whose interest is in integrating language complexity into search; this is critical for inclusive search technologies. Inclusion only works if the burden is not on individuals to adjust and adapt to the societal mainstream, but on society to remove the barriers that exclude them; their needs must be considered from the beginning.

Key aspects to consider here are readability and comprehensibility: readability focusses only on the formal features of a text, while comprehensibility focusses on content and meaning, and measures the ease of understanding; assessing comprehensibility requires didactic user testing.
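As a rough illustration of what a purely formal readability measure computes (my illustration, not something presented in the talk), here is a minimal Python sketch of the classic Flesch Reading Ease formula; note that it sees only sentence lengths and syllable counts, with no notion of meaning, which is exactly the limitation noted above:

```python
import re

def count_syllables(word: str) -> int:
    # Crude vowel-group heuristic; real readability tools use dictionaries.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease: higher scores mean easier text.
    # Uses only formal features: words per sentence, syllables per word.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / max(1, len(sentences))
    syllables_per_word = syllables / max(1, len(words))
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

print(flesch_reading_ease("The cat sat on the mat."))
print(flesch_reading_ease("Epistemological complexity impedes comprehensibility."))
```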

Search engines don’t currently optimise for this; they focus on topic relevance, link structures, and popularity metrics, and to some extent on readability. They may offer interface accessibility features, assistive technologies, and simplified visual designs, but – even when they claim to offer content in ‘plain language’ – they usually fail to consider the linguistic complexity of search results.

So there is no inclusion while a silent barrier to equitable information access remains; how might we address this, then? Language complexity assessment might be one element: it should be regarded as a core component of information quality, and used to redefine content relevance. Readability metadata could also be added to source documents, resulting in per-document complexity scores; user profiles could include specialised complexity needs; and users could be given the ability to filter or rank search results by their complexity scores, too.
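A minimal sketch of how such per-document complexity scores and a user's complexity preference might be blended at ranking time could look like the following; the field names, score ranges, and blending weight are all illustrative assumptions, not details from the talk:

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float   # conventional topical relevance score, 0..1
    complexity: float  # per-document complexity score, 0 (plain) .. 1 (expert)

def rerank(results: list[Result], preferred_complexity: float,
           complexity_weight: float = 0.5) -> list[Result]:
    # Blend topical relevance with closeness to the user's preferred
    # complexity level; the weight decides how strongly complexity counts.
    def score(r: Result) -> float:
        fit = 1.0 - abs(r.complexity - preferred_complexity)
        return (1 - complexity_weight) * r.relevance + complexity_weight * fit
    return sorted(results, key=score, reverse=True)

results = [
    Result("https://example.org/paper", relevance=0.9, complexity=0.9),
    Result("https://example.org/explainer", relevance=0.8, complexity=0.3),
]
# A user profile requesting plain-language content surfaces the explainer first:
for r in rerank(results, preferred_complexity=0.2):
    print(r.url)
```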

Some of this is already technically possible; text complexity assessment tools are available and becoming more powerful with the help of transformer-based models. How generalisable such approaches are to the diversity of content available on the Web remains unclear, however.
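As a sketch of what such transformer-based assessment might look like in practice, the following assumes a sequence-classification model fine-tuned as a complexity regressor and loaded via the Hugging Face transformers library; the checkpoint name is a hypothetical placeholder, and no specific tool was named in the talk:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint: any sequence-classification model fine-tuned as a
# regressor on complexity-annotated text (num_labels=1) would slot in here.
CHECKPOINT = "your-org/text-complexity-regressor"

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT)
model.eval()

def complexity_score(text: str) -> float:
    # Return the model's scalar complexity estimate for a passage.
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze().item()

print(complexity_score("Search engines rank pages by relevance."))
```

Whether a single such model generalises across the diversity of Web content – genres, domains, languages – is, as noted above, an open question.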

But such approaches could also introduce new biases, especially through the use of specifically trained models that bring their own biases; personalisation tools might also open new avenues for stigmatisation, raise privacy concerns, or lead to undesirable personalised content filtering. It’s crucial that any new technologies seek to enable rather than constrain, and that they are thoroughly and carefully benchmarked.
