Moving Beyond Static AI: The Need for Dynamic Personalisation and Where We Go Next
Why the one-size-fits-all approach to interaction modelling falls short, and why we need to rethink the role of the human expert.

Artificial Intelligence is becoming deeply integrated into our daily lives and work. While the potential is enormous, I find that many current AI systems operate with a significant limitation: they tend to use static, one-size-fits-all interaction models. This generic approach often falls short because it doesn't account for the simple fact that humans are diverse and dynamic.
For Human-AI Interaction (HAI) to be truly effective, especially in augmenting our capabilities and supporting complex decision-making, AI systems need to evolve. They must gain the capability to dynamically understand and adapt to us as individuals. This includes our changing needs, preferences, cognitive states, and levels of expertise.
Over the past few years, significant research has explored how to build these more adaptive systems. I've been thinking a lot about this landscape, and I see several crucial components coming together.
Foundations: Understanding the User and the System
At the core, building adaptive systems requires User Modeling (UM). This involves creating and maintaining representations of individual users based on their interactions. Historically, this evolved from simple stereotypes to sophisticated models built from implicitly gathered behavioural data. The goal is always to use this model to personalise the system's behaviour, adapting content, interfaces, or support to better meet user needs.
However, as these models become more complex, often using deep learning, they can become opaque. This is where Explainable AI (XAI) becomes critical. We need methods to make the AI's reasoning understandable, which is essential for trust, debugging, fairness assessment, and meaningful user interaction.
Guiding all of this should be the principles of Human-Centered AI (HCAI). This philosophy prioritises human needs, values, and well-being, aiming for AI that augments and empowers people, keeping them in control, rather than simply replacing them. The synergy between UM, XAI, and HCAI is not just beneficial; I believe it is essential for creating adaptive systems we can rely on. HCAI provides the guiding principles, UM provides the personalisation mechanisms, and XAI delivers the necessary transparency.
Furthermore, Personalised Decision Support Systems (DSS), particularly in fields like healthcare, show the potential and challenges of tailoring assistance. They aim to integrate patient-specific data and preferences into the decision process, moving towards shared decision-making. However, their success hinges heavily on human factors like trust, usability, and workflow integration, again highlighting the need for an HCAI approach.
Techniques for Dynamic User Modelling
Building dynamic user models relies heavily on inferring user profiles from implicit data. Instead of constantly asking users for input, systems analyse interaction patterns like clicks, dwell times, search queries, and sequential behaviour. Capturing these behavioural nuances allows models to update continuously.
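As a toy illustration of this kind of implicit profiling, here is a minimal sketch; the event types, weights, and decay rate are hypothetical choices, not values from any particular system:

```python
from collections import defaultdict

# Hypothetical per-event weights: stronger implicit signals move the profile more.
EVENT_WEIGHTS = {"click": 1.0, "dwell": 0.5, "search": 0.8}
DECAY = 0.9  # older interests fade as new behaviour arrives

class ImplicitProfile:
    """Keeps a per-topic interest score updated from implicit signals."""

    def __init__(self):
        self.scores = defaultdict(float)

    def observe(self, topic: str, event: str, magnitude: float = 1.0):
        # Decay all existing interests, then reinforce the observed topic.
        for t in self.scores:
            self.scores[t] *= DECAY
        self.scores[topic] += EVENT_WEIGHTS.get(event, 0.1) * magnitude

    def top_interests(self, n: int = 3):
        return sorted(self.scores, key=self.scores.get, reverse=True)[:n]

profile = ImplicitProfile()
profile.observe("ml", "click")
profile.observe("ml", "dwell", magnitude=2.0)   # long dwell time
profile.observe("privacy", "search")
print(profile.top_interests())  # 'ml' ranks above 'privacy'
```

Real systems would decay per unit of time rather than per event and learn the weights from data, but the continuous-update pattern is the same.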
Beyond general behaviour, research focuses on modelling specific attributes crucial for adaptation:
- Cognitive Styles and States: Inferring factors like learning style, cognitive load, or attention levels, often using interaction data or even sensors like eye tracking, allows for adapting information presentation or explanation styles. Modelling these factors is crucial for adapting scaffolding to real-time cognitive capacity. However, this remains an emerging frontier, often requiring more intrusive sensing and facing challenges in validity and reliability.
- Expertise Levels: Techniques like knowledge tracing in tutoring systems track skill mastery over time based on performance. Expertise might also be inferred from contribution quality or interaction patterns. Dynamically modelling expertise allows systems to adjust difficulty, guidance, or information presentation.
- Preferences and Interests: This is a well-established area, driven by recommenders. Modern methods rely heavily on analysing interaction history, using techniques like collaborative filtering, content-based analysis, and deep learning models on sequential data to capture both short-term and long-term preferences.
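For the expertise case, knowledge tracing is commonly implemented as Bayesian Knowledge Tracing (BKT), which maintains a probability that the learner has mastered each skill. A minimal sketch, with illustrative (not fitted) parameters:

```python
# Bayesian Knowledge Tracing (BKT): tracks P(skill mastered) per learner.
# Parameter values below are illustrative, not fitted to real data.
P_INIT = 0.2    # prior probability the skill is already mastered
P_LEARN = 0.15  # chance of acquiring the skill on each attempt
P_SLIP = 0.1    # chance of answering wrongly despite mastery
P_GUESS = 0.25  # chance of answering correctly without mastery

def bkt_update(p_mastery: float, correct: bool) -> float:
    """One BKT step: condition on the observed answer, then apply learning."""
    if correct:
        evidence = p_mastery * (1 - P_SLIP)
        posterior = evidence / (evidence + (1 - p_mastery) * P_GUESS)
    else:
        evidence = p_mastery * P_SLIP
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - P_GUESS))
    # The learner may acquire the skill during the attempt itself.
    return posterior + (1 - posterior) * P_LEARN

p = P_INIT
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(f"estimated mastery: {p:.2f}")
```

A tutoring system would keep one such estimate per skill and use it to adjust problem difficulty or hint frequency.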
It is important to recognise the interactive nature of dynamic modelling. The system adapts, changing the user's experience and subsequent behaviour, which feeds back into the model. This inherent feedback loop complicates evaluation and means system adaptations can actively shape the user's state.
Advanced modelling approaches are often necessary:
- Bayesian Methods handle uncertainty well and allow incremental updates. Dynamic Bayesian Networks (DBNs) explicitly model temporal processes. Strengths include incorporating prior knowledge and interpretability.
- Reinforcement Learning (RL) learns optimal policies through interaction and rewards. It can optimise long-term engagement or adapt UI elements. Reinforcement Learning from Human Feedback (RLHF) learns reward models from human preferences to align AI behaviour with human values. Challenges include non-stationary environments and careful reward design.
- Deep Learning (DL) automatically learns complex patterns from large datasets. Architectures like RNNs, LSTMs, GNNs, and Transformers model sequential or relational data effectively. DL enables universal user models but often lacks interpretability.
- Federated Learning (FL) enhances privacy by training models locally on user devices, aggregating only model updates centrally.
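As one concrete instance of the RL approach above, a simple epsilon-greedy bandit can adapt a UI element from engagement feedback. The layout variants and simulated reward rates below are hypothetical:

```python
import random

# Epsilon-greedy bandit adapting a UI element (hypothetical variants/rewards).
# Each arm is a candidate layout; rewards stand in for engagement signals.
random.seed(0)

class EpsilonGreedy:
    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental mean keeps the estimate cheap to maintain online.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Simulated engagement: the "compact" layout pays off more often.
TRUE_RATES = {"compact": 0.7, "detailed": 0.4}
bandit = EpsilonGreedy(list(TRUE_RATES))
for _ in range(2000):
    arm = bandit.select()
    bandit.update(arm, 1.0 if random.random() < TRUE_RATES[arm] else 0.0)
print(max(bandit.values, key=bandit.values.get))
```

Production systems would use contextual bandits or full RL with user-state features, but the explore/exploit loop is the core idea.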
Choosing the right technique involves trade-offs between interpretability, accuracy, data needs, and computational cost.
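The federated learning idea can be sketched with a toy FedAvg-style loop, assuming two clients and a one-parameter linear model (all data and hyperparameters here are illustrative):

```python
# Toy federated averaging (FedAvg-style): clients train locally, the server
# only sees weight updates, never raw interaction data. Model and data are
# illustrative: a single-weight linear model fitted by local gradient steps.

def local_step(weight, data, lr=0.1):
    """One local gradient-descent step on y = w * x with squared error."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

def fed_avg(weights, sizes):
    """Server aggregates local weights, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * n for w, n in zip(weights, sizes)) / total

global_w = 0.0
clients = [
    [(1.0, 2.0), (2.0, 4.0)],               # client A: behaves like y = 2x
    [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)],   # client B: behaves like y = 3x
]
for _ in range(50):  # communication rounds
    local = [local_step(global_w, data) for data in clients]
    global_w = fed_avg(local, [len(d) for d in clients])
print(f"global weight: {global_w:.2f}")  # settles between the two clients
```

The global model converges to a compromise between the clients' local optima, and the server never handles raw behavioural data, which is the privacy benefit the technique trades against accuracy and communication cost.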
Adaptive Interaction Strategies in Practice
These dynamic models enable various adaptive strategies:
- Real-time Onboarding: Systems personalise the initial user experience, tailoring content, task lists, and support based on inferred skills or observed behaviour. The goal is improved engagement, faster learning, and better retention. A balance with human interaction is often needed to build rapport.
- Intelligent Tutoring Systems (ITS): These systems personalise instruction by tailoring problem sequences, feedback, and hints based on the learner's evolving state, often using knowledge tracing and model tracing. Open Learner Models can visualise the system's assessment for student reflection. Authoring ITS remains complex, driving research into simpler authoring tools and self-improving systems.
- Adaptive Scaffolding: Providing temporary, tailored support for complex tasks. AI can adjust the level and type of scaffolding (e.g., linguistic style, workflow guidance, co-writing assistance, collaboration coaching) in real time based on inferred user state. The challenge is providing enough support without hindering learning or undermining agency.
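A deliberately simple, rule-based sketch of such scaffolding selection; the thresholds and support levels are hypothetical and would need validating against a real user model:

```python
# Sketch of rule-based scaffolding selection; thresholds and levels are
# hypothetical and would be tuned against a real user model in practice.

def choose_scaffolding(mastery: float, cognitive_load: float) -> str:
    """Pick a support level from inferred mastery and load (both in [0, 1])."""
    if cognitive_load > 0.8:
        return "step-by-step walkthrough"  # user is overloaded: take over more
    if mastery < 0.3:
        return "worked example"            # novice: show the full pattern
    if mastery < 0.7:
        return "targeted hint"             # intermediate: nudge, don't solve
    return "no scaffolding"                # expert: fading support preserves agency

print(choose_scaffolding(mastery=0.5, cognitive_load=0.4))  # "targeted hint"
```

Note how the final branch fades support entirely: withdrawing scaffolding as competence grows is exactly the agency-preserving behaviour the paragraph above calls for.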
The Interplay: UM, XAI, and HCAI
Effective adaptation requires these fields to work together. We need explainable user models so users and developers understand the system's representation of the user. We also need personalised explanations for system actions (like recommendations), tailored to user characteristics or context to build trust and enable informed decisions. Explanations can be personalised based on user traits (personality, knowledge), context, desired level of detail, or style/format.
Designing these systems through an HCAI lens means balancing adaptation with user control, keeping the user informed, providing feedback mechanisms, and allowing overrides. Interactive machine learning is one way users can participate in refining models. A major gap remains in explaining the dynamic evolution of user models and adaptive strategies over time.
Ethical Considerations are Non-Negotiable
The power of dynamic personalisation brings significant ethical responsibilities.
- Bias and Fairness: Algorithmic bias can be introduced through data or design and amplified by feedback loops. Static fairness metrics are often insufficient for dynamic systems; we need to consider long-term, dynamic notions of fairness. Addressing this requires accurately modelling feedback loops, evaluating long-term trade-offs, handling intersectionality, and ensuring transparency over time. Mitigation involves technical solutions, diverse data, audits, and deep engagement with the socio-technical context.
- Transparency and Accountability: Users need to understand how their data is used and why systems adapt. XAI provides mechanisms for transparency. Accountability, establishing responsibility for outcomes, is challenging in complex adaptive systems and requires transparency, audit trails, and clear governance. Explaining the evolving process of adaptation over time is a key challenge.
- User Autonomy and Control: Opaque algorithms, biased personalisation, filter bubbles, or overly proactive systems can undermine user agency. Strategies include transparency, user controls over data and personalisation, and override mechanisms. There is a core tension between maximising personalisation effectiveness (often via implicit data) and maximising user control and transparency.
My Perspective: The Real Frontier is Architectural Innovation
Reviewing this landscape reinforces my belief about where the most critical and exciting research needs to happen next. While improving dynamic user modelling and adaptive strategies within existing frameworks is valuable, I believe the truly transformative work lies in developing fundamentally new model architectures.
The future, as I see it, isn't just about bigger, general purpose models. It's about creating architectures that are:
- Smaller, Focused, and Purpose-Built: Designed specifically for particular decision-support or augmentation tasks, enabling deeper specialisation.
- Inherently Adaptive to Real-Time Feedback: Architected from first principles to rapidly integrate and respond to fine-grained human feedback, including implicit signals of understanding and intent.
- Geared for Human-in-the-Loop Collaboration: Moving beyond simple reaction to past data, towards systems that can interpret and act on signals related to the user's current cognitive state, decision process, and intentions during interaction.
The work surveyed here on dynamic personalisation provides the essential context and identifies the necessary components. But I view it as the foundation upon which we must build these next-generation architectures. The challenge is to move beyond retrofitting adaptation onto existing models and instead design new classes of models where sensitivity to real-time human signals and adaptive capability are core architectural properties. That, I believe, is the research frontier that holds the most promise for genuinely augmenting human intelligence and capability through AI.