Evaluation evaluated... still a long way to go

Evaluation is a seriously underdeveloped area in Interactive Storytelling (IS). If the underlying technology is immature or the content is scanty, evaluation will expose those limitations. The researcher therefore needs to be clear about what is being evaluated and about the means to be used during the evaluation process.

A common-sense approach to evaluating interactive systems is to measure the impact these systems have; but what does 'impact' mean for Interactive Storytelling systems?

In traditional narrative, the Aristotelian impact, or tension, is located in the plot of the story. Interaction, by contrast, entails the user's involvement, so evaluation of interactive narrative is arguably better focused on the impact on the user.

However, this exposes a fundamental problem within Interactive Storytelling: to what extent should evaluation focus on the impact on the user rather than on the impact of the plot of the story?

This issue remains unresolved. It is a contentious topic currently being explored by experts in both academia and the entertainment industry, and the answer has so far eluded everyone...

Computer scientists will often compare systems within the same research field, but evaluating a system only against its own objectives is not enough; a thorough comparison with other, similar systems is likely to be more informative.

To compare two or more IS systems, though, the purposes those systems serve must be aligned before an evaluation technique is chosen. For example, you would not want to compare an IS system intended to evoke the user's emotions with one that uses re-planning to steer the narrative back towards an intended plot, as their underlying purposes are clearly not aligned.

One generic approach to classifying evaluation could be to develop conceptual models for IS systems, a more abstract method than the implementation formalisms often used. Another could be to explore the definition of conceptual dramatic trajectories in more detail.

The RIDERS network aims to bring researchers together so that different ideas and techniques regarding evaluation can be discussed. Please contact us if you have any thoughts on evaluation or interesting research to add; the network would be happy to read your feature submission...

In addition, the RIDERS network is interested in hearing about evaluation methods used in other disciplines, such as ethnography, psychology, or literature and film.

Lecturer, Computer Science, Heriot-Watt University and Assistant Investigator for RIDERS


Sandy’s main research interests are intelligent agents and synthetic actors, pedagogical applications of interactive storytelling, EEG bio-monitoring, emergent narrative and interactive drama authoring, the development of evaluation methodologies for interactive dramas, and affective computing. His research programmes focus on interactive narrative and emotion mechanisms for intelligent agents, and on the development of research processes that draw on knowledge from adjacent fields such as gaming, digital entertainment, graphical arts and human-computer interaction.


Sandy also works with Professor Aylett, who leads the RIDERS network project.



RIDERS | Heriot-Watt University (Riccarton Campus) | Currie | Midlothian | EH14 4AP