The system provides emotion coordinates (based on Russell's circumplex model) from text input or actions, with persistent emotional memory per entity. Think NPCs that remember how they feel about specific players or events.
I pre-trained a DistilBERT model on ~1k video game dialogues (Skyrim, Cyberpunk, etc.) and plan to extract and evaluate 100k+ dialogues soon. However, a studio/team can also manually add dialogues to enrich their own dataset.
The matrix doesn't generate dialogue; it only analyzes content. When you pass text or an action, it returns emotion coordinates on the valence (pleasant/unpleasant) and arousal (energetic/calm) scales. For example:
- [0.00, 0.00] = neutral
- [0.29, 0.80] = excited
- [-0.50, -0.30] = sad/tired
I made a quick visualizer to help understand it: https://valence-arousal-visualizer.vercel.app/
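To make the coordinate convention concrete, here's a rough sketch (not code from the repo; the thresholds and labels are invented) of how a game could bucket a [valence, arousal] pair into a coarse emotion label:

```rust
// Hypothetical helper, not the actual API: map a [valence, arousal]
// pair onto a coarse label, mirroring the examples above.
fn label_emotion(valence: f32, arousal: f32) -> &'static str {
    // Treat a small region around the origin as neutral.
    if valence.abs() < 0.1 && arousal.abs() < 0.1 {
        return "neutral";
    }
    match (valence >= 0.0, arousal >= 0.0) {
        (true, true) => "excited/happy",   // e.g. [0.29, 0.80]
        (true, false) => "calm/content",
        (false, true) => "angry/tense",
        (false, false) => "sad/tired",     // e.g. [-0.50, -0.30]
    }
}

fn main() {
    assert_eq!(label_emotion(0.29, 0.80), "excited/happy");
    assert_eq!(label_emotion(-0.50, -0.30), "sad/tired");
    println!("{}", label_emotion(0.0, 0.0)); // "neutral"
}
```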
The system helps select which dialogue/action to play based on emotional state:
- Player says something bad to NPC → system detects negative valence → game picks from "angry dialogue pool"
- NPC remembers past positive interactions → system returns positive valence → friendlier responses available
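A minimal sketch of how those two cases could be wired up on the game side; the `EmotionalMemory` struct, `pick_pool` helper, weights, and thresholds are all invented here for illustration, not part of the actual API:

```rust
// Hedged sketch: blend an NPC's remembered valence toward a player with
// the valence of the current input, then pick a dialogue pool.
struct EmotionalMemory {
    // Running average of past interaction valences with one player.
    remembered_valence: f32,
    interactions: u32,
}

impl EmotionalMemory {
    fn record(&mut self, valence: f32) {
        self.interactions += 1;
        let n = self.interactions as f32;
        self.remembered_valence += (valence - self.remembered_valence) / n;
    }
}

fn pick_pool(memory: &EmotionalMemory, current_valence: f32) -> &'static str {
    // Weight the long-term memory more heavily than the latest line.
    let blended = 0.7 * memory.remembered_valence + 0.3 * current_valence;
    if blended < -0.3 {
        "angry_pool"
    } else if blended > 0.3 {
        "friendly_pool"
    } else {
        "neutral_pool"
    }
}

fn main() {
    let mut memory = EmotionalMemory { remembered_valence: 0.0, interactions: 0 };
    memory.record(0.7); // past positive interactions
    memory.record(0.5);
    // A single mildly negative line doesn't flip a friendly NPC to hostile.
    println!("{}", pick_pool(&memory, -0.2)); // prints "friendly_pool"
}
```

The idea is that the matrix supplies the numbers, while the blending weights and thresholds stay in the designers' hands.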
So, the devs still write the dialogues or choose the next actions, but the matrix helps manage NPC emotional states and memory dynamically.
Here's the project structure to better understand how it works:
- src/config: Helper utilities for NPC configuration setup
- src/module: The core engine with emotion prediction, memory storage, and entity management
- src/api: FFI layer with `pub extern "C"` to bridge our modules with C/C++ game engines and modding tools (Unity, Unreal, etc.)
To integrate it, just run `build.sh`; it will create DLL files that let you call the matrix functions directly from C++/C/C#.
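For illustration, an exported function on the FFI side could look roughly like this; the name and signature are hypothetical (not the crate's real exports), and the "model" is a stub:

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

#[repr(C)]
pub struct EmotionCoords {
    pub valence: f32,
    pub arousal: f32,
}

/// Analyze a NUL-terminated UTF-8 string and return [valence, arousal].
/// Returns neutral coordinates if the pointer is null or not valid UTF-8.
#[no_mangle]
pub extern "C" fn matrix_analyze_text(text: *const c_char) -> EmotionCoords {
    let neutral = EmotionCoords { valence: 0.0, arousal: 0.0 };
    if text.is_null() {
        return neutral;
    }
    let cstr = unsafe { CStr::from_ptr(text) };
    let Ok(utf8) = cstr.to_str() else { return neutral };
    // Placeholder "model": real code would run the emotion predictor here.
    if utf8.contains("hate") {
        EmotionCoords { valence: -0.7, arousal: 0.6 }
    } else {
        neutral
    }
}
```

Built as a cdylib, something shaped like this could be consumed from C via a matching header or from C# via a DllImport declaration.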
I'd love feedback on code quality and overall architecture.
Feel free to be honest about the good, the bad, and the ugly. PRs welcome if you want to contribute!
> Theoretically, how would you imagine separating memories? Would you use memory clusters with different weights, for example?
There are, of course, a lot of possibilities. I was always fascinated by the idea of using something like a Semantic Web as the basic structure to represent memory, because it is so flexible. Every atomic memory element is a subject-predicate-object or subject-predicate-value triple. We can build bigger memory structures just by repeatedly using the same entities as subjects or objects. The predicates may come from predefined ontologies and/or themselves be represented as (higher-order) subjects/objects modelled by very fundamental predicates (like "<A> is a subtype of <B>").

We could model the fading of memory by removing atomic memory elements from the set. We could model the blurring of memory by slightly changing an atomic memory element (like "<Bob> has the haircolour <brown>" into "<Bob> has the haircolour <dark-brown>"). For decision making, we could use an algorithm that explores the set of memory elements, draws conclusions, and prioritizes them according to some evaluation function. The possibilities here are endless.
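As a toy encoding of that idea (my own sketch, not a spec), fading and blurring could look like this:

```rust
// Each atomic memory element is a (subject, predicate, object) triple;
// fading removes triples, blurring rewrites an object in place.
#[derive(Debug, Clone, PartialEq)]
struct Triple {
    subject: String,
    predicate: String,
    object: String, // an entity name or a literal value
}

fn triple(s: &str, p: &str, o: &str) -> Triple {
    Triple { subject: s.into(), predicate: p.into(), object: o.into() }
}

fn main() {
    let mut memory = vec![
        triple("Bob", "has-haircolour", "brown"),
        triple("Bob", "is-subtype-of", "Human"),
    ];

    // Blurring: the memory of Bob's hair colour drifts slightly.
    if let Some(m) = memory.iter_mut()
        .find(|t| t.subject == "Bob" && t.predicate == "has-haircolour")
    {
        m.object = "dark-brown".into();
    }

    // Fading: the subtype fact is simply forgotten.
    memory.retain(|t| t.predicate != "is-subtype-of");

    println!("{:?}", memory); // only the (blurred) hair-colour triple remains
}
```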
In this model, the combination of certain predicates, the structures resulting from them, and the particular algorithms that operate on them constitutes a memory cluster. For example, if the NPC is hungry, this may activate an algorithm that looks for the predicate "<x> is of type <Food>", then for every x found looks for "<x[1]> is located at <y>", and then for every location tries to determine a path to that location by connecting memory elements ("The cheese is in the fridge", "The fridge is in the kitchen", "The kitchen is adjacent to the living room", "The armchair is in the living room", "The NPC sits in the armchair").
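A toy walk over such a triple set might look like the following; it only chases the containment links and leaves adjacency/pathfinding out, and all predicate names are made up:

```rust
// Toy "hunger cluster": find everything typed as Food, then follow
// location triples outward a few hops.
type Triple = (&'static str, &'static str, &'static str);

fn objects<'a>(kb: &'a [Triple], subject: &str, predicate: &str) -> Vec<&'a str> {
    kb.iter()
        .filter(|(s, p, _)| *s == subject && *p == predicate)
        .map(|(_, _, o)| *o)
        .collect()
}

fn main() {
    let kb: Vec<Triple> = vec![
        ("Cheese", "is-of-type", "Food"),
        ("Cheese", "is-located-at", "Fridge"),
        ("Fridge", "is-located-at", "Kitchen"),
        ("Kitchen", "is-adjacent-to", "LivingRoom"),
        ("Armchair", "is-located-at", "LivingRoom"),
        ("NPC", "sits-in", "Armchair"),
    ];

    // 1. Which things are food?
    let foods: Vec<&str> = kb.iter()
        .filter(|(_, p, o)| *p == "is-of-type" && *o == "Food")
        .map(|(s, _, _)| *s)
        .collect();

    // 2. For each food, follow containment links outward a few hops.
    for food in foods {
        let mut here = food;
        let mut path = vec![here];
        for _ in 0..4 {
            match objects(&kb, here, "is-located-at").into_iter().next() {
                Some(next) => { here = next; path.push(here); }
                None => break,
            }
        }
        // Prints: Cheese reachable via ["Cheese", "Fridge", "Kitchen"]
        println!("{} reachable via {:?}", food, path);
    }
}
```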
Going back to my old example of being offered food I dislike when I am very hungry, this could be modelled as: "<Offer> is of type <Event>", "<Entity-4758> is of type <Offer>", "<Entity-4758> has actor <PC>", "<Entity-4758> has receiver <NPC-6587>", "<Entity-4758> has transfer-item <Item-8974>", "transfer-item <Item-8974> is of type <Apple>", "<NPC-6587> dislikes <Apple>", "<Entity-4758> has point in time <T-7125>", "<NPC-6587> has physical state <P-7894>", "<P-7894> has point in time <T-7125>", "<P-7894> is of type <Hunger>", "<P-7894> has intensity '0.95'" (range 0..1).[1]
To simulate remembering the situation, we can then run an evaluation function on this data set that gives us a valence/arousal vector. This function may have other parameters (for example, the current mood of the NPC) that can modify the outcome of the evaluation. Or we can simulate the fading of memory by randomly modifying the value of the predicate "<P-7894> has intensity '0.95'" towards neutral, or the object of "transfer-item <Item-8974> is of type <Apple>" into <Banana>, both of which would change a later evaluation of event <Entity-4758>.
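Sketching both operations with made-up numbers (the formula is purely illustrative, and a fixed decay step stands in for the random drift to keep the example dependency-free):

```rust
// (1) evaluate the remembered offer event into a valence/arousal pair,
// (2) fade the remembered hunger intensity towards neutral over time.
struct RememberedOffer {
    npc_dislikes_item: bool, // "<NPC-6587> dislikes <Apple>"
    hunger_intensity: f32,   // "<P-7894> has intensity '0.95'", range 0..1
}

/// Evaluation function: turns the stored event plus current mood into
/// a (valence, arousal) vector. The exact mapping is invented here.
fn evaluate(offer: &RememberedOffer, current_mood_valence: f32) -> (f32, f32) {
    // Being offered disliked food is unpleasant; in this toy version,
    // stronger remembered hunger simply amplifies the reaction.
    let base_valence = if offer.npc_dislikes_item { -0.6 } else { 0.4 };
    let valence = base_valence * (0.5 + 0.5 * offer.hunger_intensity)
        + 0.2 * current_mood_valence;
    let arousal = 0.3 + 0.5 * offer.hunger_intensity;
    (valence.clamp(-1.0, 1.0), arousal.clamp(-1.0, 1.0))
}

/// Fading: nudge the stored intensity towards neutral.
fn fade(offer: &mut RememberedOffer, decay: f32) {
    offer.hunger_intensity = (offer.hunger_intensity - decay).max(0.0);
}

fn main() {
    let mut memory = RememberedOffer { npc_dislikes_item: true, hunger_intensity: 0.95 };
    println!("{:?}", evaluate(&memory, 0.0)); // fresh memory

    fade(&mut memory, 0.4); // some in-game time passes
    println!("{:?}", evaluate(&memory, 0.0)); // the faded memory reads milder
}
```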
Not all of the predicates from my example represent memory in the strict sense. Some represent built-in ontologies that should not change ("<Offer> is of type <Event>") or should be treated as immutable facts to keep things simple ("<NPC-6587> dislikes <Apple>"). Our algorithms that modify memory must not change them (unless we want to simulate that an NPC is starting to go crazy).
https://www.researchgate.net/publication/235361517_A_Circump...
Kind of reminds me of the social interaction puzzle in Oblivion :)
Sweet, so it can be used in Unreal Engine. It would be awesome to see this used for a local LLM game that can generate its own unique NPCs.