My Master Thesis Blog

Trustworthiness in virtual characters

My work here is done

I have now finished my report, presentation and opposition, and I received an A for the work!

I would like to thank everyone who participated in the pilot studies, pre-study and main study. A special thanks goes to Robin Palmberg, whom I worked closely with during the later stages of the thesis work, and to my supervisor Christopher Peters for helping me along the way.

If you are interested in reading the report, you can find it here.

I've put an enormous amount of time into this work and it has been both challenging and tough. I know some parts could use a bit more work, but I'm really happy with the results! Since this was my last course at KTH, it's time for me to find a job. But first, I'm going to take a much needed vacation.

I'm really excited about the future!


July 19, 2016

The blog is back up!

After two weeks of conducting the pre-study and the main study, the blog is now back up!

Right now I'm in the midst of analyzing the data from the main study to see what the results show. The studies went excellently, and I conducted them in collaboration with Robin Palmberg, a master thesis student with a similar research question. We managed to get 22 participants for both the pre-study and the main study.


Looking at a close-up of a virtual character on the 4k-screen


Here are some pictures from the main study:




In the upcoming two weeks I will be writing my report, and on the 21st of June I will present my findings in the VIC-studio.


May 31, 2016

Demo Behaviour Scene

I've been putting together a demo scene consisting of different characters and different behaviours. These will need to be tweaked and given more depth, but it's a really good start.

There are 4 different behaviours right now: the first one is angry, the second neutral, the third happy and the last insecure. I'm especially going to try to tweak the happy-looking behaviour because it looks a bit weird and stiff right now. I'm also going to change more REM (Realistic Eye Movement) variables to make the values more extreme. For example, when looking away, the virtual characters should really look away, and for the insecure behaviour, I want the head movement and blinking to be more twitchy.
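To keep the four behaviours easy to tweak, I'm thinking of collecting the variables into presets. Here's a minimal sketch of what I have in mind - the parameter names are my own placeholders, not the actual REM variables:

```csharp
using UnityEngine;

// A sketch of per-behaviour presets. The field names are my own
// placeholders, not the actual REM asset variables.
[System.Serializable]
public struct BehaviourPreset
{
    public string name;
    public float lookAwayProbability; // 0..1, chance of breaking eye contact
    public float headMoveSpeed;       // higher = faster head turns
    public float blinksPerMinute;
    public float twitchiness;         // 0..1, random head/blink jitter

    public BehaviourPreset(string name, float lookAway, float headSpeed,
                           float blinks, float twitch)
    {
        this.name = name;
        lookAwayProbability = lookAway;
        headMoveSpeed = headSpeed;
        blinksPerMinute = blinks;
        twitchiness = twitch;
    }

    // The four demo behaviours. The insecure preset gets deliberately
    // extreme values so the looking away and twitchy blinking stand out.
    public static readonly BehaviourPreset[] Demo =
    {
        new BehaviourPreset("angry",    0.2f, 1.5f, 15f, 0.1f),
        new BehaviourPreset("neutral",  0.3f, 1.0f, 15f, 0.0f),
        new BehaviourPreset("happy",    0.1f, 1.2f, 12f, 0.0f),
        new BehaviourPreset("insecure", 0.8f, 2.0f, 30f, 0.8f),
    };
}
```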

Demo scene of 4 different appearances and 4 different behaviours.



May 5, 2016

Updated User Study

Here is my updated user study description and research question.


May 2, 2016

Demo Scene

I'm currently working on my research question and the final user study. I have not worked out all the details yet, and my research question is still preliminary, as are the specifics of the user study.

Demo scene of the trust game for the user study in the incongruent group



April 25, 2016

Character Scene Demo v1

With the help of the MCS Female Lite model and the Realistic Eye Movement (REM) asset I've created a demo scene with 3 characters: one with a trustworthy behaviour, one with a neutral behaviour and one with an untrustworthy behaviour (see video demo below). The factors I've manipulated in these 3 characters are attractiveness, facial expressions, nervousness, eye-contact length intervals and probability, as well as head movement speed. It's just an initial demo of what can be accomplished using the two assets.

If I get my hands on the full version of the MCS Female model, I'll be able to add blinking as well as better-looking and more numerous facial expressions and appearances, which could be incredibly useful.

There is a wide variety of other factors that can be investigated and added as well. For example, using the REM asset you can specify points of interest for the models to gaze at. This can be useful since some trust models specify an increased upward eye gaze in trustworthy characters and an increased downward eye gaze in untrustworthy ones (Source).
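As a rough illustration of how such a vertical gaze bias could be wired up in Unity - this is my own sketch, not the REM asset's API:

```csharp
using UnityEngine;

// Sketch: bias a gaze point of interest above or below the observer's
// eye level, following the idea that upward gaze reads as trustworthy
// and downward gaze as untrustworthy. Generic Unity code, not REM's API.
public class GazeBias : MonoBehaviour
{
    public Transform observer;             // typically the main camera
    [Range(-1f, 1f)]
    public float trustBias = 0f;           // +1 = gaze up, -1 = gaze down
    public float maxVerticalOffset = 0.3f; // metres above/below eye level

    // The point this character should aim its gaze at.
    public Vector3 GazeTarget()
    {
        Vector3 target = observer.position;
        target.y += trustBias * maxVerticalOffset;
        return target;
    }
}
```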

Other features include changing the appearance and the animation/idle pose of the models.

Demo scene with trustworthy, neutral and untrustworthy behaviour from left to right



April 19, 2016

Realistic Eye Movement with the MCS model

I got my hands on the Realistic Eye Movement asset today and have started trying it out. I've managed to get it working with the MCS Female Lite model and it looks really good.

MCS Female Lite with the REM asset


The REM asset supports eye blinking as well, but for that to work I will need the full version of the MCS model. The asset includes a bunch of different features: you can specify things like nervousness, points of interest, look-at-player ratio and many others. Another great thing about this asset is that it uses human movement data from published academic papers to produce the animations - this is something I have to take a closer look at - but from trying it out, it looks really convincing.
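To get a feel for the look-at-player ratio, here's how I picture it working - my own sketch, not the asset's actual code:

```csharp
using UnityEngine;

// My guess at how a look-at-player ratio works: whenever the gaze
// retargets, look at the player with the given probability, otherwise
// pick a random point of interest. Not the asset's actual code.
public class GazeScheduler : MonoBehaviour
{
    public Transform player;
    public Transform[] pointsOfInterest;
    [Range(0f, 1f)]
    public float lookAtPlayerRatio = 0.7f;

    public Transform NextGazeTarget()
    {
        if (pointsOfInterest.Length == 0 || Random.value < lookAtPlayerRatio)
            return player;
        return pointsOfInterest[Random.Range(0, pointsOfInterest.Length)];
    }
}
```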


April 15, 2016

A Quick Feasibility Study - Autodesk or MCS

After discovering the Autodesk tool for creating characters, I've done a quick feasibility study to investigate which models I should use.

Blendshapes - MCS: several blendshapes, and the full version has more; some look a bit unrealistic. Autodesk: some pretty good blendshapes.
Shaders - MCS: good, with some problems. Autodesk: very shiny and unrealistic.
REM integration - MCS: unsure; the full version should work. Autodesk: working.
Customization - MCS: very low in terms of clothes, hairstyles and appearance, though there are blendshapes manipulating facial and body attributes. Autodesk: great; you can create an endless number of different characters.
Cost - MCS: expensive; the full version of one model costs $50. Autodesk: free.

MCS Male model to the left and a character created in Autodesk Character Creator to the right


Looking at the table, a lot of factors speak against using the MCS models. The free version of MCS lacks the blendshapes required by the Realistic Eye Movement (REM) asset, and several of its blendshapes make the character look a bit silly. The Autodesk characters have more realistic-looking expressions and can easily be used together with the REM asset. The full version of MCS does have a lot more to offer, though. With plenty more blendshapes for expressions and customization, and a high likelihood of working well with the REM asset, it is the way to go. The most important thing in my project is that the characters look good, and the default shaders for the Autodesk characters make them look nowhere near as good as the MCS models. At this stage I'm only planning on using 3 models, so the lack of diversity and customization isn't an issue for me, and since the school and Chris can supply me with the full version of MCS, the cost isn't an issue either.


April 15, 2016

Autodesk Character Creator

With the Autodesk Character Creator you can create a wide variety of virtual characters and easily export them, with working blendshapes for different expressions, directly to Unity. With a student account you are allowed to use all the perks of the application. You can change and interpolate between different featured models, for both facial and body features. You can also change things like clothes, hair and eye color. For my project these models might be even better than the MCS models, and they're free, unlike the full version of the MCS models. They can also easily be configured together with the Realistic Eye Movement asset, which is a big plus, and as far as I've tried, the blendshapes are better than the MCS Lite models' for good-looking expressions.
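Since the blendshapes survive the export, they can also be driven from script. A minimal sketch, where "disgust" is a placeholder for whatever the exported shape is actually called:

```csharp
using UnityEngine;

// Sketch: drive an exported blendshape from script. "disgust" is a
// placeholder name; the real shape name depends on the export.
public class ExpressionDriver : MonoBehaviour
{
    public SkinnedMeshRenderer face;

    // weight is 0..100 in Unity's blendshape convention
    public void SetExpression(string blendShapeName, float weight)
    {
        int index = face.sharedMesh.GetBlendShapeIndex(blendShapeName);
        if (index >= 0)
            face.SetBlendShapeWeight(index, weight);
    }
}
```

Calling SetExpression("disgust", 100f) would then apply the shape fully, like in the screenshot further down.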


You start off by choosing from a set of models


You can then change and manipulate the model extensively


I've also taken a look at the different animations and idle stances that exist in Unity's asset store. Sadly, there aren't that many that are useful for my project, but I've managed to find a few.

Example of a set of blendshapes manipulated to make the character look disgusted together with a crossed-arms stance


I've found a sample pack with some mocap animations which I tested out as well.

Mocap labeled "happy walking away"



April 13, 2016

Feasibility Matrix - Extension

The Semaine Project has built four different virtual agents with different characteristics. Poppy is outgoing (extraverted) and optimistic; Spike is angry and argumentative; Prudence is pragmatic and practical; and Obadiah is gloomy and depressed.

I've taken a closer look at the criteria from which Poppy is built, to see if I can base some of my expressions and behaviours on the Semaine project agents. Here is a list of characteristics of Poppy (source is in pdf) as well as other possible emotions that can be used for the MCS Female Lite model I'm currently investigating. This matrix can be updated with more characteristics of Poppy and the three other agents, as well as for other models like the Male Lite version and the two paid versions of MCS, which have a lot more to offer. Additional emotions and expressions can be added as well, but I've chosen some really basic emotions to start from.

Conversation with "Poppy"


March 25, 2016

Feasibility Matrix

I have created a feasibility matrix of possible variables to investigate in my study, together with how feasible they are to implement and execute. This is the first step in demarcating the project. A second step would be to create a detailed feasibility matrix of the variables that I have chosen. For example, if I decide that "facial expressions" is something I will investigate, it would be good to create a feasibility matrix with specific expressions.

The feasibility matrix can be found here.


March 23, 2016

The Mona Lisa Effect

The eyes of portraits often seem to follow observers as they move; this is called the Mona Lisa effect. I tried it out on the 4k-screen today, and the effect was apparent on the big screen as well.

The eyes of the character seem to follow me when I move


The two characters on the left seem to look at me, but not the one on the right


By utilizing the Mona Lisa effect, I won't need to restrict the user to a chair in the middle of the room (see picture below) when I manipulate the gaze of the characters. I can let the user walk around freely, because the characters will appear to maintain eye contact as long as they are looking straight into the camera.

The setup in the Visualization Studio


It's important to have the character's eyes look into the camera. The head can be turned slightly away, but as long as the eyes are looking into the camera, it looks like the character is looking at you, regardless of where you are in the room.
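A minimal sketch of how the eyes can be kept locked on the camera in Unity, assuming the eye bones' forward axes point out of the pupils:

```csharp
using UnityEngine;

// Sketch: lock the eyes on the camera in LateUpdate, after the Animator
// has run, so the head animation can look away while the eyes keep the
// Mona Lisa effect. Assumes the eye bones' forward axes point out of
// the pupils.
public class EyesOnCamera : MonoBehaviour
{
    public Transform leftEye;  // eye bones of the character rig
    public Transform rightEye;

    void LateUpdate()
    {
        Vector3 cam = Camera.main.transform.position;
        leftEye.LookAt(cam);
        rightEye.LookAt(cam);
    }
}
```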

The character's head is looking away but the Mona Lisa effect is still apparent


I also tried varying the number of characters to see how many would be viable in the multi-character scenario. I came to the conclusion that the best number would be three, and at most four; with any more it becomes time-consuming and hard to distinguish the different characters' facial expressions and gaze.


March 18, 2016

A multi-character game scenario

Today I've spent most of my time going through the asset store, looking at possible models and characters to use for my scenario. I had a meeting with Chris yesterday, where we discussed the next steps of my thesis. Instead of a game scenario where you interact with one character, it's leaning towards several characters with different appearances and behaviours. You will not dive directly into the trust game, but first choose which of the characters you want to cooperate and play the trust game with.

Example scene with different characters


When choosing a partner from a crowd (a set of people with different behaviours and appearances), it's important that you can, as in real life, see the facial expressions and gaze of several characters at the same time. This is a perfect opportunity to utilize the big 4k-screen. As I tested two days ago, there's no problem distinguishing a character's expression on the 4k-screen from a full-body view. Because of this, I can study body behaviour together with facial expressions or gaze in several characters at once, which could yield some really interesting results.


March 3, 2016

Trust Game and trying the scenarios on the 4k-screen

I have now created a test scene where you can manipulate gaze and head movement, field of view, camera angles and a couple of blendshapes in-game. This is to be used as a testbed for how to set up the game scenario.

The Test Scene


I have also created an instance of the Trust Game scenario. I don't know if this is the game I will be using for the final user testing, but it's a good start for trying things out. The game is playable, and you play against three different "behaviours" in the character. The first is neutral with eye-contact, the second is neutral without eye-contact and the third is happy with eye-contact (see screenshots). The money you give the character is tripled (as per the game instructions), and you get back half of it from the first behaviour (neutral + eye-contact), nothing from the second (neutral + no eye-contact) and two thirds from the third (happy + eye-contact). There are some problems with a scenario like this, though. One is how I should decide how much to give back, and how to do this so it doesn't affect subsequent behaviours. Say, for example, that characters maintaining eye-contact give back a lot of money and the game consists of 10 instances of different behaviours. The player could then learn before the end that eye-contact equals more money back, and this might skew my test results. It could be a good idea to hide how much you get back until the end.
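The payout rule itself is simple; here's a sketch of the arithmetic with the three return fractions from the current demo:

```csharp
// Sketch of the payout arithmetic: the bet is tripled and each
// behaviour returns a fixed fraction of the tripled amount.
public static class TrustGamePayout
{
    // Fractions used in the current demo:
    // neutral + eye-contact: 0.5, neutral + no eye-contact: 0,
    // happy + eye-contact: 2/3.
    public static float Payout(float bet, float returnFraction)
    {
        return bet * 3f * returnFraction;
    }
}

// Example: TrustGamePayout.Payout(10f, 2f / 3f) == 20 against the
// happy character.
```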

The Trust Game - Choosing how much to bet


The Trust Game - The character's response to the bet


The Trust Game - The second character's response


The Trust Game - The End

I also visited the Visualization Studio to test my Unity scenarios on the big screen. It was hard to set everything up to make it as realistic as possible. The angles, field of view and camera placement are some of the factors I need to adjust, and in the test scene I tried out a bunch of different settings.

The test scenario up on the 4k-screen


The test scenario up on the 4k-screen with other settings


When testing it out on the 4k-screen, a lot of questions popped up in my mind. Something about the poker table in-game troubled me; I think it was because it felt misplaced. I tried putting a physical table in front of the screen and removing the one in-game, since that might give the scenario a bit more depth and maybe make it feel more immersive.

The character looking at an object on a physical table


There's no problem seeing facial behaviour in the full-body character on the 4k-screen


During my stay, Eric and Björn helped me set everything up on the 4k-screen, and Björn suggested that it could be smart to have the player sit down in front of the screen in the user study. That way, I know where the player is at all times and can more easily control things like eye-contact. It was great to have someone else look at the scenarios. It's easy to get stuck in your own point of view, so getting feedback on what you're working on is never a bad thing!


March 1, 2016

The first game scenario - Update

Prior to this project I had little experience with animation in Unity, so I've spent the last few days looking into the topic. I have managed to set up a character with an idling animation handled by a state machine. From the state machine I can swap animations if I want to, but for now I'm satisfied with the idling animation. I have also created two simple scripts that complement the idling animation with either eye movement or head movement (two key ingredients of gaze).
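The head-movement script is essentially a smoothed look-at applied after the animation. A simplified version of the idea:

```csharp
using UnityEngine;

// Simplified head-movement idea: smoothly rotate the head bone towards
// a target in LateUpdate, so it layers on top of the idle animation.
public class HeadLook : MonoBehaviour
{
    public Transform headBone;   // head bone of the rig
    public Transform target;     // what the character should look at
    public float turnSpeed = 3f; // higher = snappier turns

    void LateUpdate()
    {
        Quaternion wanted = Quaternion.LookRotation(
            target.position - headBone.position);
        headBone.rotation = Quaternion.Slerp(
            headBone.rotation, wanted, turnSpeed * Time.deltaTime);
    }
}
```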

I'm satisfied with using the MCS Male Lite asset from the Unity asset store since it looks great and can easily be manipulated and animated using Unity's Mecanim animation system. The model has plenty of blendshapes to change the appearance and expressions which could be useful later on.

The model looking right


The model looking down-left with his eyes


Using the blendshape "snarl" on the model

My goal right now is to set up a testing scenario that I can take a closer look at on the 4k-screen later this week. I want to know how well you can perceive the character on the big screen and how visible the eye movement will be without any close-ups on the face.

In terms of game scenario, I'm leaning towards the classic Trust Game, where you get an initial amount of money and choose how much to keep and how much to give the virtual character. The money is tripled, and the virtual character then gives back some amount of the money received. This will be a good scenario for testing how much you think the virtual character can be trusted to cooperate. The interaction in the trust game scenario is somewhat limited, but it could be a good starting point. I would like to implement a more interactive game after this, possibly with several virtual characters.


February 23, 2016

Animation Tools

It's been a while since my last post so here goes.

For the last couple of weeks I've been looking into relevant research for my thesis, tools to use and how to set up the project. The tools concern the animation of my virtual character in Unity, and the ones I've looked closer at are the Virtual Human Toolkit, which utilizes Smartbody, Facegen, and assets from Unity's asset store. After some testing, I've decided to most likely use the MCS Male Lite asset, at least as a starting point. I have little previous experience when it comes to animation, so I think it will be a fun experience messing around with it from scratch. The Virtual Human Toolkit is overkill for this type of project since it utilizes a lot of different third-party programs that I won't be using, and Facegen is only for faces while I'll be working with a full-body virtual character.

A test scene I did in Unity with the MCS Male Lite

I met with Chris yesterday as well. He was the one who suggested the MCS Male asset, and after playing around with it today, I think it will be a good starting point for the Unity scenario. Over the next couple of days I'm going to set up some sort of basic scenario in Unity with a virtual character and some kind of trust game. Next week I'll be testing it on the 4k-screen in the Visualization Studio, since I will mainly be using the 4k-screen for my user study later on.

The advantage of using the 4k-screen is that it's big, so no close-up on the face should be needed for users to perceive facial animations and gaze while playing - or at least I hope no close-up will be needed. With such a big screen, the scenario will feel more immersive and will hopefully work as a window into the virtual world. There are some downsides to using it as well. One is that not many people have access to such a big screen, so the results from my user study will apply to a big screen and won't necessarily carry over to regular computer screens. It could be interesting to compare my results with similar studies to see if there's a difference in perception depending on screen size, or to test that myself, but that's something to consider later on.

I will also need to figure out some sort of game scenario to be used in the first iteration. Many articles I've read on similar studies have used some sort of trust game, similar to this. The problem with this is that the interactive experience will be quite poor, in my opinion. Also, if I am to explore how gaze affects perception, it would be a good idea to have things the virtual character can gaze at during the game. After discussing this with Chris, the direction is leaning towards some sort of card game where the character sits at a table in front of you and there's something going on on the table between you. The game will need to emphasize trust in some way, in order to have an objective way of testing it other than asking users about perceived trust after the experience. I will be looking closer at this in the next couple of days.

After I've set up a basic game scenario on the 4k-screen, I will meet Chris again and we will discuss the next steps of my thesis. Around then it'll also be time for me to create the first iteration of the detailed specification and a time plan for my thesis.


February 18, 2016

User Study - Perception of Virtual Faces

Today I attended a user study that had to do with our perception of virtual faces. Chris, my supervisor, got me to sign up for the study since it's similar to what I will be doing, and it was conducted by one of his master students.

The user study consisted of looking at different male faces on a computer screen for 500 milliseconds each and then rating my perception of them on a few categories like trustworthiness, dominance and education level.

The faces were very similar and there were about 70 of them, so it took a while!

Judging from my own responses, I marked the wider faces as less trustworthy and associated the faces with gray beards with higher education. Last week I read some studies that Chris sent me about how men with a greater facial width are more likely to exploit the trust of others and are also perceived as less trustworthy (one of the papers), so it was interesting to see that I made the same connection.

Something I found hard about the test was that the faces looked almost exactly the same, except for beard color/length and face dimensions. There might have been more manipulated factors, but none that I noticed clearly. I didn't mark anyone as "Very trustworthy", and since the faces looked fairly similar, they all got very similar scores.

It was interesting to see how the test was set up and to chat a little with Evmorfia, who ran the user study. I asked if it was okay to get in contact with her if I needed help or wanted to discuss things, and she said that I definitely could!


January 26, 2016

My first post

The blog is up and running!

It's gonna need more work, but I'll keep on updating it on the fly. I've just set up a simple outline so I can put up my first post.

Anyway, this is where I'll put up any progress on my master thesis. I have just sent in my proposal to Bilda and hopefully it'll be accepted so I can start working on it.

My thesis will be about investigating how different factors of appearance and behaviour impact our perception of a virtual agent when it comes to trustworthiness and cooperativeness in a game scenario. I haven't worked out all the details just yet, but that's what I'll be working on next, together with a time plan. I'm gonna need to figure out what my game scenario will be like, how the game will play out and which factors of the virtual agent I will manipulate. I will also need to figure out whether I'm gonna use the 4k-screen in the Visualization Studio at KTH for interaction with the virtual agent or whether I'm going to use Virtual Reality (VR) or Augmented Reality (AR). There's more to figure out, but those are probably the most important things at the moment. The deadline for the detailed specification is in two weeks I think, but I'll receive more information on that early next week.

Anyway, I'm pumped to start working with my thesis. I feel like this project will fit me perfectly.

You can find my master thesis proposal here.


January 21, 2016