Here you can find all the assignment texts used in the course. Please note that these are provided to give prospective students an idea of the work in the course, and as inspiration for other teachers of VR. As such, they might not be up to date; if you are taking the course, you should refer to the assignment texts in Absalon.
The assignments are designed primarily by Thomas van Gemert: please credit him if you use any of their content.
Students are asked to complete the Environment Setup before the first day of the course. Assignments 1-3 are due every two weeks, and the project is due at the end of the course (end of week 8). Below, you can find the following assignment texts:
You can do the tasks for Unity and the Headset in parallel, but you need to have Developer Mode enabled before you can build and run your Unity applications on your headset.
As a general tip: a laptop/computer with a fast SSD and CPU is highly recommended, as Unity can be incredibly slow when dealing with packages or updating rendering settings. Furthermore, a long USB cable (USB3 is best) is very handy for using Oculus Link to develop and test the application from within the Unity Editor (this uses the Desktop VR setup; we describe the Android setup here).
Install the Unity Hub (you can download it from https://store.unity.com/download?ref=personal). The Hub allows you to manage and download Unity versions, although for a specific version you'll have to visit https://unity3d.com/get-unity/download/archive to find it and download it using the installer or the Unity Hub (recommended).
Install Unity version 2020.3 LTS via the Unity Hub. Default settings should be OK, unless you want to use your own Script editor/IDE. Also install Android build support for this version.
Create an empty Unity 3D project in this version through New Project > All templates > 3D core.
Change your project’s build platform to Android in the build settings in Unity, and change the Texture compression to ASTC.
Configure Unity’s project settings (https://developer.oculus.com/documentation/unity/unity-conf-settings/).
Note that the Project Settings->Player page has two tabs: one for Desktop development and one for Android development (see the icons and hover the cursor for a description). You may ignore all Desktop settings for now, but make sure you switch to the Android tab and adjust the settings there!
Read and follow the instructions on the page, but the main ones to get a successful build are as follows:
Remove the Vulkan API from the Graphics API settings ("Other Settings")
Set minimum API level to 23.
Enable VR Support by using the XR Plugin Management system in "Project Settings", and install the Oculus Plugin.
Install and Import the Oculus Integration asset from the Unity Asset Store to your project.
https://assetstore.unity.com/packages/tools/integration/oculus-integration-82022
Upgrade the Oculus XR plugins when asked, and for now you can select the OpenXR back-end. Accept all default choices when possible.
For the duration of this course, you may want to check out the Oculus documentation for Unity: https://developer.oculus.com/documentation/unity/unity-overview/ (navigate topics using the left menu).
Developing for this headset requires an Oculus account (https://developer.oculus.com/manage/) and one smartphone with that account logged in. This smartphone will need the Oculus app to pair the headset to the account. After this, you can share and use the headset freely between different members and computers. Please note that we advise against making any purchases on the headset or the account, as the headset will be reset after the course (although you will retain access to your own Oculus account and purchases). Each group is responsible for making sure that one of the group members has such an account and a smartphone.
In order to build and run your own applications on the Oculus Quest 2 you need to enable developer mode. You can do this in the Oculus app on your mobile device after you have paired the device.
Pair the Oculus Quest 2 to a mobile device and go through the whole setup process. Follow the instructions on screen.
Enable developer mode in the app, see: https://developer.oculus.com/documentation/native/android/mobile-device-setup/
You need to join an Organization first to enable developer mode. We have created an Organization for this course: "XXXXX" that you can join. Give your Meta username (NOTE it's case-sensitive) to the TAs in the first class to be added (or DM on Absalon).
When you connect your Oculus Quest 2 to your computer with Developer Mode enabled, you will see a pop-up in the Quest 2 asking for permission to run USB debugging from your computer. Select "Allow" or "Always allow from this computer". This pop-up may only appear when you are actually trying to build an app in Unity (see below).
Now that you have the Oculus Integration package added, you can look at [Assignment 1: First Steps - Task 1] and see if you can build the Locomotion example scene. In short:
In Unity, go to the Assets folder, Oculus, SampleFramework, Usage and open the "Locomotion" scene.
Go to the menu File->Build Settings and add the Locomotion scene.
Press "Build & Run".
Pro Tip: Any time you change the scene or its name, you have to add the scene again under "Build Settings", or the current scene will not be built.
This assignment consists of 4 tasks that will guide you through a first setup of a Virtual Environment in Unity and a final task (5) requiring you to document your work and design choices in a report.
Introduction
Travel techniques are a key component of any Virtual Reality application. While some applications require little travel (e.g. Superhot VR) and others have you travel large distances (e.g. Half-Life: Alyx), they all share the need for a method of translating your physical pose and control input to a virtual position and orientation. Perhaps the most straightforward one is self-propelled motion (i.e. “walking”), in which your physical pose corresponds directly to your virtual pose through the transfer function of the tracking system. While safe and natural, this method of locomotion is limited by the physical space available, and thus virtual travel becomes limited or even impossible. Hence, there is a need for travel techniques that allow you to travel farther in the virtual world than the physical world directly allows. In the lectures you will have seen several examples of such techniques.
Luckily, some common locomotion techniques are easy to implement using existing packages or online resources. In this assignment you will create your own virtual environment (VE) in Unity to get comfortable with the Unity game engine, and implement and test two locomotion techniques from the Oculus SDK. Finally, you will add one or more targets to your VE that change their properties when you arrive at the target, using the Unity C# scripting engine.
Report
You must document your work in a report for this assignment. For all assignments and the final project report you will use the IEEE VGTC LaTeX template that is used for the IEEE Virtual Reality conference and journal. You can find the template (Open Access) here: http://junctionpublishing.org/vgtc/Tasks/camera.html
The template itself serves as a guide for its use and for the layout of your final project report. For the assignments you may remove the sample content and use your own headings. It might be wise to save the example files for reference when you're writing the other assignments and the project report later.
The structure for this assignment report is as follows: a short introduction where you state the topic and purpose of this assignment and the organization of your report. You then discuss the required points under “Report” for each task. You may do this in separate headings (e.g., as the tasks 1-4) or in one coherent story but keep readability in mind.
In the [Environment Setup] you will have imported the Oculus Integration Package into a temporary project. You can now open that project again or start a new project and import the Oculus Integration package again from the Asset Store. You will then open the Locomotion sample scene that you can find under Oculus->SampleFramework->Usage in your Assets folder and build the application with this scene. This will allow you to try out some default locomotion techniques provided by the Oculus PlayerController.
Requirements
Read the documentation for the Sample Locomotion scene: https://developer.oculus.com/documentation/unity/unity-sf-locomotion/
Build the application with the “Locomotion” scene.
Try all 4 locomotion modes: Node teleport, Stick-aimed teleport, Strafing with joystick and joystick walking/rotation.
Report
What locomotion technique do you think is best? Why?
Can you think of a scenario where the best technique may not be ideal?
How can different settings for teleportation and walking influence the effectiveness of locomotion and the experience?
Tips
Unity doesn’t always add new scenes to the build automatically: go to File->Build Settings and make sure the scene you want to use is added and selected there.
Read the documentation: it explains both the controls of the sample scene and how you can adapt the PlayerController to your own environment.
Now that you have tried some default locomotion techniques it is time to create your own environment that can serve as a basis for future assignments and even the project. Create a new scene and create a basic environment that is larger than the physical space you have available (so that there is a need for a locomotion technique) and features some corners and elevation. The design of the environment is up to you and may already be a very simple version of what you want to use in your project at this course. You can for example create a house, a garden, a forest, a hangar, etc. The scene does not have to look good yet, although you may want to start playing around with lighting and materials already. Note that this is the simplest task in terms of requirements and report but might be the most work depending on your experience level with Unity.
Requirements
The environment should feature some sort of elevation (e.g. a floor in a house, or hills in a forest).
The environment should be easily recognizable as a realistic environment (i.e., 3 grey walls do not count).
Report
A short description of your environment and its purpose. Please include a figure of your environment in the report and refer to it when describing the environment.
What do you think the best locomotion technique is for your environment? Why?
Is your environment an open space or a combination of smaller spaces (i.e. rooms)? How do you think that will affect navigation with the default locomotion techniques?
Tips
If you haven’t already, now is a good time to look up some Unity tutorials.
The Unity asset store has many free assets to help get you started, and there are many example scenes out there (especially in tutorials) that can give you inspiration and help you get started by borrowing some of their assets.
This virtual environment has to be your own work, created from scratch. You thus cannot simply copy an existing scene from anywhere.
Now that you have your virtual environment, it is time to add some locomotion so you can explore your world in VR! For this we will be using the Oculus LocomotionController. Your job in this task is to add a PlayerController to your scene that represents your avatar and add the necessary Oculus SDK components so that you can walk or teleport around in your own scene. See the example scene from Task 1 as a reference and check the documentation linked there as well. In this task you will add both joystick-based movement and teleportation. Since the Oculus example code is a bit of a mess, it can be tricky to get working even though the basic setup is quite simple. You should generally follow these steps. Note that you may *not* copy/use the included PlayerController prefab: your PlayerController should be built from scratch. However, you may use the included prefab as a reference, and you may use the prefabs included in the sample scene.
Example Steps
Add the OVRCameraRig PreFab to your scene (Assets/Oculus/VR/Prefabs/OVRCameraRig.prefab).
This replaces the Main Camera, which you can delete.
Place the OVRCameraRig GameObject at or just above your ground surface.
You should now be able to look around when you build and run the app.
Add simple joystick-based locomotion first. Add the following components to OVRCameraRig:
Add the Simple Capsule With Stick Movement script. After that, drag the OVRCameraRig game object from the Hierarchy to the CameraRig field of the Simple Capsule With Stick Movement script. Play around with the Rotation Angle and Speed values.
Add a Rigidbody.
Add a CapsuleCollider.
Add hands to your scene: check out how this is done in the PlayerController prefab under the Tracking Space child game object.
Add teleportation. Read the documentation linked in Task 1 and add a LocomotionController object with a Locomotion Controller and Locomotion Teleport component. Note that for teleportation to work you can choose which components you want to combine from the required 5 levels: Input, Aim, Target, Orientation and Transition.
Disable the Stick Movement component you added in step 1. Disabling a component is often preferred to removing it completely. In this case you need the functionality in the Teleport Controller, but you likely do not want to have both locomotion types naively combined.
Wherever the LocomotionController object checks for collisions you may want to set the layer mask to "Everything" right now for testing. Be sure to learn about collision tests and layers though at a later time, since these are very powerful tools for Unity development.
Use the TeleportDestination prefab included in the example, it'll do nicely for now and you can customize its look to your liking.
Play around with the different options provided by the Teleport components: Can you find a different or better way to activate teleportation or aim? Can you add a curved laser instead of a straight one?
If you want to get the Blink transition to work you'll need to edit/replace the provided Blink transition script so that it matches this one: https://pastebin.com/jhmtvwww
Then, you'll need to add an OVRScreenFade component to the MainCamera and configure it to your liking. (Update 29/11: the API for OVRScreenFade has been updated and may not work anymore. If you wish to implement Blink, the TAs can help you in the right direction.)
Requirements
You can walk around (physically) and look around (head rotation).
You can move around and make "snap turns" using the joysticks. You have configured the speed and rotation amount to a comfortable level and justify this in the report.
You can teleport around your scene and change the destination orientation (e.g. facing in the direction of the aiming arrow upon arrival).
You have basic collision detection so that you cannot simply teleport through walls, etc. (unless this is intended in the design, in which case please elaborate in the report).
Explore your environment and make sure everything looks the way you want it to, make adjustments if necessary.
You will need to show a working version of both the joystick-based walking and the teleportation.
Report
What locomotion technique did you choose to use for your scene? Why?
Did you tune some parameters of the technique? Which, and why? For example: what speed parameter value did you use for joystick walking and why did you choose these parameters?
How easy/difficult is it to move around in your scene? Why?
Consider a possible project idea: can you think of a better locomotion technique for your project/scene that is not supported by default in the Oculus SDK?
Tips
If you do not see the aiming laser or target as expected, check the collision detection settings and the controls you're using to control the teleportation phase.
You can add the included PlayerController prefab to your scene as a benchmark or sanity check. Note that you do not need most of the components and objects in the PlayerController object.
If the provided LocomotionController looks confusing take note of what components are enabled/disabled. Disabled components will not have an effect on the scene but may be shown in the editor for easy access or switching.
The most basic collision detection is already provided in the example components as well and simply checks whether the target location has enough space for the player object with capsule collider.
Finally, you should be able to have some targets in your scene that tell you when you’ve reached them. This task requires you to write one or more custom scripts that act as the components that place and handle target marker objects.
Requirements
You should have at least 3 GameObjects that act as target locations in your scene. The design and placement of these targets is up to you. You may place these sequentially: i.e., target #2 spawns after you reach target #1.
When your player avatar gets to a target location, the target should change color/disappear/explode or something similar to indicate that you’ve reached the target (a minimal sketch of such a target is given after these requirements).
The targets should be placed at a reasonable distance from the player and each other, so that you need to use a locomotion technique to reach them.
By pressing a button on the controller, you reset the scene so that the targets are visible again in either their original or new positions.
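As a starting point, a minimal sketch of such a target component is shown below. It assumes the target has a trigger collider, your player rig has a collider tagged "Player", and the A button (via OVRInput from the Oculus Integration) triggers the reset; the class name, tag and button choice are illustrative only and should be adapted to your own setup, and you will need to extend the reset to reposition or respawn targets as required.

using UnityEngine;

// Illustrative sketch only: a target that changes color when the player reaches it
// and resets when the A button is pressed. Tag name and button choice are assumptions.
public class TargetMarker : MonoBehaviour
{
    public Color reachedColor = Color.green;

    private Renderer rend;
    private Color originalColor;

    void Start()
    {
        rend = GetComponent<Renderer>();
        originalColor = rend.material.color;
    }

    // Requires a trigger collider on this target and a collider on the player rig tagged "Player".
    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            rend.material.color = reachedColor;
    }

    void Update()
    {
        // Reset this target when the A button is pressed (extend this to also reposition targets).
        if (OVRInput.GetDown(OVRInput.Button.One))
            rend.material.color = originalColor;
    }
}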
Report
A short description of the targets in the scene and their behavior.
Tips
Task 2, 3 and 4 can be done in parallel to some degree. For example, you could create and test the target system in a simple scene and copy the result to your final scene. Similarly, you could add locomotion to a simple test scene and then easily implement the working version into your final scene.
Finally, you will write a short evaluation of this assignment and how you think you did:
What was the most challenging part of this assignment?
What did you learn?
Who worked on which parts of the submission? Write a short statement of contribution for each of you. This can include who did the programming, design, techniques, UI and other factors, reading and reflecting on the related literature, thinking and writing parts of the report, etc.
Submission
Your final submission for this assignment will consist of three parts:
The final report documenting your work, choices and evaluation as specified in the tasks above. The report should be a PDF file with your names and group number in the document and a filename comprising your last names, group number and assignment number. For example “Group0_Gemert_Bergstrom_Assignment1.pdf”.
The text in the report should be about 1.5 pages long (2 page hard limit), and on top of that, you can include a figure(s) and a reference list. Submit the report as a PDF according to the submission instructions below and do not forget to add all your names and group number!
A link or other means of identification to the repository in your GitLab group (see "GitLab and Unity" below) as well as a commit hash representing the final submission of your application. Note that this commit will have to be timestamped *before* the deadline. The repository should contain the work of Task 4 (which is an iteration upon Task 2 and 3), you do not have to show work for Task 1.
The repository at this commit should contain all source code and assets required to successfully build the application
Furthermore, it should contain a “Build/” folder in which you provide a working executable/package of your application that we can run as-is
The repository should comprise a README.md file with your names, group number and any relevant instructions for using/building your application
A video with audio commentary in which you demo your work. The video should be in MP4 format and max 30 seconds long. For this assignment you should especially demonstrate your work at Task 4.
You will submit these three parts on Absalon where you should be able to upload a video, a PDF, and provide a text entry for the commit.
Assessment
To pass the assignment, the following criteria must be completed:
All Tasks are sufficiently completed. This requires the completion of all points under “Requirements” for each task.
If a requirement can not be completed, the report must contain an explanation including the steps taken to fulfill this requirement and why you think it has not worked. The assessors will decide whether this explanation suffices.
The final report must include an answer for all points under “Report”.
Where applicable, the reporting must reflect an understanding of the syllabus for this module.
The final report must include at least one reference to the syllabus (excluding slides) for this module.
Submission to GitLab before the deadline including the README.md file with instructions.
Submission of the Video.
Your application must be able to build and be deployed on your HMD and ours.
GitLab and Unity
In order to have easy access to your applications and submissions of your assignments we have created a Group on GitLab titled “XXXXX”. Your group will be assigned and given access to a subgroup in which you can create and maintain repositories for your work. All final submissions of assignments and project are to be done by making the full source (minus temp files etc.) of the application as well as a final build package/application available in your Subgroup on GitLab. How you organize the repositories is up to you: you may use a separate repo for each assignment or iteratively work and submit within the same repo. If you haven't yet, write the TAs on Absalon with your group number and GitLab username for access to the GitLab.
The final submission should always be a commit hash in the master branch of the repo you specify in the submission on Absalon. Your submission should also always include a README.md file with the names of all group members and the assignment this submission refers to. If you need to specify any instructions for running, building or using your application you will specify those in the same README.md file. We will not check other files or try to solve build errors related to the undocumented use of exotic packages, missing assets, etc. Lastly, remember that your work should be done in or at least fully compatible with Unity 2020.3 (see Environment Setup) and the Oculus Quest 2.
Object selection and manipulation are among the most common forms of interaction in VR, and some travel techniques also involve object selection and manipulation. In this assignment, you will implement some typical approaches to object selection and manipulation, some more advanced variants of these techniques, as well as some simple games and tasks in which to use your techniques.
When manipulating objects in VR, it is common to want some kind of physics-based interaction. For example, the ability to pick up and throw objects is a staple of modern VR experiences, and we expect thrown objects to fall in accordance with the laws of physics. It is also very common to need to detect when one object touches another, for example when the user's hand touches a specific surface. Thankfully, Unity has a built-in physics engine that makes this easy. In this task, you will familiarize yourself with the parts of the physics engine you need to manipulate objects.
Requirements
Add three cubes to your environment from the first assignment. The cubes should measure 0.1x0.1x0.1, 0.5x0.5x0.5 and 1x1x1 meters, and should be placed hovering above the ground. Add a rigidbody component to them. The rigidbody is what enables physics interaction. Observe what happens when you enter play mode.
Add a rigidbody to your avatar's hands. Make sure that it is kinematic. Also make sure that your hands each have a collider (a sphere collider scaled to the approximate size of the hand works fine). Finally, make the hand colliders triggers.
Add a script to the cubes. In this script, use the OnTriggerEnter(Collider other) callback to define what happens when the cubes collide with something. In this callback, check if the other collider is the hand, and if so, add some force in a random direction to the rigidbody. Add a public float that you multiply this force with; you can then change this float in the Inspector and have a different force multiplier for each cube. You can now punch cubes around your environment! (A minimal sketch of such a script is given after this list.)
Try manipulating different aspects of the rigidbodies when they collide with the hands or other objects. For example:
What happens if you both add force to the rigidbody and set useGravity to false?
How about adding force to a different cube than the one you are punching?
What if you change the scale of a cube when it collides with another cube?
Instead of adding a random force, try adding force based on the angular velocity of the hand's rigidbody so that the added force matches the punch. Can you find some configuration that feels good?
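As a reference for the script described above, a minimal sketch could look like the following. It assumes the hand colliders are tagged "Hand"; the tag, class name and force value are illustrative only.

using UnityEngine;

// Illustrative sketch of the cube script: adds a random impulse when a hand (trigger) touches the cube.
// The "Hand" tag and the forceMultiplier value are assumptions; adapt them to your own setup.
[RequireComponent(typeof(Rigidbody))]
public class PunchableCube : MonoBehaviour
{
    public float forceMultiplier = 5f;   // set a different value per cube in the Inspector

    private Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Hand"))    // assumes your hand colliders are tagged "Hand"
        {
            rb.AddForce(Random.onUnitSphere * forceMultiplier, ForceMode.Impulse);
        }
    }
}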
Report
What was your favorite physics configuration for the cubes? Why did you like it? Was it the same for all the cubes?
What do you think would be required to make punching the cubes feel realistic?
What is an example of a situation where aiming for realism is not a good idea in this kind of interaction?
Tips
In the default Unity scale, one unit equals one meter. For example, the default primitive cube with a scale of 1x1x1 is one meter on all sides.
There are different callbacks for different collider types. When one of the two colliding colliders is a trigger, OnTriggerEnter() will be invoked. If neither is a trigger, OnCollisionEnter() will be invoked. Make a collider a trigger if you want it to be able to clip through other physics objects; for example, this is often used for hands.
Now that you have familiarized yourself with the physics needed to manipulate objects, it is time to put them to use. In this task, you will be implementing the ability to pick up and release objects in a few different ways, and then using them for a small game where you throw balls.
Requirements
Add a table surface to your environment. On this table, add a few small objects. You decide what these objects should be, but they should have different shapes and should be no bigger than your hand. Make sure they have a collider and rigidbody.
Implement functionality so that the user can pick up objects by touching them and pressing the trigger on the controller, and release them by releasing the trigger. You can do this using Oculus’ OVR framework (i.e., the “Grabbable” scripts), or implement your own version.
Implement this same functionality using the integrated hand tracking in the Quest, so that you can pick up objects by closing your hand and release them by opening it. Similarly, you should use the built-in functionality in the OVR framework for this. Set up your project so that you can easily disable and enable integrated hand tracking or controller-based hand tracking.
Now, create a small game where the user must pick up the objects on the table and throw them into some targets. To create a target object, for example, you could create a cylinder, make it thin, give it a transparent material, face it towards the user, make its collider a trigger, and add a script that increments a score when the table objects collide with it (a minimal sketch of such a script is given after these requirements).
Using what you have learned so far, implement a creative way to make the game easier. Then, implement a creative way to make it harder. Implement them in such a way that you can enable and disable them.
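As a reference for the score target mentioned above, a minimal sketch could look like this; the "Throwable" tag, the class name and the static score variable are assumptions for illustration.

using UnityEngine;

// Illustrative sketch of a score target: increments a shared score when a throwable object enters the trigger.
public class ScoreTarget : MonoBehaviour
{
    public static int score = 0;   // simple global score shared by all targets

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Throwable"))   // assumes the table objects are tagged "Throwable"
        {
            score++;
            Debug.Log("Score: " + score);
        }
    }
}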
Report
What is your favorite way of interacting with objects in this task? Why?
How may visual cues in your game influence throwing performance? How can you modify those to aid perception of the target object (such as its distance or alignment)?
Reflect on advantages and disadvantages of using the integrated hand tracking and controller-based hand tracking for object manipulation.
Why did you choose to make the game easier and harder in the ways that you did?
Tips
If you add non-primitive objects to throw around, you can give them a mesh collider which will conform to the mesh’s shape. This is less performant and more prone to clipping than primitive colliders. If you experience issues, try making them convex, or approximating a primitive collider onto the mesh.
There are many good tutorials on YouTube for how to pick up objects using the OVR framework.
Static variables in Unity work much the same as you have learned in object-oriented programming, so they can be a good way to keep track of something like a global score.
You can use the official Oculus documentation guide to set up Hand Tracking in Unity (https://developer.oculus.com/documentation/unity/unity-handtracking/). Keep in mind that you can also set up Hand Tracking in other ways that you will probably find online. That's completely fine as long as you reference the source of the tutorial in your report.
It is now time to put all that you have learned together and implement your own object manipulation technique. In this task, you also have to design some tasks in which your technique can improve or hamper the user's performance.
Requirements
Implement a technique that allows the user to manipulate distant objects (i.e., objects that are out of arm’s reach). Your technique should be a variation on raycasting, but you are encouraged to get creative with how you use raycasting. For example, you could use a raycast to move a hand at a distance, or use multiple rays or a non-linear ray. (A minimal sketch of a basic raycast-based manipulation is given after these requirements.)
Create a task where it is beneficial to use your technique to manipulate objects. The task should require the manipulation of multiple objects.
Create a separate task where it is not beneficial to use your technique to manipulate objects. The task should require the manipulation of multiple objects.
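As a starting point only, the sketch below pulls a hit rigidbody toward the hand while the index trigger is held. The rayOrigin field (e.g., a hand anchor of the OVRCameraRig), the pull behavior and the parameter values are assumptions; your own technique should go beyond this basic variation.

using UnityEngine;

// Illustrative sketch: a basic "pull the hit object toward the hand" raycast manipulation.
// rayOrigin (e.g., the RightHandAnchor of the OVRCameraRig) and pullSpeed are assumptions; adapt to your rig.
public class RaycastPuller : MonoBehaviour
{
    public Transform rayOrigin;      // e.g., the RightHandAnchor of your OVRCameraRig
    public float maxDistance = 20f;
    public float pullSpeed = 2f;

    void Update()
    {
        if (OVRInput.Get(OVRInput.Button.PrimaryIndexTrigger))
        {
            RaycastHit hit;
            if (Physics.Raycast(rayOrigin.position, rayOrigin.forward, out hit, maxDistance))
            {
                Rigidbody rb = hit.rigidbody;
                if (rb != null)
                {
                    // Pull the hit object toward the hand while the trigger is held.
                    Vector3 toHand = (rayOrigin.position - rb.position).normalized;
                    rb.velocity = toHand * pullSpeed;
                }
            }
        }
    }
}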
Report
Reflect on your technique. Why did you design it as you did?
How does the technique influence the user's control over their hand or cursor?
Why did your task performance improve or decline with your technique? How would you refine your technique to better support the user in the task where it performed worse?
How would you combine your technique with other techniques to enable new or improved forms of object manipulation?
Tips
You can use a thin cylinder to visualize your raycast. Unity also has built-in raycasting methods.
Some of the OVR objects have many scripts that affect their transform, which limits your ability to do the same. To help with this, you can parent the object to an empty game object and manipulate that transform instead.
A task is simply a short sequence of actions with a specific goal. For example, it could be to move an object from one place to another, to discriminate between multiple objects that are closely adjacent, or pick up a sequence of objects in the correct order.
An example of a technique you can use for distant object manipulation can be found in the Oculus documentation guide for Distance Grabbing (https://developer.oculus.com/documentation/unity/unity-sf-distancegrab/). This is just an example; of course you can use any other technique you think of or find online. If you use another online tutorial, remember to reference it in your report.
This final task is a report-only task. As the final section in your report, you should pitch an idea for an interaction technique that you want to develop for your project. Your interaction technique needs to address one of the challenges of interacting with VR, which we discuss during the course. It can be, for instance, about helping a VR user to do something, learn something, experience something, and do that more efficiently, effectively, comfortably, and so on. You can get ideas on this both from the readings (think about what has been the purpose of the research when you read a paper or watch a video), but also from your own head (when you use VR, think of questions that emerge, like "Wouldn't it be nice if I could move farther while maintaining presence or visual awareness, or select that object far away with a better speed-accuracy trade-off").
The description of your idea in the report should cover the following points:
What do you want to do?
Why do you think this is an interesting idea? For example, the interest can come from:
how does it differ from the state of the art of VR interactions?
what improvements does it offer?
Finally, you will write a short evaluation of this assignment and how you think you did:
What was the most challenging part of this assignment?
What did you learn?
Who worked on which parts of the submission? Write a short statement of contribution for each of you. This can include who did the programming, design, techniques, UI and other factors, reading and reflecting on the related literature, thinking and writing parts of the report, etc.
Submission
Your final submission for this assignment will consist of three parts:
The final report documenting your work, choices and evaluation as specified in the tasks above. The report should be a PDF file with your names and group number in the document and a filename comprising your last names, group number and assignment number. For example “Group0_Gemert_Bergstrom_Assignment1.pdf”.
The text in the report should be about 1.5 pages long (2 page hard limit), and on top of that, you can include a figure(s) and a reference list. Submit the report as a PDF according to the submission instructions below and do not forget to add all your names and group number!
A link or other means of identification to the repository in your GitLab group (see "GitLab and Unity" below) as well as a commit hash representing the final submission of your application. Note that this commit will have to be timestamped *before* the deadline. The repository should contain the work of Task 4 (which is an iteration upon Task 2 and 3), you do not have to show work for Task 1.
The repository at this commit should contain all source code and assets required to successfully build the application
Furthermore, it should contain a “Build/” folder in which you provide a working executable/package of your application that we can run as-is
The repository should comprise a README.md file with your names, group number and any relevant instructions for using/building your application
A video with audio commentary in which you demo your work. The video should be in MP4 format and max 45 seconds long. For this assignment you should especially demonstrate your work at Task 1-3.
You will submit these three parts on Absalon where you should be able to upload a video, a PDF, and provide a text entry for the commit.
Assessment
To pass the assignment, the following criteria must be completed:
All Tasks are sufficiently completed. This requires the completion of all points under “Requirements” for each task.
If a requirement can not be completed, the report must contain an explanation including the steps taken to fulfill this requirement and why you think it has not worked. The assessors will decide whether this explanation suffices.
The final report must include an answer for all points under “Report”.
Where applicable, the reporting must reflect an understanding of the syllabus for this module.
The final report must include at least one reference to the syllabus (excluding slides) for this module.
Submission to GitLab before the deadline including the README.md file with instructions.
Submission of the Video.
Your application must be able to build and be deployed on your HMD and ours.
To evaluate VR interaction techniques, you often need to log data. A common format for this is Comma-Separated Values (.csv) files. In this assignment, you have to create a general-purpose data logging Unity component that can write observations to a CSV file, and implement a few common logging scenarios using your component. You can modify this component to log the measures for your project.
A CSV file is a plain text file wherein each line is an observation of a number of fields; these fields always occur in the same order. The first line in the file, the schema, defines the order and names of these fields. Each value in the file is separated by a delimiter, typically a comma, a semicolon or a tab character.
A sample structure of a CSV file could look like this:
UserId;TrialId;Timestamp;Event;PosX;PosY;PosZ
0;1;637770796001281552;PlayerPosition;-0.1464635;1;-0.000312534
0;1;637770796001321444;PlayerPosition;-0.1539453;1;-0.0009537541
0;1;637770796001361328;PlayerPosition;-0.1616363;1;-0.001804266
0;1;637770796001391255;PlayerPosition;-0.1695578;1;-0.00288516
0;1;637770796001431147;PlayerPosition;-0.1777081;1;-0.004194984
0;1;637770796001461074;PlayerPosition;-0.1860429;1;-0.005689242
…
This task takes you through the process of writing a data logging script component, to ensure everyone has a mostly identical code-base to begin with. Note that this task has no report component.
Create a component (script) called DataLogger. This component should allow you to log an entry, start the logging, stop the logging, and write the logged values to a file in CSV format.
The DataLogger has as parameters a separator character (e.g., ',', ';', or '-'),
a file name base to identify the log (e.g., "PositionLog_"),
and a way to specify the header of the CSV file (the very first line, see example above).
It also has StartLogging(), StopLogging(), and Log() methods that are accessible by your other components.
The overall logic of the DataLogger is that another component in your scene can call the StartLogging() method on the DataLogger (e.g., upon button press), and then whenever Log(x, y, z) is called the DataLogger converts the x,y,z values to strings and stores them in a single line with each value separated by the separator. For example, a PlayerObject can call Log(x,y,z) every frame to log the change in position. You can then call StopLogging() to stop listening for Log() calls and commit the data to a file.
The DataLogger should have a "header" parameter that specifies the columns that will be written to file. This header should always be the first line in any new log file. For example, you can create an array of strings where you name each column, and then concatenate the items to a string with the separator. You may follow the example above, which is set up to log 3 position values for the playerPosition.
Create the public method "StartLogging". The StartLogging method should change an isLogging flag only when the DataLogger is not currently logging and is not currently writing to file. The StartLogging() method instantiates a new List<string> object where you will store the logged rows for that session.
Create a method "Log" that takes parameters "int userId, int trialId, long timestamp, string eventName, params string[] items". The final array can be used to log any number of data points. Each call to Log() signifies one line in the log. The Log() method converts each data point to a string where needed, and concatenates the items using the separator. The result is a single string (a single line) which is stored in the List<string> object (see the example above for what such a row string looks like).
The StopLogging() method clears the isLogging flag if currently logging, and then writes the data to a file. To write the data to a file, use a StreamWriter object (see Microsoft Docs) and its WriteLine() method for the recorded entries in the List object. As a file path you can use the Application.persistentDataPath that Unity uses and the file name base you specified, for example:
string filepath = Application.persistentDataPath + "/" + filenameBase + System.DateTime.Now.ToString("ddMMyyyy-HHmmss") + ".csv";
Test the component by logging some changing value (e.g., the position of a moving cube) and making sure you can read the correct logged values on your PC.
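A minimal sketch of such a DataLogger, following the steps above, could look like this; checks and error handling are kept to a minimum and should be extended, and the default header and file name are only examples.

using System.Collections.Generic;
using System.IO;
using UnityEngine;

// Minimal sketch of the DataLogger described above; extend the checks and error handling yourself.
public class DataLogger : MonoBehaviour
{
    public string separator = ";";
    public string filenameBase = "PositionLog_";
    public string[] header = { "UserId", "TrialId", "Timestamp", "Event", "PosX", "PosY", "PosZ" };

    private List<string> rows;
    private bool isLogging = false;
    private bool isWriting = false;

    public void StartLogging()
    {
        if (isLogging || isWriting) return;          // do not restart while logging or writing
        rows = new List<string>();
        rows.Add(string.Join(separator, header));    // the header is always the first line
        isLogging = true;
    }

    public void Log(int userId, int trialId, long timestamp, string eventName, params string[] items)
    {
        if (!isLogging) return;
        string row = userId + separator + trialId + separator + timestamp + separator + eventName;
        if (items != null && items.Length > 0)
            row += separator + string.Join(separator, items);
        rows.Add(row);                               // one call to Log() is one line in the file
    }

    public void StopLogging()
    {
        if (!isLogging) return;
        isLogging = false;
        isWriting = true;
        string filepath = Application.persistentDataPath + "/" + filenameBase +
                          System.DateTime.Now.ToString("ddMMyyyy-HHmmss") + ".csv";
        using (StreamWriter writer = new StreamWriter(filepath))
        {
            foreach (string row in rows)
                writer.WriteLine(row);
        }
        isWriting = false;
        Debug.Log("Wrote " + rows.Count + " lines to " + filepath);
    }
}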
Tips
If you want to understand C#'s file writing better, the .NET documentation is useful, for example:
https://docs.microsoft.com/en-us/dotnet/api/system.io.streamwriter
https://docs.microsoft.com/en-us/dotnet/csharp/how-to/concatenate-multiple-strings
https://docs.microsoft.com/en-us/dotnet/standard/base-types/custom-date-and-time-format-strings
https://docs.unity3d.com/ScriptReference/Application-persistentDataPath.html
Make sure you do not accidentally write trailing separators to the end of the line! Remove this before committing the line, or set up your concatenation so that the final item does not end with a separator. Otherwise, your data will not be read properly by most tools.
It's a good idea to implement some checks and errors to prevent corrupted data: make sure the StreamWriter closes correctly, prevent access to the Logger while data is being written, re-initialize the List<> properly when a new log starts, etc. Also, if an entry is empty you may want to skip it or throw an error instead of spamming empty lines.
You can get the current timestamp from the Unix epoch like so: DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
When converting float values to strings, the runtime uses the system's localization settings to determine the decimal separator. For example, in Denmark the decimal separator is a comma, but in American English regions it's a point. This can lead to corrupt data if you record data in one region and try to read it in another. To avoid this, most ToString converters can take a localization parameter where you decide what format to use. To avoid the problem altogether you can pass the CultureInfo.InvariantCulture property to the ToString method.
To enable writing to the SD card you should do the following: go to File->Build Settings->Player Settings->Other Settings->Write Permissions and set it to External (SDCard). Keep in mind that usually the first time you write to the SD card of the Quest (after your application has finished running) you need to disconnect the Quest (remove the USB cable) from your computer and reconnect it for the directory to appear.
It is common when evaluating VR interaction techniques to need to log the movements of the user's body. Typically, this needs to happen a given number of times every second. With the Quest 2, you have three tracking points by default: the head and two hands. Hand tracking gives you access to more tracking points on the hands, but often it is enough to log the position of the hands themselves.
To practice the logging of movements in a fun way, we'll be implementing a simple method to write your name in thin air and plot the result on your PC. In this application you will press a button to start logging, then write your name in the air in VR, and stop recording. You can then use the logged data to see the result, for example by plotting the (x, y) coordinates in Python or Excel.
Create a script component that uses your DataLogger to log the XYZ localPosition of the Anchor object for your writing hand. Your writing hand is whatever hand you prefer to write with; track the associated Anchor object. Set it up so that when StartLogging() is called, the component calls Log() every frame (i.e., using the native refresh rate as the sample rate). StopLogging() disables this logging. Remember to specify the header.
Set up a system to start and stop the logging. Using the buttons on the other hand may be a good idea: assign a button to call StartLogging() and StopLogging(). See https://developer.oculus.com/documentation/unity/unity-ovrinput/
Test the application: in VR, start logging, then write your name in front of you using the controller whose position you are logging. Stop logging, download the data, and plot the position data in a graph to see your writing.
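A minimal sketch of such a component is shown below. It assumes the DataLogger sketched in Task 1 is attached to the same GameObject, that handAnchor is set to the hand anchor of your OVRCameraRig, and that the X/Y buttons of the left Touch controller start/stop logging; these names and button choices are illustrative and should be adapted to your own setup.

using System;
using UnityEngine;

// Sketch only: logs the localPosition of the writing-hand anchor every frame while logging is active.
// Assumes the DataLogger from Task 1 is on the same GameObject; handAnchor and the buttons are assumptions.
public class HandWritingLogger : MonoBehaviour
{
    public Transform handAnchor;     // anchor of your writing hand (e.g., RightHandAnchor)
    public int userId = 0;
    public int trialId = 1;

    private DataLogger logger;
    private bool isLogging = false;

    void Start()
    {
        logger = GetComponent<DataLogger>();
    }

    void Update()
    {
        // X starts logging, Y stops it (buttons on the left Touch controller).
        if (OVRInput.GetDown(OVRInput.Button.Three)) { logger.StartLogging(); isLogging = true; }
        if (OVRInput.GetDown(OVRInput.Button.Four))  { logger.StopLogging();  isLogging = false; }

        if (isLogging)
        {
            Vector3 p = handAnchor.localPosition;
            long timestamp = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
            logger.Log(userId, trialId, timestamp, "PlayerPosition",
                       p.x.ToString(System.Globalization.CultureInfo.InvariantCulture),
                       p.y.ToString(System.Globalization.CultureInfo.InvariantCulture),
                       p.z.ToString(System.Globalization.CultureInfo.InvariantCulture));
        }
    }
}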
Optional
In the current implementation you write each sample directly to file. When you start recording more variables, the ToString conversion and writing quickly become too slow to keep up with the frame rate, thus slowing down your application. It may be better to append the strings to a list and only write them to a file asynchronously when StopLogging() is called. Try to implement this. You can go one step further: append each variable to a data structure and also postpone the ToString conversion until the end.
Implement logic to control the sample rate: instead of calling Log() every Update(), write a control mechanism that instead calls Log() every N seconds, where N is a public variable.
Enable hand tracking, and instead track the fingertip.
Report
The graphs with the names you wrote in VR, of all your group members. You may use whatever plotting tool works for you, such as Pyplot, Notebooks, R, Excel, etc. Note that this graph must be created from the logged data, so a screenshot of in-VR drawing does not suffice.
How do you record the position of the writing hand anchor? How do you control start/stop logging?
Did you experience any issues with the tracking, and if so, can you reason why?
Can you draw other things, using your head or other hand? If so, show these graphs as well and comment.
What is the difference between tracking the localPosition and the position properties of the Transform? When would you use which?
Tips
A common way to do something N times a second in Unity is to use the update loop with a simple timer. Unity has a value, Time.deltaTime, which corresponds to the time in seconds since the previous frame. If you have a timer float in the component, you can increment it by Time.deltaTime every frame, and then when the timer equals or exceeds (1 / sampleRate), invoke your log method and reset the timer to 0. For more information on Time.deltaTime, see the Unity documentation: https://docs.unity3d.com/ScriptReference/Time-deltaTime.html
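A minimal sketch of this timer pattern follows; the class name, sampleRate value and the Debug.Log placeholder (standing in for your own Log() call) are illustrative only.

using UnityEngine;

// Sketch of the timer pattern from the tip above; replace the Debug.Log with your own Log() call.
public class SampledLogger : MonoBehaviour
{
    public float sampleRate = 30f;   // desired samples per second
    private float timer = 0f;

    void Update()
    {
        timer += Time.deltaTime;     // Time.deltaTime is in seconds
        if (timer >= 1f / sampleRate)
        {
            Debug.Log("Log a sample here");
            timer = 0f;
        }
    }
}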
Another way to do something N times in a second in Unity is to use a coroutine. The syntax is a bit more advanced, but coroutines have highly useful general-purpose yield instructions that will come in handy if you plan to keep working with Unity. For example, the WaitForSeconds() yield instruction would be appropriate for this task. For more information on coroutines, see the Unity documentation:
https://docs.unity3d.com/Manual/Coroutines.html
Think about the refresh rate of your HMD when determining how many times a second you should log something. As a rule of thumb, it is good to match your sample rate to your refresh rate for detailed logging. However, you may not need to log that many observations a second, depending on what you are trying to measure.
This is a report-only task. The purpose of this task is to start putting into concise and clear writing how you intend to evaluate the technique you are developing for your project. In order to evaluate something, you need to measure the performance of a variable of your choosing. For example, this could be pointing accuracy, movement speed, VR sickness, etc. Touch on the following points:
Start by summarizing your project and relevant measures in 4-5 sentences.
What will you consider as your primary measure, why?
How do you intend to implement this measure?
What other measures could be relevant, and why did you choose not to implement them?
If you had access to an advanced HCI lab with equipment like we have in Sigurdsgade (industrial full-body motion tracking, biometrics, etc.), and could buy any kind of tracking equipment you could want, how would this affect your choice of measures?
Finally, you will write a short evaluation of this assignment and how you think you did:
What was the most challenging part of this assignment?
What did you learn?
Who worked on which parts of the submission? Write a short statement of contribution for each of you. This can include who did the programming, design, techniques, UI and other factors, reading and reflecting on the related literature, thinking and writing parts of the report, etc.
Submission
Your final submission for this assignment will consist of three parts:
The final report documenting your work, choices and evaluation as specified in the tasks above. The report should be a PDF file with your names and group number in the document and a filename comprising your last names, group number and assignment number. For example “Group0_Gemert__Bergstrom_Assignment3.pdf”.
The text in the report should be about 1.5 pages long (2 page hard limit), and on top of that, you can include a figure(s) and a reference list. Submit the report as a PDF according to the submission instructions below and do not forget to add all your names and group number!
A link or other means of identification to the repository in your GitLab group (see "GitLab and Unity" below) as well as a commit hash representing the final submission of your application. Note that this commit will have to be timestamped *before* the deadline. The repository should contain the work of Task 4 (which is an iteration upon Task 2 and 3), you do not have to show work for Task 1.
The repository at this commit should contain all source code and assets required to successfully build the application
Furthermore, it should contain a “Build/” folder in which you provide a working executable/package of your application that we can run as-is
The repository should comprise a README.md file with your names, group number and any relevant instructions for using/building your application
A video with audio commentary in which you demo your work. The video should be in MP4 format and max 30 seconds long. For this assignment you should especially demonstrate your work on Task 2; consider how to illustrate and edit the video for this, for instance by filming the VR user performing the task in the real world, showing the in-VR view while drawing, and integrating a picture of the drawing result into the video.
You will submit these three parts on Absalon where you should be able to upload a video, a PDF, and provide a text entry for the commit.
Assessment
To pass the assignment, the following criteria must be completed:
All Tasks are sufficiently completed. This requires the completion of all points under “Requirements” for each task.
If a requirement can not be completed, the report must contain an explanation including the steps taken to fulfill this requirement and why you think it has not worked. The assessors will decide whether this explanation suffices.
The final report must include an answer for all points under “Report”.
Where applicable, the reporting must reflect an understanding of the syllabus for this module.
The final report must include at least one reference to the syllabus (excluding slides) for this module.
Submission to GitLab before the deadline including the README.md file with instructions.
Submission of the Video.
Your application must be able to build and be deployed on your HMD and ours.
In the project, you will design, develop, and evaluate an interaction technique for VR. The type of the technique can be chosen freely but we recommend checking the idea's feasibility with the teaching team based on the initial idea (Project Day 1) and based on the project plan (presented at the Project Day 2).
The project runs throughout the course and is done in the same study groups as the assignments. The purpose of the assignments is to support the project work. The assignments are designed in a way that they represent the minimum requirements for learning to develop interaction techniques for VR (the main purpose of this course). However, the assignments also leave much open for free choice: you can use that free choice to adapt your assignments into nearly ready building blocks for your project. Even if this may require a little more thinking and implementation work during the assignments, you will be rewarded by not needing to re-invent and re-implement everything from scratch at the end of the course.
The project work consists of three main components:
Designing an interaction technique. Your interaction technique needs to address one of the challenges of interacting with VR and a related quality of interaction, which we discuss during the course. It can be, for instance, about helping a VR user to do something, learn something, experience something, and do that more accurately or faster, with more presence or body ownership, and so on. You can get ideas on this both from the readings (think about what has been the purpose of the research when you read a paper), but also from your own head (when you use VR, think of questions that emerge, like "Wouldn't it be nice if I could..."). We will also help with initial ideas at the 1st project day.
Implementation. You need to develop the technique in a way that addresses the goal you designed it for. This is an iterative process, based on the quality and measure you learn, and continuously re-iterating your design and "piloting" (i.e., testing it out) yourself to find what just doesn't work for people, and what could. You will present the technique and get feedback on fine-tuning it at the 2nd project day.
Evaluation. You need to evaluate your interaction technique in a user experiment so that you are able to teach the world something about how to better design this class of techniques. The evaluation can be based on, for instance, implementing some variations of the technique (such that you can teach future designers and developers which things work better than others with this type of a technique), or about implementing different techniques (such that you can teach others about when your technique is e.g., better than some commonly used one, and when not). As participants in your evaluation, you need to have at least 6 people (e.g., your friends and roommates or other students from the course). We will help with designing the evaluation on the 3rd project day.
The report should follow a structure similar to that of the research papers. Instead of dividing it into tasks as you do during the assignments, the project report should present a coherent whole of the entire process of designing, developing, and evaluating an interaction technique for VR. In your report, you should also motivate your project, and discuss what it can teach future work on the topic. You should do this based both on the research papers and on real-world applications and problems. For the latter, you need to discuss your project in relation to the company cases presented in the classes.
The structure often includes the sections: Abstract, Introduction, Background, Technique, Experiment, Discussion, and Conclusion. The names of these may vary, and you are free to adjust those and the structure. We recommend using the research papers we read throughout the course as guides: they are all peer reviewed prior to publication by the international research community, so a similar style (e.g., including related work, etc.) will help to clearly deliver what you thought, did, and learnt during the project.
In addition, the report should contain a contribution statement. Write a short statement of contribution for each of the members of the group. This can include parts on the programming, the design of techniques, reporting and related work, the evaluation, and other factors, that you think contributed to the project as a whole.
Your final submission for the project will be a similar package as in the assignments, consisting of three parts:
The final report documenting your project work, choices and evaluation as described above. For the report you will use the same template as in the assignments. The report can be max. 8 pages long, excluding references. The report should be a PDF file with your names and group number in the document, and a filename comprising your last names, group number and project. For example: “Group0_Gemert_Bergstrom_Project.pdf”
A link or other means of identification to the repository in your GitLab group as well as a commit hash representing the final submission of your application. Note that this commit will have to be timestamped *before* the deadline. The repository should contain the work of the final implementation.
The repository at this commit should contain all source code and assets required to successfully build the application.
Furthermore, it should contain a “Build/” folder in which you provide a working executable/package of your application that we can run as-is.
The repository should comprise a README.md file with your names, group number and any relevant instructions for using/building your application.
The README file should furthermore contain instructions regarding the control, the goal, and task in the application, so that the assessors can try it.
A video (max 2.5 minutes) with audio commentary in which you demo your work. For this, you should also demonstrate your study with the application in addition to the actual technique (and its variations).
You will submit these three parts on Absalon where you should be able to upload a video, a PDF, and provide a text entry for the commit.