Tutorial written by Amanda End and Circuit Stream.

Welcome to Section Four of the HTC Vive Virtual Reality Programming Tutorial. In this section, we’re going to go over some basic examples of things you might want to implement in your VR Game, using the interactable framework that we created in previous sections. You can use these examples for your own virtual reality design. There’s a lot more you can do with an object besides picking it up and throwing it!
We’re going to cover the following examples:

- A simple button
- A slide lever with snap points
- A pull lever (a physics-based method)
- A projectile “Pew Pew” device, with haptic and audio feedback
Those are some pretty thrilling objects, if I do say so myself. Are you excited? You should be!

Simple Button Script

Let’s start with a simple interactable script for a button. We just want a simple button that clicks down when we pull the trigger with our controller, and when we release the trigger, we want something to happen.

The Model

The model I’m using is just two square primitives, with the “button” bit a child of the base. The script we’re building goes on the base. I put a collider on the “button” bit, and a rigid body on the button base. Since we don’t need the rigid body for physics (it’s just a way for the Input script to know where to find the interactable script), set the rigid body to kinematic and uncheck gravity.

In this screenshot, the script we’re going to create (VRIO_Button) is already created and attached, so don’t worry if you don’t have that component yet!

Set up child class

Now we’re going to create a new child class! You may have already converted your pickup script to use our interactable base class, in which case you already know what to do! But if not, here’s your chance to give it a go. Don’t forget to inherit from VRInteractableObject instead of MonoBehaviour in your class declaration.

using UnityEngine;
using Valve.VR;

public class VRIO_Button : VRInteractableObject
{
	public override void ButtonPressDown(EVRButtonId button, VRControllerInput controller)
	{
		//Button Press Down Code will go here
	}

	public override void ButtonPressUp(EVRButtonId button, VRControllerInput controller)
	{
		//Button Press Up Code will go here
	}
}

Since we want to know when a button is pressed and released, let’s override both the ButtonPressDown and ButtonPressUp methods. Don’t forget the override keyword in the method declaration!

Note: Normally, when you override a method from a base class, you’d include the call base.MethodName(); in order to run whatever logic happens in the base class as well as the logic in the child class. I omitted it here because I don’t plan to put any logic in the base class. I probably could have left it in anyway. Here’s to bad habits.
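If you need a refresher, the base class from the earlier sections looks roughly like this. This is a minimal sketch for reference only – your version from Section 3 may contain more:

```csharp
using UnityEngine;
using Valve.VR;

//Minimal sketch of the interactable base class from earlier sections.
//Methods are virtual so child classes only override what they need.
public class VRInteractableObject : MonoBehaviour
{
	public virtual void ButtonPressDown(EVRButtonId button, VRControllerInput controller) { }
	public virtual void ButtonPressUp(EVRButtonId button, VRControllerInput controller) { }
}
```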

Restrict to Trigger Button

Next we want to make sure that our action only happens when the user presses a certain button, not every button. In this case, we’ll use the trigger. Like before, we’ll set up a class variable with a button enum, and check if the button passed to the method matches it. We’ll make it public again so we can change it later in the inspector if we want.

public EVRButtonId buttonToTrigger = EVRButtonId.k_EButton_SteamVR_Trigger;

public override void ButtonPressDown(EVRButtonId button, VRControllerInput controller)
{
	//If button pressed is desired "trigger" button
	if (button == buttonToTrigger)
	{

	}
}

public override void ButtonPressUp(EVRButtonId button, VRControllerInput controller)
{
	//If button released is desired "trigger" button
	if (button == buttonToTrigger)
	{

	}
}

Note that in the version of SteamVR/OpenVR/Unity that I’m using, a public field of the button enum won’t quite match up properly in the inspector: the value shown in the dropdown won’t be the one you selected (or the default). Your selection did work! It just displays wrong, for reasons I won’t go into involving numbered enums.

Animating the Button

Really, there are tons of ways you can do this. Since it’s not super relevant to our VR skills, I’ll just briefly explain how I did it. If you want more time to study it, feel free to look at the full code provided in the project. What we’re doing here is storing two target positions (Vector3s) for the button – a pressed position and a released position.

In the ButtonPressDown method we set the target position to the button’s pressed position, and in the ButtonPressUp method, to the button’s rest position. Then, in the update loop, if the button isn’t at the target position, we use MoveTowards to move the button’s transform (the inner bit, not the base) toward the target destination at a maximum speed.

There are lots of ways you could animate the button, including using Unity’s Animation Tools.
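For reference, the Update snippet that follows relies on a handful of class variables. Here’s a sketch of how you might declare and initialize them – the field names match the snippet, but the press depth and speed values are assumptions, so tune them to your model:

```csharp
public Transform button;              //The inner "button" bit that moves (assign in inspector)
public float buttonClickSpeed = 0.1f; //Max movement speed in units per second (assumed value)

protected Vector3 buttonStartPos;           //Resting ("up") position
protected Vector3 buttonDownPos;            //Pressed ("down") position
protected Vector3 currentButtonDestination; //Where the button is currently headed

public void Start()
{
	//Record the resting position, and derive the pressed position by
	//nudging the button down along its local Y axis (press depth is assumed)
	buttonStartPos = button.localPosition;
	buttonDownPos = buttonStartPos + Vector3.down * 0.01f;
	currentButtonDestination = buttonStartPos;
}
```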

Animating the Button Code Section

public void Update()
{
	//Check to see if button is in the same position as its destination position
	if (button.localPosition != currentButtonDestination)
	{
		//If its not, lerp toward it at a predefined speed.
		//Remember to multiply movements in Update by Time.deltaTime, so that things don't move faster 
		//on computers with higher framerates
		Vector3 position = Vector3.MoveTowards(button.localPosition, currentButtonDestination, buttonClickSpeed * Time.deltaTime);
		button.localPosition = position;
	}
}

public override void ButtonPressDown(EVRButtonId button, VRControllerInput controller)
{
	//If button is desired "trigger" button
	if (button == buttonToTrigger)
	{
		//Set button's destination position to the "down" position
		currentButtonDestination = buttonDownPos;
	}
}

public override void ButtonPressUp(EVRButtonId button, VRControllerInput controller)
{
	//Set button's destination position to the "up" position
	if (button == buttonToTrigger)
	{
		currentButtonDestination = buttonStartPos;
	}
}

Making the Button Do Something

Again, a little out of scope for the lesson, but if you’re curious about events and don’t know a lot about them, this is a good application for one. Below you’ll see the declaration of a static event and its delegate. Once it’s defined, we just need to invoke it when we want to tell everything that our button has been pressed.

public delegate void ButtonPress();
public static event ButtonPress OnButtonPress;

public override void ButtonPressDown(EVRButtonId button, VRControllerInput controller)
{
	//If button is desired "trigger" button
	if (button == buttonToTrigger)
	{
		//Set button's destination position to the "down" position
		currentButtonDestination = buttonDownPos;

		//Call the button press event
		if (OnButtonPress != null)
			OnButtonPress();
	}
}

Anything that wants to know when our button is pressed can subscribe to the event like so:

public void Awake()
{
	VRIO_Button.OnButtonPress += MethodToTrigger;
}

protected void MethodToTrigger()
{
	//This method will be called any time the button is pressed,
	//and the event is Invoked. Note that since the event is static,
	//this method will fire any time ANY button of that type is pressed.
}

Events are incredibly useful, and if this is your first time using them, I highly suggest doing some more research on them.
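One habit worth building while you’re at it: unsubscribe when the listener goes away. Because the event is static, a destroyed object that never unsubscribes leaves a dangling reference behind. A minimal sketch, added to the same subscriber script as above:

```csharp
public void OnDestroy()
{
	//Unsubscribe so the static event doesn't keep a reference
	//to this destroyed object
	VRIO_Button.OnButtonPress -= MethodToTrigger;
}
```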

Voila! That’s how we can use our new framework to create a button!

Slide Lever

Now let’s ramp up a bit and cover how you can use the controller’s position inside an interaction script to make a more complex interactable. Let’s create a lever that has three “stops” along a slide. The script we’re going to build will allow the user to grab the handle and move it along the slide, and when they let go, it will snap into place.

The Model

Since we aren’t including models (there aren’t any – I just built these out of primitives!), use whatever model you’d like. If you’d like my silly primitive lever, feel free to copy it from the project. Whatever you use, it needs to be broken into at least two basic parts: the lever and the base. One collider should be placed on the handle. The interaction script we’re building will go on the main parent object, along with a rigid body so our input script knows where to find it. We’re again going to set the rigid body to kinematic with no gravity, since we don’t want it to move.

Set up child class, just as before

You know what to do!

Track the Hand that Grabs

When our player grabs the handle, we want to keep a reference to the controller’s transform (which holds its position), so we can update the lever’s position accordingly.

public class VRIO_SlideLever : VRInteractableObject
{
	public EVRButtonId buttonToTrigger = EVRButtonId.k_EButton_SteamVR_Trigger;

	protected Transform controllerTransform;

	public override void ButtonPressDown(EVRButtonId button, VRControllerInput controller)
	{
		//If button pressed is desired "trigger" button
		if (button == buttonToTrigger)
		{
			controllerTransform = controller.gameObject.transform;
		}
	}

	public override void ButtonPressUp(EVRButtonId button, VRControllerInput controller)
	{
		//If button pressed is desired "trigger" button
		if (button == buttonToTrigger)
		{
			controllerTransform = null;
		}
	}
}

Match lever position to Controller (within limits)

In our Update loop, if there is a reference to a transform (which means the lever has been grabbed), we’re going to get the hand’s position relative to this object using InverseTransformPoint, which converts the global position of the hand into local space – meaning its position relative to the lever’s base. You should also create a public class variable for the lever’s Transform, so you can assign it in the inspector – we don’t want to assume that the object the script is attached to is the one whose transform we want to move.

public EVRButtonId buttonToTrigger = EVRButtonId.k_EButton_SteamVR_Trigger;
public Transform lever;
public float minZ;
public float maxZ;

protected Transform controllerTransform;

public void Update()
{
	//If the user is "grabbing" the lever
	if (controllerTransform != null)
	{
		//Get the controller's position relative to the lever (lever's local position)
		float zPos = transform.InverseTransformPoint(controllerTransform.position).z;

		//Get the lever's current local position
		Vector3 position = lever.transform.localPosition;

		//Set lever's z position to the Z of the converted controller position
		//Clamp it so the lever doesn't go too far either way
		position.z = Mathf.Clamp(zPos, minZ, maxZ);

		//Set lever to new position
		lever.transform.localPosition = position;
	}
}

Once we have those coordinates, we’re going to strip out the x and y coordinates, since we only want to move the lever along the z axis. Then we’re going to get the lever’s current position (making sure to use localPosition) and override its z with the one we grabbed from the controller. To ensure that this z position isn’t past the ends of our lever’s track, we’ll clamp it using two class variables, minZ and maxZ. Since every lever is different, we’ll make these public variables and set them in the inspector.

Setting Min and Max Z Values

Once you have your minZ and maxZ defined, you can go back to your object in Unity and figure out what those values are. Move your lever to its leftmost position, and copy its Z position. Then go back to your parent object (the one with our interactable script on it) and paste that value into your minZ. If your object is symmetrical like mine, and its max value is the same distance as your min, you can just reverse the sign to get the maxZ.

Make it Snap

We have a functioning lever now (try it out!) but it doesn’t snap into place when we let go. We want ours to snap into place (to limit the number of “positions” it has) so we’re going to set up three snap points, and push the lever into any of those positions when they get close enough.

public Transform lever;
public float minZ;
public float maxZ;
public float[] anchorPoints;
public float snapDistance = 0.05f;

First we define a public float array so we can store our three snap positions in the inspector. Once we have our variable, we can set the values similarly to how we set up our minZ and maxZ. In this case, our min and max are two of our snap points, and the third is zero. If your lever is different, you may have to move the lever around like we did in the previous step to find your positions.


When the user releases the lever, we want to check all of our snap points to see if it’s close to any of them, so we iterate through them and check the distance between the handle’s local position and each snap point. If the handle is within a certain distance (the public float snapDistance we declared above) of any of our snapping points, we assign the lever directly to that position.

public override void ButtonPressUp(EVRButtonId button, VRControllerInput controller)
{
	//If button pressed is desired "trigger" button
	if (button == buttonToTrigger)
	{
		controllerTransform = null;

		//Attempt to snap lever into a slot
		SnapToPosition();
	}
}

protected void SnapToPosition()
{
	//Cycle through each predefined anchor point
	for (int i = 0; i < anchorPoints.Length; i++)
	{
		//If lever is within "snapping distance" of that anchor point
		if (Mathf.Abs(lever.localPosition.z - anchorPoints[i]) < snapDistance)
		{
			//Get current lever position and update z pos to anchor point
			Vector3 position = lever.transform.localPosition;
			position.z = anchorPoints[i];
			lever.transform.localPosition = position;

			//Break so it stops checking other anchor points
			break;
		}
	}
}

Make it Do Something

Let’s use an event again! This time, our event needs to take an integer as an argument (which will represent which position the lever is in), so we need to include that when we define our delegate.

public delegate void SlideLeverEvent(int position);
public static event SlideLeverEvent OnLeverSnap;

protected void SnapToPosition()
{
	//Cycle through each predefined anchor point
	for (int i = 0; i < anchorPoints.Length; i++)
	{
		//If lever is within "snapping distance" of that anchor point
		if (Mathf.Abs(lever.localPosition.z - anchorPoints[i]) < snapDistance)
		{
			//Get current lever position and update z pos to anchor point
			Vector3 position = lever.transform.localPosition;
			position.z = anchorPoints[i];
			lever.transform.localPosition = position;

			//Call lever snap event
			if (OnLeverSnap != null)
				OnLeverSnap(i);
				
			//Break so it stops checking other anchor points
			break;
		}
	}
}

Note that this time, when you subscribe to the OnLeverSnap from another script, you will have to assign a method that takes the same arguments as the delegate. In this case, one integer.
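For example, a subscriber might look something like this – the method name here is just illustrative:

```csharp
public void Awake()
{
	VRIO_SlideLever.OnLeverSnap += OnLeverSnapped;
}

protected void OnLeverSnapped(int position)
{
	//position is the index of the anchor point the lever snapped to,
	//so 0, 1, or 2 for our three-stop lever
	Debug.Log("Lever snapped to position " + position);
}
```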

And that completes our lever! One downside to this lever, however, is that we can only move the lever by grabbing it. If we were to pick up an object in the scene and push it against the lever, it would not move. This might be an advantage for you, but if you want to be able to push around your lever, you’ll need to take a more physics based approach. Don’t worry if you have no idea where to start, that’s what we’re going to cover next!

Pull Lever (a physics based method)

If you want to use physics to make a complex interactable object, you can do that as well. This will allow you to create an object that you could grab to manipulate, but could also push around with other objects in the room, as you’d be able to in real life.

The Model

Again, I made my model out of primitives, but you can use whatever model you want, as long as it has two basic shapes: your “handle” and your base. Both should be children of an empty GameObject. Both should have rigidbodies, neither using gravity, and the one in the base being kinematic. This will make the handle movable, but the base not.

The handle should have one Collider, so we can grab it. The script we’re going to write should go on the handle where the Rigidbody is. The handle should be positioned in the middle of the track. The handle will also have a script on it similar to our parented pickup script, but I’ll get into that in a little bit.

Moving the Handle

Instead of using a script and value to drive and constrain the handle’s movement as we did with the slide lever, we’re going to use a configurable joint on the handle. The first time you see one of these, you might be a little intimidated. I didn’t use them for a long time because the other joints look complex enough, and this one is packed with even MORE properties! No need to worry though, we won’t have to mess with everything in there; I’ll outline the settings you need to worry about. However, they are rather fun, so the next time you’ve got some spare time, you should play around with them!

To sum up, I locked motion for everything but the X axis, set a Linear Limit of 0.05 (the distance from the “middle” handle position to the limits of the track – this may be different for your handle), set the Target Position (the handle’s default position, in this case the position at the “top”), and set the Position Spring so it will spring back to the default position when you let go.

This sets the lever up to be able to move between the top of the slide and the bottom, but with a preferred position at the top. Since this is all done with physics, you can grab a cube and push it against the lever and it will work. Note that if you plan to make your lever moveable (like if you make the base non-kinematic and give it a script so you can pick the whole thing up and move it), you should set the Connected Body to the base’s Rigidbody.
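If you’d rather set those same properties from code than in the inspector, a sketch might look like the following. The spring strength and limit values here are assumptions – use whatever you dialed in for your own lever:

```csharp
//Grab the Configurable Joint already added to the handle,
//then constrain the handle to slide along X only
ConfigurableJoint joint = GetComponent<ConfigurableJoint>();
joint.xMotion = ConfigurableJointMotion.Limited;
joint.yMotion = ConfigurableJointMotion.Locked;
joint.zMotion = ConfigurableJointMotion.Locked;
joint.angularXMotion = ConfigurableJointMotion.Locked;
joint.angularYMotion = ConfigurableJointMotion.Locked;
joint.angularZMotion = ConfigurableJointMotion.Locked;

//Linear Limit: distance from the middle of the track to either end
SoftJointLimit limit = new SoftJointLimit();
limit.limit = 0.05f;
joint.linearLimit = limit;

//Spring the handle back toward its Target Position when released
JointDrive drive = new JointDrive();
drive.positionSpring = 100f;   //Assumed strength, tune to taste
drive.maximumForce = float.MaxValue;
joint.xDrive = drive;
joint.targetPosition = new Vector3(-0.05f, 0f, 0f);  //The "top" of the track (assumed)
```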

Make it Grabbable

Currently, since our hands are not physical objects, we can’t grab the handle and pull it down. So we need a script similar to the pickup script we wrote in Module 2. We only went over how to make the parenting pickup script, but in our overview we covered two other types: Jointed and Physics. The example project contains the code for all three, if you’d like to take a look. The parented method won’t work for this situation, because if you remember, the parented method does not respect physics constraints.

We need one that works with the physics system, which both of the other two do. I personally chose the Physics method. So we need to take that script and attach it to the handle. This way we can “pick up” the handle, but since it’s constrained by a joint, we’re very limited in how we can move it around, which is what we want.

Make it Do Something

At this point, your lever is functional. But nothing happens yet when you pull the lever. Since we don’t want to add anything extra to our Physics Pickup script, as not every object we want to pick up is a lever, we’re going to create a second script. This class does not have to inherit from the VRInteractableObject script, since it won’t need any information from the controllers.

This script will simply detect when the lever is “down” and fire an event. Attach that script to your parent object. We usually attach our interactable script to the object with the Rigidbody, but since this one isn’t interactable, we can just put it on the parent where it’s easy to find. Now that we’ve got our script, we’ll start with some class variables, similar to the first two examples.

In this case, we want a reference to the lever’s Transform (so we can check on its position), the triggerXPosition, which holds the x position at which the lever is in its “down” position, and the triggerThreshold, which is how far the lever can be from that position and still be considered held down. Once you have these defined, go ahead and assign them in the inspector.

Our script will need to check the position of the handle in the Update loop, and if it’s within triggerThreshold distance of the pulled position (set by triggerXPosition), it triggers the pull event. Remember, make sure you’re using localPosition. Once the handle goes back out of range of the pulled position, we also want it to trigger the release event. We’re going to use our friends the events again – one for the lever pulled, and one for the lever released.

using UnityEngine;

public class VRIO_PullLever : MonoBehaviour
{
	[Header("Lever")]
	public Transform lever;
	public float triggerXPosition;
	public float triggerThreshold = 0.02f;

	protected bool pulled = false;

	public delegate void PullLeverEvent();
	public static event PullLeverEvent OnLeverPull;

	public delegate void ReleaseLeverEvent();
	public static event ReleaseLeverEvent OnLeverRelease;

	public void Update()
	{
		//If lever has not been "pulled" and is in the threshold distance of pulled position.
		if (!pulled && Mathf.Abs(lever.localPosition.x - triggerXPosition) < triggerThreshold)
		{
			//Set pulled to true and fire event
			pulled = true;

			if (OnLeverPull != null)
				OnLeverPull();
		}

		//If lever has been "pulled" and lever leaves threshold distance of pulled position
		if (pulled && Mathf.Abs(lever.localPosition.x - triggerXPosition) > triggerThreshold)
		{
			//Set pulled to false and fire event
			pulled = false;

			if (OnLeverRelease != null)
				OnLeverRelease();
		}
	}
}

Once you have that script completed, you’re ready to use your pull lever!

Projectile “Pew Pew” Device

Alright, let’s take a few different things we’ve already covered and combine them with some new stuff to set up something common to many videogames – a gun.

The Model

Yeah, that’s a gun. It’s made of primitives, okay? And I’m not a huge fan of guns so pink and blocky works just fine! You can use your own gun model or copy the one I created from the provided project. All of my primitives are in a “display” parent GameObject so they’re out of the way. Whatever model you use, I’d suggest making the GameObject that has the Mesh Renderer a child of a parent GameObject, similar to what I’ve done in the image below. It’s generally good practice to have your display objects not be the top parent.

We’ll also want to add a Rigidbody to the parent object, setting Use Gravity to true and Is Kinematic to false. Then add a Collider – if you’re using a model you can attach a MeshCollider to the display GameObject, or you can go simple like me and just make a big BoxCollider on the parent. Next, throw on an Audio Source.

Last, attach the Parented Pickup script that you made in Section 3! Mine is named VRIO_Parented. You may notice that there are a couple extra children in our Gun, but I’ll cover those in the next few steps.

The Interaction Point

At this point, you can pick up and throw around your object, just like you could with the cubes at the start of our course. However, that’s not quite right – it is realistic to be able to pick up a gun however you want, but functionally, we usually want the gun to snap to a “shooting” position in the player’s hand.

You don’t have to do this (it might be hilarious to watch someone try to pick up a gun in VR in exactly the right way so that it’s aim-able), but I recommend it. To accomplish this, we first need a reference to the position our hand should snap to. An empty GameObject’s transform will give us this information – so create a child in the gun and call it InteractionPoint.

The blue arrow on our transform should be pointing “forward” and the green arrow “up”. It may look here like the transform is a bit tilted, but that’s just because of the angle you hold the controller at. Getting this right may take some guess and check once we have the code implemented.

The Projectile Exit Point

The second child that we saw above is the transform we’re using to mark the exit point of our bullet, or the muzzle. The blue arrow should be pointing forward and the green arrow pointing up.

Change the Pickup Button to the Grip

If you haven’t updated your “pickup” script to include the ability to assign which button you want to pick up with, you might want to do that in order to follow along. I explained how to do so back in Section 3: “Expanding the Interactables”. Once you have that ability, go to the component on your gun and change the Pickup Button to the Grip Buttons. Remember that after you choose one, it will display the wrong item on the dropdown. But don’t worry, your selection was properly recorded.

Implementing the Interaction point

Now we need to make another adjustment to our pickup script that allows us to utilize the interaction point we created before. Create a public Transform variable so you can assign the interaction point in the inspector, and then assign it.

Then update the pickup script so that, after parenting, it sets the position of the object to the inverse of the interaction point, if one exists.

protected void ParentedPickup(VRControllerInput controller)
{
	//Make object kinematic
	//(Not affected by physics, but still able to affect other objects with physics)
	rigidBody.isKinematic = true;

	//Parent object to hand
	transform.SetParent(controller.gameObject.transform);

	//If there is an interaction point, snap object to that point
	if (InteractionPoint != null)
	{
		//Set the position of the object to the inverse of the interaction point's local position.
		transform.localPosition = -InteractionPoint.localPosition;

		//Set the local rotation of the object to the inverse of the rotation of the interaction point.
		//When you're setting your interaction point the blue arrow (Z) should be pointing in the direction you want your hand to be pointing
		//and the green arrow (Y) should be pointing "up".
		transform.localRotation = Quaternion.Inverse(InteractionPoint.localRotation);
	}	
}

Having a hard time wrapping your head around how that works? It’s ok, you’re not alone. It’s a little hard to explain, too.

Local Position Interactions

Our interaction point’s localPosition is its distance from the origin (or zero) of the parent GameObject (the gun) along all three axes. Its localRotation is the relative rotation as compared to the parent GameObject. Since the interaction point is where we want our hand to be positioned within the object, the interaction point’s localPosition and localRotation are the position and the rotation that we want our hand to be at relative to the gun.

The below image is of the gun attached to the hand as a child, and with a local position of 0, 0, 0. This means that the gun’s zero point is in the same spot as the hand’s zero point.

As you can see, our interaction point is up and to the right of the gun’s zero. So to get there, we want our hand to move up and to the right. The problem with that is, we can’t move our hand, we’re moving the gun. So, instead of moving our hand up and to the right, we need to move the gun in the inverse direction, or down and to the left. Also, the interaction point is rotated counterclockwise from zero, so we’re going to rotate our gun clockwise.

Now That Feels Awkward

This might look real awkward, but once you’re holding the Vive controller it’s the position that feels most natural. Now, obviously, “down and to the left” and “counterclockwise” aren’t things we can write out in code – especially since those directions are relative to how we’re looking at the gun. That’s where our localPosition, localRotation, and the “inverse” part comes in.

When you have a Vector3, which is just a set of coordinates, all you need to do to get the inverse is multiply it by -1, or just add a minus sign to the beginning of the variable. Getting the opposite rotation is a little tougher to explain, but all you really need to know is that Quaternions (a crazy clever way to store rotational data – you should read about them!) have an Inverse function, so you can just use that.
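If it helps to see it concretely: a rotation composed with its inverse cancels out to the identity, which is exactly why applying the inverse of the interaction point’s rotation to the gun lines the interaction point up with the hand.

```csharp
//A rotation composed with its inverse is the identity (no rotation at all)
Quaternion tilt = Quaternion.Euler(0f, 0f, 30f);
Quaternion cancelled = tilt * Quaternion.Inverse(tilt);
Debug.Log(cancelled.eulerAngles);  //(0, 0, 0)
```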

Make it Pew Pew

Now that we can pick up our gun and hold it properly, we need to make it fire when we pull the trigger. So let’s create a second interaction script (another child class!) that will take care of firing the bullet. Just like with our pull lever, we don’t want to add this logic to our pickup script, since not all objects using the pickup script will be guns.

using UnityEngine;
using Valve.VR;

public class VRIO_Gun : VRInteractableObject
{
	public EVRButtonId fireButton = EVRButtonId.k_EButton_SteamVR_Trigger;
	public Transform projectileExitPoint;
	public GameObject bulletPrefab;
	public float bulletSpeed = 400;

	public override void ButtonPressDown(EVRButtonId button, VRControllerInput controller)
	{
		//If button is desired "fire" button
		if (button == fireButton)
		{
			//Shoot
			ShootBullet();
		}
	}

	protected void ShootBullet()
	{
		//Create bullet and set it to muzzle's position and rotation
		GameObject bullet = Instantiate(bulletPrefab);
		bullet.transform.position = projectileExitPoint.position;
		bullet.transform.rotation = projectileExitPoint.rotation;

		//Add force to bullet
		Rigidbody bulletRigidbody = bullet.GetComponent<Rigidbody>();
		bulletRigidbody.AddForce(transform.forward * bulletSpeed);
	}
}


One More Thing to Cover

Alright, let’s cover this super fast! Our class variables are: fireButton, a public enum for the button we want the user to press in order to fire the gun; projectileExitPoint, which is my fancy way of saying “muzzle” – assign the muzzle transform we made earlier to it in the inspector; bulletPrefab, a reference you can drag and drop the prefab of your bullet into (don’t know about prefabs? Read about them!); and bulletSpeed, which is the speed you want your bullet to travel. Make it public like the rest so you can adjust it in the inspector.

Once we’ve got our variables defined and assigned, we can put together our methods. We override ButtonPressDown from the VRInteractableObject parent class, check to see if the button pressed is our previously defined fireButton, and if so, call ShootBullet. The ShootBullet method just creates an instance of your bullet prefab, sets its position to the muzzle’s position (here we’re using world position instead of local, since the bullet isn’t a child of the gun), and then applies force to the bullet’s Rigidbody (which should be part of your bullet prefab, along with a collider). And there you have it! Gun script!

Note that this setup does have a weakness: there is nothing to ensure that you’re holding the object before you fire. So you can just put your hand inside the gun and pull the trigger, and it’ll fire from the table or wherever it’s resting. Just like real life! (Yay?) There are a few ways to account for this, but I’ll let you figure it out – you’re a pro now! (Hint: what if the gun script was a child of the pickup script?)
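If you don’t want to chase that hint just yet, here’s one possible sketch of a guard. With the parented pickup, the gun only has a parent (the controller) while it’s held, so checking for a parent is a cheap – if slightly hacky – way to tell. This approach is my assumption, not the project’s code:

```csharp
public override void ButtonPressDown(EVRButtonId button, VRControllerInput controller)
{
	//Only fire if the gun is currently parented to something,
	//which (with the parented pickup) means it's being held
	if (button == fireButton && transform.parent != null)
	{
		ShootBullet();
	}
}
```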

Haptic Feedback

Haptics are a really familiar method for providing feedback to the user. Who didn’t freak out when they got their hands on their first Rumble Pack? So let’s provide a little feedback when the user fires our gun.

public override void ButtonPressDown(EVRButtonId button, VRControllerInput controller)
{
	//If button is desired "fire" button
	if (button == fireButton)
	{
		//Shoot
		ShootBullet();

		//Haptic pulse
		controller.device.TriggerHapticPulse(2000);
	}
}

The default value is 500, which is super light, so you’ll have to play around with the value you pass in until it feels right.

Audio Feedback

Let’s go for broke and also provide some audio feedback for when the bullet is fired. The simplest implementation for this is to add an audio source to the GameObject, assign the sound you want in the inspector, and play it when the trigger is pulled.
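The snippet that follows assumes a class variable holding that Audio Source reference, something like this – assign the gun’s Audio Source to it in the inspector (or let it fall back to the one on the same GameObject):

```csharp
//Reference to the gun's Audio Source (assign in inspector)
public AudioSource gunAudioSource;

public void Awake()
{
	//If nothing was assigned, grab the Audio Source on this GameObject
	if (gunAudioSource == null)
		gunAudioSource = GetComponent<AudioSource>();
}
```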

public override void ButtonPressDown(EVRButtonId button, VRControllerInput controller)
{
	//If button is desired "fire" button
	if (button == fireButton)
	{
		//Shoot
		ShootBullet();

		//Haptic pulse
		controller.device.TriggerHapticPulse(2000);

		//Trigger audio
		gunAudioSource.Play();
	}
}

Bits and Bobs

I added a few more embellishments to my gun script – mostly the ability to switch it between automatic fire and single shot, and then hooking the toggle of that up to the button we discussed earlier. If you want to check it out, look at the script in the example project!

And that wraps up our section on basic interactables! As you can see, you can take your interaction scripts and manipulate them to do lots of different things – go experiment and have fun!

Previously: Expanding the Interactables Class.
Next Up: Input Via Raycasting.

Learn how to design VR apps with our full virtual reality Unity3d training. Visit our courses page for more info.