Let’s have a look at three virtual camera algorithms!
3rd Person Follow Algorithm
Just like in a third-person adventure game, we can set up a virtual camera to follow behind a character! The 3rd Person Follow Body algorithm keeps the camera at a constant position and distance relative to the Follow target. A Follow target must be assigned before the algorithm can be used.
3rd Person Follow Properties
The Damping property controls how responsively the camera tracks the target. The value is the time in seconds the camera takes to catch up to the target’s new position.
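Cinemachine’s actual implementation isn’t shown here, but this kind of catch-up behavior is commonly built with exponential smoothing, where the damping value acts as a time constant. A rough Python sketch (the function name and numbers are my own, for illustration only):

```python
import math

def damped_follow(camera_pos, target_pos, damping, dt):
    """Move the camera toward the target; `damping` is roughly the
    time in seconds to close most of the gap. Illustrative only."""
    if damping <= 0:
        return target_pos  # no damping: snap straight to the target
    # Exponential smoothing: the remaining gap decays over time.
    t = 1.0 - math.exp(-dt / damping)
    return camera_pos + (target_pos - camera_pos) * t

# Usage: the camera chases a target that jumped to x = 10.
pos = 0.0
for _ in range(60):                 # one second at 60 fps
    pos = damped_follow(pos, 10.0, 0.5, 1 / 60)
# With 0.5 s of damping, most of the gap is closed after one second.
```

The same idea extends per axis, which is why damping is often exposed as separate x, y, and z values.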
The Shoulder Offset property positions the shoulder pivot relative to the Follow target.
Vertical Arm Length
The Vertical Arm Length property affects the Follow target’s screen position when the camera rotates vertically.
The Camera Side property specifies which shoulder the camera sits over. Sliding the value from 0 to 1 moves the camera from the left shoulder to the right, mirroring the same position on the opposite side.
The Camera Distance property specifies the distance between the Follow target and the camera.
Camera Collision Filter
The Camera Collision Filter specifies which layers the camera collides with; obstacles on other layers are ignored.
The Ignore Tag property makes the collision logic ignore obstacles with the chosen tag. It is recommended to set this to the Follow target’s tag.
The Camera Radius property specifies how close the camera can get to collidable obstacles without adjusting its position.
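To see how these properties fit together, here is a rough sketch of how a rig like this might compose the camera position from the shoulder offset, vertical arm length, camera side, and camera distance. This is my own illustration, not Cinemachine’s code; it considers only the target’s yaw and ignores collision:

```python
import math

def third_person_rig(target_pos, target_yaw_deg, shoulder_offset,
                     vertical_arm_length, camera_side, camera_distance):
    """Illustrative only: compose a 3rd-person camera position.
    camera_side: 0 = left shoulder, 1 = right shoulder."""
    x, y, z = target_pos
    sx, sy, _ = shoulder_offset
    yaw = math.radians(target_yaw_deg)
    fwd = (math.sin(yaw), math.cos(yaw))   # target forward in (x, z)
    right = (fwd[1], -fwd[0])              # perpendicular on the ground
    # Blend the horizontal shoulder offset from -sx (left) to +sx (right).
    side_x = sx * (2.0 * camera_side - 1.0)
    # Shoulder pivot: sideways and up from the target.
    shoulder = (x + right[0] * side_x, y + sy, z + right[1] * side_x)
    # Hand: raised above the shoulder by the vertical arm length.
    hand_y = shoulder[1] + vertical_arm_length
    # Camera: pulled straight back from the hand by the camera distance.
    return (shoulder[0] - fwd[0] * camera_distance,
            hand_y,
            shoulder[2] - fwd[1] * camera_distance)
```

A real implementation would also raycast from the shoulder to the camera and pull the camera in when the Camera Collision Filter reports a hit.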
Orbital Transposer Algorithm
This virtual camera Body algorithm moves the camera in an orbit around the Follow target. It is recommended to pair it with a Look At target using the Composer Aim algorithm. The algorithm also lets the player rotate the camera around the target freely, without any code!
Orbital Transposer Properties
The Binding Mode property determines how the offset from the target is interpreted. I wrote about Binding Mode and its modes here.
The Follow Offset property sets the position offset the camera attempts to maintain from the Follow target.
XYZ, Yaw, Pitch, Roll Damping
The Damping properties specify how responsively the camera tries to maintain the offset on the x, y, and z axes and around the yaw, pitch, and roll angles. A small value makes the camera more responsive; a large value makes it move more slowly. Depending on the Binding Mode, some of these properties are hidden.
The Heading properties specify how to calculate the heading of the Follow target.
- Definition: Choose Position Delta to calculate the heading from the difference between the target’s position on the last update and the current frame. Choose Velocity to use the velocity of the target’s Rigidbody; if the target has no Rigidbody, the algorithm falls back to Position Delta. Choose Target Forward to use the target’s local forward axis as the heading. Choose World Forward to use a constant world-space forward as the heading.
- Velocity Filter Strength: Controls the smoothing of the velocity when using Position Delta or Velocity in Definition.
- Bias: Angular offset in the orbit to place the camera, relative to the heading.
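The Definition options above amount to choosing a direction vector and converting it to a yaw angle. A hedged Python sketch of that selection logic, working with 2D (x, z) ground-plane vectors (my own illustration, not Cinemachine’s code):

```python
import math

def heading_yaw(definition, prev_pos, cur_pos,
                velocity=None, target_forward=None):
    """Pick a heading direction per the Definition mode and return
    its yaw in degrees. Illustrative only; 2D (x, z) vectors."""
    if definition == "Velocity" and velocity is None:
        definition = "Position Delta"      # no Rigidbody: fall back
    if definition == "World Forward":
        direction = (0.0, 1.0)             # constant world forward (+z)
    elif definition == "Target Forward":
        direction = target_forward
    elif definition == "Velocity":
        direction = velocity
    else:                                  # Position Delta
        direction = (cur_pos[0] - prev_pos[0], cur_pos[1] - prev_pos[1])
    return math.degrees(math.atan2(direction[0], direction[1]))

# A target that moved along +x between updates has a heading of 90 degrees:
print(heading_yaw("Position Delta", (0.0, 0.0), (1.0, 0.0)))  # 90.0
```

The Bias would then simply be added to the returned yaw, and the Velocity Filter Strength would smooth `direction` over several frames before the angle is computed.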
Recenter To Target Heading
The Recenter To Target Heading properties control automatic recentering when there is no input. To enable it, check Enabled. Wait Time is the number of seconds the camera waits after the last input before recentering. Recentering Time is the maximum angular speed of the recentering motion.
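In code terms, recentering boils down to waiting out an idle timer and then turning the axis toward the heading at a capped angular speed. A rough sketch of that idea (illustrative only; the parameter names are mine, not Cinemachine’s):

```python
def recenter_step(axis_deg, heading_deg, idle_time, wait_time,
                  max_speed_deg_per_s, dt):
    """After `wait_time` seconds with no input, rotate the axis
    toward the target heading, capped at a maximum angular speed.
    Illustrative only, not Cinemachine's implementation."""
    if idle_time < wait_time:
        return axis_deg                    # still waiting for input to settle
    delta = heading_deg - axis_deg
    # Wrap the error into [-180, 180] so we always turn the short way.
    delta = (delta + 180.0) % 360.0 - 180.0
    step = max(-max_speed_deg_per_s * dt,
               min(max_speed_deg_per_s * dt, delta))
    return axis_deg + step
```

Calling this every frame eases the axis back to the heading; once `delta` is smaller than one frame’s worth of rotation, the axis lands exactly on it.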
The X Axis properties control the behavior of the camera in response to the player’s input.
- Value: The current value of the axis, in degrees.
- Min Value: The minimum value for the axis.
- Max Value: The maximum value for the axis.
- Wrap: If checked, then the axis will wrap around at the Min and Max values, forming a loop.
- Max Speed: The maximum speed of this axis in degrees/second.
- Speed Mode: How the axis responds to input.
- Accel Time: The amount of time in seconds to accelerate to MaxSpeed.
- Decel Time: The amount of time in seconds to decelerate the axis to zero.
- Input Axis Name: The name of the axis as specified in the Unity Input Manager.
- Input Axis Value: The value of the input axis. A value of zero means no input.
- Invert: Check to invert the raw value of the input axis.
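Putting several of these properties together, a per-frame axis update could look roughly like this. It is a simplified sketch of my own (accel/decel smoothing is omitted), not Cinemachine’s actual code:

```python
def update_axis(value, input_value, max_speed, min_value, max_value,
                wrap, invert, dt):
    """Scale the raw input by the maximum speed, then clamp or wrap
    the axis value. Illustrative only; no accel/decel smoothing."""
    if invert:
        input_value = -input_value
    value += input_value * max_speed * dt    # degrees moved this frame
    if wrap:
        # Wrap around at the Min and Max values, forming a loop.
        span = max_value - min_value
        value = (value - min_value) % span + min_value
    else:
        value = max(min_value, min(max_value, value))
    return value

# One second of full-right input at 10 deg/s, wrapping at +/-180:
print(update_axis(175.0, 1.0, 10.0, -180.0, 180.0, True, False, 1.0))  # -175.0
```

With Accel Time and Decel Time in play, `input_value * max_speed` would instead be a smoothed speed that ramps up toward Max Speed and back down to zero.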
Transposer Algorithm
The Transposer algorithm moves the virtual camera at a fixed offset to the Follow target. How the offset is interpreted depends on the Binding Mode.
Transposer Binding Mode
The Binding Mode specifies the coordinate space used to interpret the offset from the target. The binding modes behave largely the same, except for Simple Follow With World Up.
Lock To Target On Assign
The Lock To Target On Assign mode makes the orientation of the virtual camera match the local frame of the Follow target at the moment the target is assigned. After that, the camera does not rotate along with the target.
Lock To Target With World Up
The Lock To Target With World Up makes the virtual camera use the local frame of the Follow target with tilt and roll set to zero. It ignores all target rotations except yaw.
Lock To Target No Roll
The Lock To Target No Roll mode makes the virtual camera use the local frame of the Follow target, with roll set to zero.
Lock To Target
The Lock To Target mode makes the virtual camera use the local frame of the Follow target. When the target rotates, the camera moves with it to maintain the offset and to maintain the same view of the target.
World Space
The World Space mode interprets the offset in world space, relative to the origin of the Follow target. The camera does not change position when the target rotates.
Simple Follow With World Up
The Simple Follow With World Up mode interprets the offset and damping values in camera-local space. The camera emulates the action a human camera operator would take when instructed to follow a target.
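The practical difference between the modes is how the Follow Offset is transformed before being added to the target’s position. Here is a yaw-only sketch of my own contrasting World Space with Lock To Target With World Up (not Cinemachine’s code, and only two of the modes are covered):

```python
import math

def interpret_offset(binding_mode, offset, target_yaw_deg):
    """Turn a Follow Offset into a world-space offset. Illustrative,
    yaw-only; only two of the binding modes are sketched."""
    if binding_mode == "World Space":
        return offset                      # target rotation is ignored
    if binding_mode == "Lock To Target With World Up":
        # Rotate the offset by the target's yaw so the camera keeps
        # the same view of the target as it turns.
        yaw = math.radians(target_yaw_deg)
        x, y, z = offset
        return (x * math.cos(yaw) + z * math.sin(yaw),
                y,
                -x * math.sin(yaw) + z * math.cos(yaw))
    raise ValueError("mode not covered by this sketch")

# Five units behind a target that has turned 90 degrees:
# World Space leaves the offset alone, so the target now faces sideways
# to the camera; Lock To Target With World Up swings the camera around
# so it stays behind the target.
```

The full Lock To Target mode would apply the target’s complete rotation (yaw, pitch, and roll) to the offset rather than yaw alone.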
Now that we understand the 3rd Person Follow, Orbital Transposer, and Transposer algorithm properties, we can start applying them to our games! If you are making a third-person adventure game, look into the 3rd Person Follow algorithm. Combined with the Orbital Transposer, we get a better third-person view system for looking around the target’s surroundings. As for the plain Transposer, I am not so sure. Any ideas?