Area of Interest Part 2

In the last post I hinted that the AoI system would probably span a few posts. While I had the initial design and had built the sensors in Jolt Samples, I quickly noticed a problem: my character wasn’t triggering the sensor volumes.

Pairing Jolt’s CharacterVirtual with a Rigid Body

It turns out the CharacterVirtual class doesn’t actually have a rigid body, meaning it doesn’t trigger sensor volumes. The recommendation is to add a Character or a rigid body to the class that manages the CharacterVirtual. I opted to pair my character with a capsule rigid body instead of a full Character class.

In my ControllableCharacter class constructor I added the following:

// Pair a capsule with the character for ray casting and sensor hits.
// See https://github.com/jrouwe/JoltPhysics/discussions/856
// and https://github.com/jrouwe/JoltPhysics/discussions/239 for more details.
auto ShrinkCapsuleSize = 0.02f;

JPH::BodyCreationSettings CapsuleSettings(
	new JPH::CapsuleShape(0.5f * CharacterHeightStanding - ShrinkCapsuleSize,
		CharacterRadiusStanding - ShrinkCapsuleSize),
	PosVec, JPH::Quat::sIdentity(), JPH::EMotionType::Kinematic,
	physics::Layers::CHARACTER);

CapsuleSettings.mGravityFactor = 0.f;
CharacterCapsule = System->GetBodyInterface().CreateBody(CapsuleSettings);

System->GetBodyInterface().AddBody(CharacterCapsule->GetID(), JPH::EActivation::Activate);

This creates a capsule that sits just slightly inside the CharacterVirtual’s collision capsule. Note that I had to create a new layer so that this rigid body doesn’t collide with the CharacterVirtual itself.
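For reference, here’s a minimal sketch of what that layer table might look like. The names NON_MOVING, MOVING, SENSOR, and CHARACTER match the layers referenced elsewhere in this post, but the exact values and layout are assumptions:

namespace physics::Layers
{
	// Hypothetical layer values; only the names are used elsewhere in this post
	static constexpr JPH::ObjectLayer NON_MOVING = 0; // static world geometry
	static constexpr JPH::ObjectLayer MOVING = 1;     // dynamic rigid bodies
	static constexpr JPH::ObjectLayer SENSOR = 2;     // AoI trigger volumes
	static constexpr JPH::ObjectLayer CHARACTER = 3;  // character capsules
	static constexpr JPH::ObjectLayer NUM_LAYERS = 4;
}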

We also need to update the capsule’s velocity each frame; we do this in PrePhysicsUpdate:

// Update Character Capsule location as well
RVec3 NewPosition = PhysicsCharacter->GetPosition();
Vec3 Velocity = Vec3(NewPosition - OldPosition) / DeltaTime;

CharacterCapsule->SetLinearVelocity(Velocity);
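For context, OldPosition is the character’s position cached from the previous update; after computing the velocity we store the new position for next frame (the member name is an assumption):

// Cache this frame's position so the next update can compute velocity
OldPosition = NewPosition;

Alternatively, Jolt’s BodyInterface::MoveKinematic could drive the kinematic capsule toward a target position directly, but matching velocities like this keeps the capsule in lockstep with the CharacterVirtual.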

Here’s the ObjectLayerPairFilter that determines whether two object layers can collide:

/// Class that determines if two object layers can collide
class ObjectLayerPairFilterImpl : public ObjectLayerPairFilter
{
public:
	virtual bool ShouldCollide(ObjectLayer InObject1, ObjectLayer InObject2) const override
	{
		switch (InObject1)
		{
		case Layers::NON_MOVING:
			// Non-moving only collides with moving
			return InObject2 == Layers::MOVING;
		case Layers::MOVING:
			// Moving collides with everything except characters
			return InObject2 == Layers::NON_MOVING || InObject2 == Layers::MOVING || InObject2 == Layers::SENSOR;
		case Layers::CHARACTER:
			// TODO: We may only want characters to collide when PvP'ing/fighting, which will greatly complicate this class
			return InObject2 == Layers::CHARACTER || InObject2 == Layers::NON_MOVING || InObject2 == Layers::SENSOR;
		default:
			JPH_ASSERT(false);
			return false;
		}
	}
};
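For completeness, this filter is one of the pieces handed to Jolt when the physics system is initialized. A rough sketch, using the companion classes from Jolt’s Hello World sample (the constants and instance names here are placeholders):

// Sketch: wiring the filter into the physics system at startup
BPLayerInterfaceImpl BroadPhaseLayerInterface;              // maps object layers to broad-phase layers
ObjectVsBroadPhaseLayerFilterImpl ObjectVsBroadPhaseFilter;
ObjectLayerPairFilterImpl ObjectLayerFilter;

System->Init(cMaxBodies, cNumBodyMutexes, cMaxBodyPairs, cMaxContactConstraints,
	BroadPhaseLayerInterface, ObjectVsBroadPhaseFilter, ObjectLayerFilter);

Note that the new CHARACTER and SENSOR layers also need entries in the broad-phase layer mapping (BPLayerInterfaceImpl in the sample).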

Now our flecs level::Level module can inherit from JPH::ContactListener and start receiving contacts between characters and sensors!

struct Level : public JPH::ContactListener 
{
	// ...
	/**
	 * @brief Called by the physics system from multiple threads when two bodies start touching
	 * 
	 * @param InBody1 First body in the contact
	 * @param InBody2 Second body in the contact
	 * @param InManifold Manifold describing the contact surface
	 * @param InIOSettings Contact settings, which the listener may modify
	 */
	virtual void OnContactAdded(const Body &InBody1, const Body &InBody2, const ContactManifold &InManifold, ContactSettings &InIOSettings) override;

	/**
	 * @brief Called by the physics system from multiple threads when two bodies stop touching
	 * 
	 * @param InSubShapePair The pair of sub-shapes that are no longer in contact
	 */
	virtual void OnContactRemoved(const SubShapeIDPair &InSubShapePair) override;
	// ...
};
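The listener still needs to be registered with the physics system so these callbacks actually fire; something like the following, where LevelInstance is a placeholder for wherever the Level module lives:

// Register the level as the contact listener
System->SetContactListener(&LevelInstance);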

Tracking Characters

Now that our character is set up to trigger volumes on entry, it’s time to figure out where and how to manage visibility. Each character will be given a flecs component called Visibility that contains an unordered set:

struct Visibility
{
	std::unordered_set<flecs::entity, units::FlecsEntityHash> Neighbors;
};

Since this is an unordered set, we need to supply a hash functor for lookups; luckily, flecs entity ids are enough:

struct FlecsEntityHash 
{
	std::size_t operator()(const flecs::entity& Entity) const 
	{
		return 3 + Entity.id() * 53;
	}
};

Our level will record every time a character enters a sensor, and a flecs system executed each frame will then copy the entries from the sensor’s set into that character’s Visibility component.

Our process for tracking characters is now:

  1. Initialize the world and landscape, and add a local and multi sensor per landscape chunk:
auto Chunk = GameWorld.entity()
	.set<level::MapChunk>({LandscapeBody->GetID()})
	.set<level::LocalSensor>({LocalSensor})
	.set<level::MultiSensor>({MultiSensor});
  2. Run flecs’ GameWorld.progress() to tick our world.
  3. This causes our PhysicsUpdate flecs system to run.
  4. Call PhysicsSystem->Update(...), which uses many threads to do collision detection and triggers sensor contacts when a character’s rigid body enters a volume.
  5. OnContactAdded is called with a new character (see the sketch after this list); check the following:
    1. Make sure the contact event includes a sensor.
    2. Check that one of the two bodies is a character by comparing both object layers against physics::Layers::CHARACTER.
    3. Use a mutex to lock the rest of the OnContactAdded call, as it can be invoked by multiple physics solver threads.
    4. Find the character by looking up its body ID in a flecs find call.
    5. Query all local and multi sensors and add the character’s flecs entity to the matching sensor’s set of characters.
  6. OnContactRemoved is called; do the same as above, but erase the character from the set of characters instead of inserting it.
  7. Flecs then calls our SetupPlayerVisibility system, which:
    1. Is declared with write<character::Visibility>, as we mutate a component we are not actually searching for.
    2. Iterates over all map chunks & sensors.
    3. Looks up the character’s component by calling Character.get_mut<character::Visibility>().
    4. Writes to this character’s Neighbors set for all local sensors we are a part of.
    5. Writes to this character’s Neighbors set for all multi sensors we are a part of.
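Putting steps 5.1–5.5 together, a stripped-down OnContactAdded might look like the following. This is a sketch under assumptions: ContactMutex is an assumed member, and FindCharacterByBodyID and AddToSensor are hypothetical helpers standing in for the flecs find call and the sensor queries:

void Level::OnContactAdded(const Body &InBody1, const Body &InBody2,
	const ContactManifold &InManifold, ContactSettings &InIOSettings)
{
	// 5.1 + 5.2: one body must be a sensor, the other a character
	const Body *Sensor = InBody1.IsSensor() ? &InBody1 : InBody2.IsSensor() ? &InBody2 : nullptr;
	if (Sensor == nullptr)
		return;
	const Body *Other = (Sensor == &InBody1) ? &InBody2 : &InBody1;
	if (Other->GetObjectLayer() != physics::Layers::CHARACTER)
		return;

	// 5.3: Jolt invokes this callback from multiple solver threads
	std::scoped_lock Lock(ContactMutex);

	// 5.4: find the flecs entity that owns this rigid body (hypothetical helper)
	flecs::entity Character = FindCharacterByBodyID(Other->GetID());
	if (!Character.is_valid())
		return;

	// 5.5: add the character to the matching sensor's set (hypothetical helper)
	AddToSensor(Sensor->GetID(), Character);
}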

Now each character has a populated set of the characters in its general vicinity! What we will want to do in the future, before the server sends these neighboring characters’ data, is a ray cast to validate that they can actually see each other. I presume this will be a bit more expensive, so limiting the number of casts we have to do is crucial.
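When that time comes, Jolt’s narrow-phase query can do the line-of-sight test. A rough sketch, where FromEyePos, ToEyePos, and ViewerBodyID are placeholders:

// Sketch: line-of-sight check between two characters
JPH::RRayCast Ray(FromEyePos, JPH::Vec3(ToEyePos - FromEyePos));
JPH::RayCastResult Hit;
bool HitSomething = System->GetNarrowPhaseQuery().CastRay(
	Ray, Hit,
	JPH::BroadPhaseLayerFilter(),               // default: test all broad-phase layers
	JPH::ObjectLayerFilter(),                   // default: test all object layers
	JPH::IgnoreSingleBodyFilter(ViewerBodyID)); // don't hit our own capsule
// If the first hit isn't the target character's body, something is blocking the view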

To come up with this logic, I actually wrote it in Python first so I could test and iterate faster. The above logic for determining character visibility using multi-sensors can be simplified to the following Python code:

characters = [
    {'id': 1, 'visible': set()},
    {'id': 2, 'visible': set()},
    {'id': 3, 'visible': set()},
    {'id': 4, 'visible': set()},
    {'id': 5, 'visible': set()},
    {'id': 6, 'visible': set()},
]

mapchunks = [
    {1, 2, 3},
    {3, 4, 6},
    {5, 2, 1, 3},
]

for ch in characters:
    for chunk in mapchunks:
        if ch['id'] in chunk:
            ch['visible'] |= chunk

    print(f"player: {ch['id']} has {len(ch['visible'])} visible players: {ch['visible']}")

# output:
# player: 1 has 4 visible players: {1, 2, 3, 5}
# player: 2 has 4 visible players: {1, 2, 3, 5}
# player: 3 has 6 visible players: {1, 2, 3, 4, 5, 6}
# player: 4 has 3 visible players: {3, 4, 6}
# player: 5 has 4 visible players: {1, 2, 3, 5}
# player: 6 has 3 visible players: {3, 4, 6}

I’m pretty sure that with an adjacency list or some fancy matrix maths we could build a more efficient data structure/algorithm for these calculations. That being said, this should be good enough for a minimal Interest Management system for characters. Now I just need to come up with the server message(s) notifying clients of other clients in the vicinity, and we can start connecting multiple clients together!