Libuv, uvw, flecs oh my!

Adding in the glue

I spent most of the weekend getting the project set up (remember, it’s all open source!), which means fighting with CMake. I also did a lot of reading on libuv vs. Boost.Asio for an event-based networking system, and finally settled on uvw, a C++ wrapper around libuv. I initially struggled to get libuv to build properly, all because I forgot a single line in the CMakeLists.txt that actually tells uvw to build as a library (and pull in libuv along with it):

# Have uvw download/configure libuv for us.
# BUILD_UVW_LIBS is required!
set(FETCH_LIBUV ON)
set(FIND_LIBUV ON)
set(BUILD_UVW_LIBS ON)
...

FetchContent_Declare(
  uvw
  GIT_REPOSITORY https://github.com/skypjack/uvw.git
  GIT_TAG v3.1.0_libuv_v1.45
)
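
For completeness, here’s roughly how the rest of the wiring might look. This is just a sketch: the executable name and source path are placeholders, and the exported target name (uvw::uvw-static) is from memory, so double check what uvw actually defines when BUILD_UVW_LIBS is on.

FetchContent_MakeAvailable(uvw)

# Hypothetical target; link against the static uvw library,
# which carries libuv along with it.
add_executable(server src/main.cpp)
target_link_libraries(server PRIVATE uvw::uvw-static)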

Without BUILD_UVW_LIBS set to ON, I was getting a ‘uv.h header file missing’ error, along with /usr/bin/ld being unable to link against the library. But now it’s all set. The flecs integration went super smoothly, and I figured out how to get the libuv event loop and the ECS world.progress() to play nicely together. Below is basically the ‘server loop’:

// Track server time
auto serverTime = std::chrono::duration<double, std::milli>(0.f);
// 30hz server tick rate
const auto serverTick = std::chrono::duration<double, std::milli>(1000.f/30.f);

while (1)
{
	// Calculate start time
	const auto tickStart = std::chrono::steady_clock::now();
	
	// run the libuv event loop
	const int ret = loop->run(uvw::details::uvw_run_mode::NOWAIT);
	
	// tick ECS world right here...
	ecs.progress(); 
	
	// see how long our event loop (reading network/events) and ecs world took
	const auto iterationTime = std::chrono::duration<double, std::milli>(std::chrono::steady_clock::now() - tickStart);
	
	// now figure out how much to sleep (and clamp between 0 and max serverTick)
	const auto sleepTime = std::clamp(serverTick - iterationTime, std::chrono::duration<double, std::milli>(0.f), serverTick);
	
	// TODO send responses to clients
	
	// Sleep so our server tick rate is predictable
	std::this_thread::sleep_for(sleepTime);
	
	// Advance our total server time by one tick.
	serverTime += serverTick;
}
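
For context, the “reading network/events” part above happens inside that loop->run(NOWAIT) call, driven by a UDP handle registered on the same loop. The post doesn’t show that piece, so here’s a minimal sketch of how the receive side might be wired up with uvw; the port and handler body are assumptions on my part, and the real code presumably hands packets off to the ECS rather than just inspecting them:

auto server = loop->resource<uvw::udp_handle>();

// Fires for each datagram the loop picks up during loop->run(NOWAIT).
server->on<uvw::udp_data_event>([](const uvw::udp_data_event &evt, uvw::udp_handle &) {
	// evt.data is a unique_ptr<char[]>, evt.length is its size.
	// A real server would queue this up for the ECS world here.
});

server->bind("127.0.0.1", 4242);
server->recv();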

I also played around with some client code. I first tried spinning up my own threads to blast out messages, but I quickly hit a problem: std::thread’s destructor calls std::terminate() if the thread is still joinable, which promptly kills your program. Neat. I then switched to the work queue that libuv has built in and that I had apparently glossed over. Here’s the client code for blasting out messages:

{
...
	for (size_t i = 0; i < 100; i++)
	{
		auto req = loop->resource<uvw::work_req>([]() {});
		req->on<uvw::work_event>([&loop, &clientTick, &msgCount](const auto &, auto &) {
		
			auto client = loop->resource<uvw::udp_handle>();
			
			char data[4];
			for (size_t messageId = 0; messageId < 100; messageId++)
			{
				data[3] = (messageId >> 24) & 0xFF;
				data[2] = (messageId >> 16) & 0xFF;
				data[1] = (messageId >> 8) & 0xFF;
				data[0] = messageId & 0xFF;
				
				client->send(uvw::socket_address{"127.0.0.1", 4242}, data, 4);
		
				// Run the loop once so the queued send actually goes out
				loop->run(uvw::loop::run_mode::ONCE);

				// Sleep a tiny bit, otherwise: chaos.
				std::this_thread::sleep_for(std::chrono::duration<double, std::milli>(1));
			}
			
			client->close();
		});

		req->queue();
	}

	loop->run();
	...

Using this little work queue loop, my server was processing about 30 messages (just 4 bytes each) per tick, with an average tick processing time of 0.355378 ms. Not bad considering we have a budget of about 33 ms per tick! The server loop is also running on a single thread, so I can probably squeeze more UDP packet processing out of it, and I’m sure my janky client code could be optimized as well. But hey, it’s a good start.
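
For the curious, that average just comes from timing each iteration. A minimal way to track it (not necessarily what the real code does) is to accumulate the iterationTime already computed in the server loop above and divide by the tick count:

// Hypothetical bookkeeping added alongside the server loop above.
auto totalProcessing = std::chrono::duration<double, std::milli>(0.0);
std::uint64_t tickCount = 0;

// ...inside the while loop, right after iterationTime is computed:
totalProcessing += iterationTime;
++tickCount;

// Average tick processing time in milliseconds.
const double avgTickMs = totalProcessing.count() / static_cast<double>(tickCount);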

What’s next

I need to refactor this server/client into a library so I can add tests for it. I also want to figure out the theoretical maximum number of UDP packets I should be able to process per tick. I’m not 100% sure the event loop I have set up right now will work properly. Maybe I should use a libuv timer instead and let it drive the server tick (see the sketch below)? Maybe I should create a thread pool, have each thread listen, batch up the collected packets, and hand them to the main server loop? So many decisions/things to try!
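
As a rough idea of what the timer variant could look like (again, just a sketch with the ~33 ms interval and handler body assumed, not code from the project): a uvw::timer_handle fires on the loop itself, so the ECS tick becomes just another callback and the manual sleep goes away.

auto tick = loop->resource<uvw::timer_handle>();

tick->on<uvw::timer_event>([&ecs](const uvw::timer_event &, uvw::timer_handle &) {
	// Network events have already been handled by the loop;
	// just advance the ECS world here.
	ecs.progress();
});

// Fire immediately, then repeat roughly every 33 ms (~30 Hz).
tick->start(uvw::timer_handle::time{0}, uvw::timer_handle::time{33});

// The loop now owns the cadence: it interleaves I/O and the timer itself.
loop->run();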

Until next time.
