Understanding Hytale's Multi-World Threading Model
If you’ve worked with Minecraft plugins, you’re used to a simple rule: everything runs on one thread. In Hytale, that changes completely. Each World runs on its own thread, which means your plugin code must follow new rules to avoid crashes and race conditions.
What You’ll Learn
- What is multi-threading? — Threads, parallelism, and race conditions
- The old world — How single-threaded servers work (and their limits)
- Hytale’s solution — Multi-world threading architecture
- The new rules — What’s safe, what crashes, and why
- Protecting your data — Thread-safe types for plugin state
- Patterns for modders — Real code examples from Hytale
We’ll use a Global Kill Counter plugin as our running example — a simple plugin that tracks kills across all worlds. This seemingly simple task reveals all the threading challenges you’ll face.
What is Multi-Threading?
Before we dive into Hytale’s architecture, let’s build a solid understanding of what threads are and why they matter. If you’re already comfortable with concurrency concepts, feel free to skip to the next section. But even experienced developers might appreciate the mental models we’ll establish here - they’ll help us reason about Hytale’s specific challenges later.
The Kitchen Analogy
Think of a thread as a chef in a restaurant kitchen. A single-threaded program is like a kitchen with one chef. That chef has to do everything: prep ingredients, cook the appetizers, plate the main course, and prepare dessert. No matter how talented they are, they can only do one thing at a time. If the steak needs 10 minutes to cook, the chef stands at the grill for 10 minutes. The salad waits. The soup goes cold.
Single-Threaded Kitchen (One Chef)
==================================
Time →
┌─────────────────────────────────────────────────────────────────┐
│ [Prep Salad] → [Cook Steak] → [Make Sauce] → [Plate Dessert] │
│ Chef 1 Chef 1 Chef 1 Chef 1 │
└─────────────────────────────────────────────────────────────────┘
Total time: Sum of all tasks
A multi-threaded program is like a kitchen with multiple chefs. While one chef grills the steak, another preps the salad, and a third works on dessert. Tasks that would run sequentially now run in parallel. The total time shrinks dramatically.
Multi-Threaded Kitchen (Four Chefs)
===================================
Time →
┌─────────────────────┐
│ [Prep Salad] │ Chef 1
├─────────────────────┤
│ [Cook Steak] │ Chef 2
├─────────────────────┤
│ [Make Sauce] │ Chef 3
├─────────────────────┤
│ [Plate Dessert] │ Chef 4
└─────────────────────┘
Total time: Duration of longest task
This is the promise of multi-threading: parallelism. Instead of executing instructions one after another, we execute them simultaneously. On a modern CPU with 8 or 16 cores, we can have 8 or 16 threads truly running at the same time. That’s 8 or 16 chefs in our kitchen.
The Coordination Problem
But here’s where it gets interesting. What happens when two chefs need the same knife?
Chef A is chopping onions. Chef B needs to slice tomatoes. There’s one chef’s knife. If they both grab for it at the same time, something bad happens. Maybe one gets the knife and the other waits. Maybe they collide and the knife goes flying. Maybe they each grab one end and pull.
In programming terms, that knife is a shared resource. When multiple threads need to access or modify the same piece of data, we have a coordination problem. This is the fundamental challenge of concurrent programming - not the parallelism itself, but managing the points where parallel execution touches shared state.
The following diagram shows what happens when two chefs need the same knife. Both reach for it simultaneously, leading to a conflict that must be resolved:
sequenceDiagram
participant Chef1 as Chef 1 (Thread A)
participant Knife as Knife (Shared Resource)
participant Chef2 as Chef 2 (Thread B)
Chef1->>Knife: Reaches for knife
Chef2->>Knife: Reaches for knife
Note over Knife: Conflict! Who gets it?
alt Chef 1 wins
Knife-->>Chef1: Gets knife
Chef2->>Chef2: Waits...
Chef1->>Knife: Returns knife
Knife-->>Chef2: Gets knife
else Chef 2 wins
Knife-->>Chef2: Gets knife
Chef1->>Chef1: Waits...
Chef2->>Knife: Returns knife
Knife-->>Chef1: Gets knife
end
In a real kitchen, chefs communicate. “I need the knife next.” “Give me two minutes.” They establish protocols. They wait their turn. In multi-threaded programming, we need similar protocols - mechanisms to ensure threads don’t corrupt shared data by accessing it simultaneously.
What is a Race Condition?
When two threads access shared data without proper coordination, we get a race condition. The outcome depends on which thread “wins the race” - which one happens to execute first. The result is unpredictable, often wrong, and maddeningly difficult to debug.
Let’s look at the simplest possible example: incrementing a counter.
// Shared state
int killCount = 0;
// This runs on multiple threads
void onPlayerKill() {
killCount++;
}
This looks completely innocent. It’s one line of code. How could it possibly be wrong?
Here’s the problem: killCount++ isn’t actually one operation. At the CPU level, it’s three:
- Read: Load the current value of killCount from memory into a register
- Increment: Add 1 to that register
- Write: Store the new value back to memory
// What killCount++ actually does:
int temp = killCount; // Step 1: READ
temp = temp + 1; // Step 2: INCREMENT
killCount = temp; // Step 3: WRITE
When two threads execute these three steps concurrently, their operations can interleave. Let’s trace through what happens when Thread A and Thread B both try to increment killCount from 0:
Initial state: killCount = 0
Thread A Thread B
──────── ────────
READ killCount (gets 0)
READ killCount (gets 0)
INCREMENT (0 → 1)
INCREMENT (0 → 1)
WRITE killCount = 1
WRITE killCount = 1
Final state: killCount = 1 ← WRONG! Should be 2
Both threads read 0, both increment to 1, both write 1. We processed two kill events, but only counted one. The other kill vanished into the void.
This is a race condition. The threads “raced” to complete their operations, and the loser’s update was silently overwritten. The truly insidious part? This doesn’t happen every time. Sometimes Thread A completes all three steps before Thread B starts, and you get the correct result. The bug only manifests when the operations happen to interleave in just the wrong way. You might run the code a thousand times and see correct results. Then on the thousand-and-first run, in production, with real players, it fails.
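The lost-update interleaving above is easy to reproduce in plain Java. The following standalone sketch (illustrative only - no Hytale API involved) hammers an unsynchronized counter from two threads; the final count almost always comes up short of the expected total:

```java
public class RacyCounterDemo {
    // Deliberately NOT thread-safe: ++ is read-increment-write
    static int killCount = 0;

    public static void main(String[] args) throws InterruptedException {
        final int perThread = 1_000_000;
        Runnable work = () -> {
            for (int i = 0; i < perThread; i++) {
                killCount++; // two threads interleave these three hidden steps
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start();
        b.start();
        a.join();
        b.join();

        // Expected 2,000,000 -- the actual value is almost always lower,
        // because interleaved increments silently overwrite each other.
        System.out.println("Expected: " + (2 * perThread) + ", got: " + killCount);
    }
}
```

Run it a few times: the result varies from run to run, which is exactly the non-determinism described above.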
Why Race Conditions Are Hard to Find
Race conditions are among the most difficult bugs to diagnose for several reasons:
Non-deterministic: The bug depends on timing, which varies based on CPU load, garbage collection pauses, network latency, and countless other factors. The same code produces different results on different runs.
Heisenbugs: Adding debug logging or breakpoints changes the timing, which can make the bug disappear while you’re trying to observe it. The act of measurement affects the result.
Low probability: If the race window is microseconds wide, you might need millions of operations before you hit it. The bug lurks, waiting for your busiest hour to strike.
Cascading effects: A single corrupted value can propagate through your entire system, causing failures far from the original race condition. You end up debugging the symptom, not the cause.
Concurrency vs. Parallelism
Two terms that often get confused are concurrency and parallelism. They’re related but distinct concepts.
Concurrency is about dealing with multiple things at once. It’s a property of your program’s structure. A concurrent program is designed to handle multiple tasks, managing their execution, switching between them, and coordinating their access to shared resources.
Parallelism is about doing multiple things at once. It’s a property of your program’s execution. Parallel execution means multiple operations physically happening at the same instant, typically on different CPU cores.
You can have concurrency without parallelism. A single-core CPU can run a concurrent program by rapidly switching between threads - the illusion of simultaneity. You can also have parallelism without concurrency (though it is less common) - for instance, a GPU executing the same operation on thousands of data points simultaneously, with no coordination needed because each operation is independent.
In practice, multi-threaded programs on modern hardware have both: a concurrent design that enables parallel execution.
flowchart LR
subgraph Concurrency["Concurrency (Structure)"]
direction TB
T1[Task 1]
T2[Task 2]
T3[Task 3]
end
subgraph Parallelism["Parallelism (Execution)"]
direction TB
C1[Core 1: Task 1]
C2[Core 2: Task 2]
C3[Core 3: Task 3]
end
T1 --> C1
T2 --> C2
T3 --> C3
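Concurrency without parallelism is easy to observe with a single-threaded executor: three tasks are queued "at once", but only one ever runs at a given instant, all on the same worker thread (plain Java, names illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleCoreKitchen {
    // Runs three "dishes" through a one-chef (one-thread) executor and
    // records which thread cooked each one.
    public static List<String> cook() throws InterruptedException {
        ExecutorService oneChef = Executors.newSingleThreadExecutor();
        List<String> cooks = new ArrayList<>();
        for (String dish : new String[]{"salad", "steak", "dessert"}) {
            oneChef.submit(() -> {
                synchronized (cooks) { // tasks run one at a time anyway; be tidy
                    cooks.add(Thread.currentThread().getName() + ":" + dish);
                }
            });
        }
        oneChef.shutdown();
        oneChef.awaitTermination(5, TimeUnit.SECONDS);
        return cooks; // all entries carry the same thread name, in FIFO order
    }
}
```

Every entry in the returned list names the same worker thread: concurrent structure, strictly sequential execution.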
Thread Safety
When we say code is thread-safe, we mean it behaves correctly when called from multiple threads simultaneously. No race conditions. No corrupted data. No matter how the operations interleave, the result is what you’d expect.
Making code thread-safe typically involves one or more of these strategies:
Immutability: Data that never changes can be safely shared. If the knife is welded to the counter and no chef can move it, there’s no conflict - they can all see it without problems. Immutable data structures are inherently thread-safe.
Isolation: Each thread gets its own copy of the data. Every chef has their own knife. No sharing means no conflicts. This is the approach Hytale takes with its multi-world threading model, as we’ll see.
Synchronization: Threads take turns accessing shared data. We put a lock on the knife drawer - only one chef can open it at a time. Others wait. This works but introduces performance overhead and complexity.
Atomic operations: Special CPU instructions that complete in one indivisible step. Instead of READ-INCREMENT-WRITE, we have a single ATOMIC-INCREMENT that can’t be interrupted. Like a chef who can grab, use, and return the knife in the blink of an eye - no one else can interfere.
Each approach has trade-offs. Immutability requires careful design. Isolation uses more memory and complicates data sharing. Synchronization adds overhead and risks deadlocks. Atomic operations are limited to simple operations. Choosing the right approach for each situation is the art of concurrent programming.
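Two of these strategies can be shown side by side on our counter. This is a generic Java sketch (not Hytale code): one field protected by synchronization, one using an atomic type:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounters {
    // Strategy: synchronization -- the monitor lets one thread in at a time
    private int lockedCount = 0;

    public synchronized void incrementLocked() {
        lockedCount++; // safe: read-increment-write happens under the lock
    }

    public synchronized int getLocked() {
        return lockedCount;
    }

    // Strategy: atomic operations -- one indivisible hardware-level update
    private final AtomicInteger atomicCount = new AtomicInteger(0);

    public void incrementAtomic() {
        atomicCount.incrementAndGet(); // cannot be interleaved mid-update
    }

    public int getAtomic() {
        return atomicCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounters c = new SafeCounters();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                c.incrementLocked();
                c.incrementAtomic();
            }
        };
        Thread a = new Thread(work);
        Thread b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // Unlike the racy version, both land on exactly 200000, every run
        System.out.println(c.getLocked() + " " + c.getAtomic());
    }
}
```

The atomic variant is usually preferred for simple counters: it avoids lock contention and cannot deadlock.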
Key Concepts Reference
Here’s a quick reference for the threading concepts we’ve covered:
| Concept | Definition | Kitchen Analogy |
|---|---|---|
| Thread | An independent sequence of execution that can run in parallel with other threads | A chef in the kitchen |
| Concurrency | Structuring a program to handle multiple tasks, managing their coordination | A kitchen designed for multiple chefs |
| Parallelism | Actually executing multiple operations at the same instant | Multiple chefs cooking simultaneously |
| Race Condition | A bug where the outcome depends on unpredictable timing of thread operations | Two chefs grabbing the same knife |
| Thread-Safe | Code that behaves correctly when accessed by multiple threads | A kitchen workflow where chefs never collide |
| Shared State | Data that multiple threads can access | The single knife everyone needs |
| Atomic Operation | An operation that completes in one indivisible step, cannot be interrupted | Grabbing and using a knife instantly |
Why Does This Matter for Hytale?
Traditional game servers like Minecraft’s Spigot run on a single thread. All game logic - player movement, mob AI, block updates, plugin code - executes sequentially on one thread. This makes programming simple: no race conditions, no synchronization needed, no thread-safety concerns. But it also means performance is fundamentally limited. One thread means one core. One core means one limit on how much the server can do per tick.
Hytale breaks this model. Each World runs on its own thread. If you have 4 worlds, you have 4 threads, potentially using 4 CPU cores. The server scales with world count. But this architectural choice introduces all the complexity we’ve discussed: shared state, race conditions, and the need for thread-safety.
The naive kill counter we showed earlier? In Hytale, that code would fail. Two players killing mobs in different worlds would trigger onPlayerKill() on different threads simultaneously. The race condition would corrupt your count. You’d lose kills, report wrong statistics, maybe crash entirely.
Understanding threading isn’t optional in Hytale - it’s required for writing correct plugins. The good news is that Hytale provides patterns and tools to make this manageable. The better news is that once you understand the concepts, the rules are straightforward. Let’s see how traditional servers work, and then how Hytale does things differently.
The Old World: Single-Threaded Servers
To understand why Hytale’s threading model matters, we first need to understand how traditional game servers work. This is not just historical context - it is the mental model most plugin developers bring with them, and unlearning it is half the battle.
The Game Loop
At its core, every game server is just a loop. Each iteration of that loop is called a tick, and each tick does three things:
- Process input - Read player actions (movement, clicks, chat messages)
- Update state - Apply game logic (physics, AI, damage, spawning)
- Send results - Broadcast the new state to connected players
┌─────────────────────────────────────────────────────────┐
│ THE GAME LOOP │
│ │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
│ │ READ │───▶│ UPDATE │───▶│ SEND │ │
│ │ INPUT │ │ STATE │ │ RESULTS │ │
│ └──────────┘ └──────────┘ └──────────┘ │
│ ▲ │ │
│ └───────────────────────────────────┘ │
│ (repeat) │
└─────────────────────────────────────────────────────────┘
This loop runs continuously while the server is online. The speed at which it runs determines how responsive the game feels.
TPS: Ticks Per Second
TPS (Ticks Per Second) measures how many times the game loop completes per second. Higher TPS means more frequent updates, which translates to smoother gameplay - but also more CPU usage.
The relationship between TPS and tick duration is simple arithmetic:
Tick Duration = 1000ms / TPS
Examples:
- 20 TPS = 1000ms / 20 = 50ms per tick
- 30 TPS = 1000ms / 30 = 33ms per tick
- 60 TPS = 1000ms / 60 = 16ms per tick
Different games choose different TPS based on their needs:
| Game | TPS | Tick Duration | Why? |
|---|---|---|---|
| Minecraft | 20 | 50ms | Block-based, turn-ish feel is acceptable |
| Hytale (default) | 30 | 33ms | Faster combat, smoother movement |
| Fast-paced shooters | 60+ | 16ms | Twitch reflexes require precision |
The target TPS is a budget. If the server targets 20 TPS, every tick must complete in under 50 milliseconds. Miss that deadline, and the server starts lagging.
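The budget idea can be sketched as a minimal fixed-timestep loop. This is not Hytale's actual implementation - just the standard pattern: do the tick's work, then sleep for whatever remains of the budget:

```java
public class GameLoop {
    // Runs tickCount ticks at targetTps, sleeping out the unused budget.
    public static void runTicks(int targetTps, int tickCount) throws InterruptedException {
        long budgetMs = 1000L / targetTps; // e.g. 20 TPS -> 50ms per tick

        for (int tick = 0; tick < tickCount; tick++) {
            long start = System.currentTimeMillis();

            // readInput(); updateState(); sendResults();  <- the three phases

            long elapsed = System.currentTimeMillis() - start;
            long remaining = budgetMs - elapsed;
            if (remaining > 0) {
                Thread.sleep(remaining); // finished early: wait out the budget
            }
            // remaining < 0 means the tick blew its budget -> the server lags
        }
    }
}
```

When `remaining` goes negative, there is nothing to sleep away - the loop simply starts the next tick late, and that lateness is what players feel as lag.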
The Single-Thread Bottleneck
Here’s where traditional servers hit their limit. In Minecraft and most similar games, one thread handles everything. All worlds, all players, all entities - processed sequentially on a single CPU core.
Picture a server with three worlds: a massive adventure world with complex builds and AI, a small minigame arena, and a quiet hub for players to gather. In a single-threaded model, the server processes them one after another:
┌─────────────────────────────────────────────────────────────────┐
│ SINGLE-THREADED TICK │
│ │
│ ┌─────────────────────┐ ┌─────────┐ ┌─────────┐ │
│ │ Adventure │ │ Arena │ │ Hub │ │
│ │ (40ms) │ │ (5ms) │ │ (5ms) │ │
│ └─────────────────────┘ └─────────┘ └─────────┘ │
│ ◀────────────────────────────────────────────────▶ │
│ Total: 50ms │
└─────────────────────────────────────────────────────────────────┘
Each world waits its turn. The Arena and Hub finish in 5ms each, but they can’t start until Adventure completes its heavy 40ms of processing. The worlds are coupled even though they have nothing to do with each other.
When Worlds Collide (With Your Tick Budget)
Now imagine a player in the Adventure world builds an elaborate redstone contraption, or a mod spawns thousands of entities for a boss fight. That world’s processing time balloons from 40ms to 60ms:
┌─────────────────────────────────────────────────────────────────────────┐
│ OVERLOADED TICK │
│ │
│ ┌───────────────────────────────────┐ ┌─────────┐ ┌─────────┐ │
│ │ Adventure │ │ Arena │ │ Hub │ │
│ │ (60ms) │ │ (5ms) │ │ (5ms) │ │
│ └───────────────────────────────────┘ └─────────┘ └─────────┘ │
│ ◀────────────────────────────────────────────────────────────────▶ │
│ Total: 70ms (OVER BUDGET!) │
└─────────────────────────────────────────────────────────────────────────┘
The tick now takes 70ms instead of the target 50ms. Every world on the server experiences lag - even the players peacefully chatting in the Hub who have nothing to do with the Adventure world’s chaos.
This is the fundamental problem: one world’s heavy workload punishes all other worlds.
What Lag Feels Like
Players experience tick delays as specific frustrations:
Rubber-banding: You walk forward, then suddenly snap back to where you were. The server’s tick fell behind, and when it caught up, it corrected your position based on stale data.
Delayed actions: You click to attack a mob, but the hit registers half a second late. Your input was processed in one tick, but the response didn’t reach you until several ticks later.
Inventory desync: You move an item in your inventory, but it jumps back. The server was too busy to acknowledge your action in time, so your client’s prediction got overruled.
Entity stuttering: NPCs move in jerky steps instead of smooth motion. Each movement update is spaced too far apart in real time.
These symptoms all trace back to the same root cause: the server couldn’t complete its tick fast enough, so the game world fell out of sync with player expectations.
The Beauty of Simplicity (When It Works)
Despite its limitations, single-threaded execution has one massive advantage: simplicity. When everything runs on one thread, you never have to worry about concurrent access. No race conditions. No deadlocks. No synchronization nightmares.
Consider a simple kill counter that tracks deaths across the entire server:
public class KillCounter {
private int totalKills = 0;
public void onEntityDeath(Entity entity) {
totalKills++;
}
public int getTotalKills() {
return totalKills;
}
}
In a single-threaded world, this code is perfectly safe. Only one thing happens at a time. When onEntityDeath runs, nothing else is touching totalKills. When getTotalKills runs, no other code is modifying the value. The operations are serialized - they happen one after another, never simultaneously.
This simplicity extends to all plugin code. You can read and write to any data structure without locks. You can iterate over collections while adding to them (in some cases). You can trust that the game state won’t change underneath you mid-operation.
The Global Kill Counter Problem
Now let’s preview what happens in Hytale. Remember that simple kill counter and the race condition we discussed earlier? In Hytale, that exact problem occurs when worlds run in parallel:
┌─────────────────────────────────────────────────────────────────┐
│ MULTI-THREADED TICK │
│ │
│ Thread 1: ┌──────────────────────┐ │
│ │ Adventure │ Kill happens! totalKills++│
│ └──────────────────────┘ │
│ │
│ Thread 2: ┌─────────────────────────────┐ │
│ │ Arena │ Kill happens! │
│ └─────────────────────────────┘ totalKills++ │
│ │
│ Thread 3: ┌──────────────┐ │
│ │ Hub │ │
│ └──────────────┘ │
│ │
│ What value does totalKills have? It depends on timing! │
└─────────────────────────────────────────────────────────────────┘
Two kills happen at nearly the same instant - one in Adventure, one in Arena. Both threads try to increment totalKills simultaneously. Since totalKills++ involves three steps (read-increment-write), both threads can read the same value, both add 1, and both write the same result - losing a kill.
Our simple, elegant single-threaded counter is now broken. Fixing it requires thread-safe types - which we’ll explore in the “Protecting Your Plugin Data” section.
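As a quick preview of that fix, here is the same counter rebuilt on an atomic type. The class and method names mirror the earlier example but are illustrative, not Hytale API (the `Entity` parameter is omitted to keep the sketch self-contained):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadSafeKillCounter {
    // AtomicInteger replaces the plain int: increments are indivisible
    private final AtomicInteger totalKills = new AtomicInteger(0);

    public void onEntityDeath(/* Entity entity */) {
        totalKills.incrementAndGet(); // one atomic step: no lost updates
    }

    public int getTotalKills() {
        return totalKills.get();
    }
}
```

Any number of world threads can call `onEntityDeath` concurrently and every kill is counted exactly once.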
Why Not Just Use More TPS?
You might wonder: if single-threading is so limited, why not just run at higher TPS on a beefy CPU? There are two problems.
First, Amdahl’s Law: the speedup you can buy with better hardware is capped by the portion of work that must run serially - and in a single-threaded server, that portion is everything. A CPU core can only execute one instruction stream, so once you’ve optimized your code as far as it can go, you hit a hard ceiling. Meanwhile, adding more players, more entities, and more worlds keeps increasing the workload.
Second, latency vs. throughput: increasing TPS reduces the time between updates, but it doesn’t help when a single tick takes too long. If your Adventure world genuinely needs 60ms to process all its AI and physics, no amount of faster ticks will help - you need parallel processing.
The solution is clear: to scale game servers beyond single-core limits, you need multiple threads. But that solution comes with new rules, new dangers, and new patterns to learn. That’s what Hytale brings - and what we’ll explore next.
Hytale’s Solution: Multi-World Threading
Hytale takes a fundamentally different approach: each World gets its own thread. This is not just a minor optimization - it is a complete architectural shift that changes how you think about game servers.
One Thread Per World
Instead of cramming everything onto a single thread, Hytale assigns a dedicated thread to each World:
Traditional Server:
┌─────────────────────────────────────────────────────┐
│ Main Thread │
│ [World A tick] → [World B tick] → [World C tick] │
│ (sequential - each world waits for the last) │
└─────────────────────────────────────────────────────┘
Hytale Server:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Thread A │ │ Thread B │ │ Thread C │
│ [World A] │ │ [World B] │ │ [World C] │
│ (parallel) │ │ (parallel) │ │ (parallel) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
The implications are immediate and profound. If World A has a complex redstone contraption causing lag, World B doesn’t notice - they run in parallel on separate CPU cores. Players in the hub world won’t experience stuttering because someone in the survival world triggered an expensive operation.
This is true parallelism. On a quad-core CPU, four worlds can execute simultaneously. On an 8-core server, eight worlds run at full speed. The server scales with hardware rather than hitting a single-thread ceiling.
The Server Hierarchy
To understand how this works, let’s look at Hytale’s server architecture from top to bottom:
flowchart TB
subgraph MainThread["Main Thread"]
HS["HytaleServer (Singleton)"]
SE["SCHEDULED_EXECUTOR<br/>(background tasks)"]
end
subgraph NoThread["No Own Thread"]
U["Universe (Singleton)"]
PM["ConcurrentHashMap<UUID, PlayerRef><br/>(all connected players)"]
WM["Map<String, World><br/>(all worlds)"]
end
subgraph WorldThreads["Separate Threads"]
subgraph WA["World A Thread"]
WA_W["World 'hub'"]
WA_E["EntityStore"]
WA_C["ChunkStore"]
WA_Q["taskQueue"]
end
subgraph WB["World B Thread"]
WB_W["World 'survival'"]
WB_E["EntityStore"]
WB_C["ChunkStore"]
WB_Q["taskQueue"]
end
subgraph WC["World C Thread"]
WC_W["World 'minigame'"]
WC_E["EntityStore"]
WC_C["ChunkStore"]
WC_Q["taskQueue"]
end
end
HS --> U
HS --> SE
U --> PM
U --> WM
WM --> WA_W
WM --> WB_W
WM --> WC_W
Let’s break down each layer:
HytaleServer is a singleton that lives on the main thread. It bootstraps the server, manages the lifecycle, and provides access to a SCHEDULED_EXECUTOR for background tasks that don’t belong to any specific world (like periodic saves or cleanup operations).
Universe is also a singleton, but it doesn’t have its own thread - it’s a coordination layer. The Universe holds the master registry of all connected players (using a ConcurrentHashMap for thread-safe access) and a map of all loaded worlds. When you need to find a player by UUID (Universally Unique Identifier) or retrieve a world by name, you go through the Universe.
Each World runs on its own dedicated thread. This is where the magic happens. The World owns an EntityStore (the ECS - Entity Component System), a ChunkStore (terrain data), and a taskQueue for scheduling work. Every entity, every block, every tick for that world happens on its thread and only its thread.
Thread-Binding: The Safety Mechanism
Here’s where Hytale’s threading model gets interesting. Every Store records which thread created it and refuses to operate on any other thread.
The actual implementation looks something like this:
public class Store<T> {
// Captured at construction time
private final Thread thread = Thread.currentThread();
// Called before EVERY operation
private void assertThread() {
if (Thread.currentThread() != this.thread) {
throw new IllegalStateException(
"Store accessed from wrong thread! " +
"Expected: " + thread.getName() +
", Got: " + Thread.currentThread().getName()
);
}
}
public T getComponent(Ref<T> ref, ComponentType<T, ?> type) {
assertThread(); // Check first!
// ... actual component retrieval
}
public void addComponent(Ref<T> ref, ComponentType<T, ?> type, Object component) {
assertThread(); // Check first!
// ... actual component addition
}
public void removeComponent(Ref<T> ref, ComponentType<T, ?> type) {
assertThread(); // Check first!
// ... actual component removal
}
public void tick(float deltaTime) {
assertThread(); // Check first!
// ... process all systems
}
}
This check runs on every single operation: getComponent, addComponent, removeComponent, hasComponent, tick - everything. There’s no way to accidentally access the store from the wrong thread without triggering this guard.
The same pattern applies to ChunkStore:
public class ChunkStore {
private final Thread thread = Thread.currentThread();
private void assertThread() {
if (Thread.currentThread() != this.thread) {
throw new IllegalStateException(
"ChunkStore accessed from wrong thread!"
);
}
}
public Block getBlock(int x, int y, int z) {
assertThread();
// ... get block data
}
public void setBlock(int x, int y, int z, Block block) {
assertThread();
// ... set block data
}
}
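The guard pattern is easy to try in plain Java. The class below is a stripped-down stand-in (illustrative names, not Hytale's classes) that captures its creating thread and rejects everyone else, just like the stores above:

```java
public class ThreadBoundBox {
    // Captured at construction time, exactly like Hytale's Store
    private final Thread owner = Thread.currentThread();
    private int value = 0;

    private void assertThread() {
        if (Thread.currentThread() != owner) {
            throw new IllegalStateException(
                "Accessed from wrong thread! Expected: " + owner.getName()
                + ", Got: " + Thread.currentThread().getName());
        }
    }

    public void set(int v) {
        assertThread(); // check first!
        value = v;
    }

    public int get() {
        assertThread(); // check first!
        return value;
    }
}
```

Calls from the constructing thread succeed; the very first call from any other thread throws with a stack trace naming both threads - the fail-fast behavior discussed next.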
Why Crashing is Better Than Silent Corruption
You might think: “An immediate crash seems harsh. Why not just log a warning?”
Consider the alternative. Without the assertThread() check, your plugin might:
- Read a component while another thread is writing to it (torn read)
- See half-updated data that makes no sense
- Make decisions based on corrupted state
- Write corrupted data back
- The corruption spreads through the game state
- Eventually something weird happens - an entity teleports randomly, a player loses items, scores don’t add up
- You spend hours debugging, unable to reproduce the issue
Race conditions are the worst bugs in software. They’re non-deterministic - they might happen once every thousand operations. They’re hard to reproduce because they depend on exact timing. They’re impossible to debug because the evidence is corrupted state far removed from the actual bug.
The crash is your friend. When you see:
IllegalStateException: Store accessed from wrong thread!
Expected: world-hub-thread, Got: world-survival-thread
at Store.assertThread(Store.java:42)
at Store.getComponent(Store.java:67)
at MyPlugin.onPlayerKill(MyPlugin.java:123)
This stack trace tells you exactly what went wrong, exactly where, and exactly which threads were involved. You can fix it immediately. No mystery, no hunting for symptoms of corruption.
This is fail-fast design philosophy: crash loudly at the first sign of misuse rather than silently producing incorrect results. In the threading world, it’s the difference between a 5-minute fix and a 5-day debugging session.
The Power of Isolation
Thread-per-world isolation provides several powerful benefits:
Independent Performance: Each world’s performance is isolated. A player building a complex contraption in one world doesn’t affect players in another. Lag stays local.
Different TPS Per World: Since each world ticks independently, you can run them at different speeds. Your main hub might run at 30 TPS to save resources since it’s mostly decorative. A competitive minigame world might run at 60 TPS for smoother gameplay. A creative building world might run at 20 TPS since precise timing doesn’t matter.
// Conceptually, each world can have its own tick rate
worldHub.setTargetTPS(30); // Decorative, low priority
worldPvP.setTargetTPS(60); // Competitive, high priority
worldCreative.setTargetTPS(20); // Building, low priority
True Parallelism: On modern multi-core CPUs, this architecture shines. A 16-core server can run 16 worlds at full speed simultaneously. Traditional single-threaded servers waste 15 cores while one does all the work.
Clear Boundaries: The thread-per-world model creates natural boundaries. You never wonder “can I call this method here?” - if it touches a World’s data, you must be on that World’s thread. The architecture itself enforces correct usage.
Easier Reasoning: Once you understand the model, it’s actually simpler than ad-hoc threading. You don’t need to track which locks protect which data, or worry about deadlocks from lock ordering. Each World is a self-contained island that only its thread can touch.
Visualizing Thread Ownership
Let’s make this concrete with an example. Consider a server with three worlds:
┌─────────────────────────────────────────────────────────────────┐
│ Universe │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ Players: { uuid_1: PlayerRef_Steve, │ │
│ │ uuid_2: PlayerRef_Alex, │ │
│ │ uuid_3: PlayerRef_Zombie } │ │
│ │ (ConcurrentHashMap - thread-safe) │ │
│ └─────────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
│ │ │
▼ ▼ ▼
┌───────────────────┐ ┌───────────────────┐ ┌───────────────────┐
│ Thread: hub │ │ Thread: survival │ │ Thread: minigame │
├───────────────────┤ ├───────────────────┤ ├───────────────────┤
│ World "hub" │ │ World "survival" │ │ World "minigame" │
│ ├─ EntityStore │ │ ├─ EntityStore │ │ ├─ EntityStore │
│ │ └─ Steve (E1) │ │ │ └─ Alex (E47) │ │ │ └─ Zombie (E12)│
│ │ │ │ │ └─ Pig (E48) │ │ │ └─ Target (E13)│
│ ├─ ChunkStore │ │ ├─ ChunkStore │ │ ├─ ChunkStore │
│ └─ 30 TPS │ │ └─ 20 TPS │ │ └─ 60 TPS │
└───────────────────┘ └───────────────────┘ └───────────────────┘
Steve is in the hub. If your plugin code runs on the hub thread, you can access Steve’s entity freely. But if that same code tries to access Alex (who’s in survival), you’ll get the IllegalStateException - survival’s EntityStore belongs to the survival thread.
The PlayerRef objects in the Universe are different. They’re designed for cross-thread access (using thread-safe patterns internally). You can look up a player by UUID from any thread. But the moment you want to access their entity data - their position, health, inventory - you must be on their World’s thread.
The Contract is Clear
Hytale’s threading model establishes a simple contract:
- Universe data (player lookups, world registry) is thread-safe - access from anywhere
- World data (entities, components, chunks) is thread-bound - access only from the World’s thread
- Cross-world operations require explicit coordination through task queues or events
- Violations crash immediately with a clear error message
This contract might seem restrictive, but it’s actually liberating. You never have to wonder about thread safety for World data - it’s guaranteed safe because only one thread can touch it. You never have to add locks or synchronization to your plugin code that works within a single World.
The complexity only emerges when you need to span multiple worlds - which is exactly when you should be thinking carefully about threading anyway. The architecture guides you toward correct patterns.
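The task-queue side of that contract can be sketched in a few lines. This is a simplified model, not Hytale's implementation - the names `execute` and `drainTasks` are hypothetical - but it captures the idea: any thread may enqueue work, and the world's own thread drains the queue each tick:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class WorldTaskQueue {
    // Thread-safe queue: ANY thread may add tasks concurrently
    private final ConcurrentLinkedQueue<Runnable> tasks = new ConcurrentLinkedQueue<>();

    /** Safe to call from any thread: just enqueues, touches no world state. */
    public void execute(Runnable task) {
        tasks.add(task);
    }

    /** Called once per tick, ON the world's own thread. */
    public void drainTasks() {
        Runnable task;
        while ((task = tasks.poll()) != null) {
            task.run(); // runs on the world thread, so ECS access is legal here
        }
    }
}
```

The queue is the bridge between the two zones: crossing it moves your code from "any thread" territory into the world thread, where entity and chunk access becomes legal again.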
The New Rules
Now that you understand why Hytale uses multi-world threading, let’s get practical. There are exactly two categories of operations, and knowing which is which will save you hours of debugging mysterious crashes.
Two Categories: Always Safe vs. World Thread Only
Always Safe (From Any Thread):
// Getting player references - backed by ConcurrentHashMap
PlayerRef player = Universe.get().getPlayer(uuid);
// Sending messages - network operations are thread-safe
player.sendMessage(Message.raw("Hello!"));
// Reading immutable data - never changes after creation
String username = player.getUsername();
UUID playerId = player.getUUID();
// Scheduling future work - this is your main tool
HytaleServer.SCHEDULED_EXECUTOR.schedule(() -> {
// This runs on the executor thread
}, 5, TimeUnit.SECONDS);
These operations work because they’re designed for concurrent access. Universe.get() returns a singleton with internal ConcurrentHashMap storage. Network operations queue messages for transmission without touching game state. Immutable data like usernames can’t change, so there’s no race condition possible.
World Thread Only (Will Crash Otherwise):
// Reading entity data
TransformComponent transform = store.getComponent(ref, TransformComponent.getComponentType());
// Modifying entities
store.addComponent(ref, PoisonedComponent.getComponentType(), new PoisonedComponent());
// Changing entity structure
store.removeComponent(ref, SpeedBoostComponent.getComponentType());
// Checking entity state
boolean hasPoison = store.hasComponent(ref, PoisonedComponent.getComponentType());
All ECS operations - reading components, adding components, removing components, querying entities - must happen on the World’s thread. The EntityStore is not thread-safe. Call these from the wrong thread and you’ll get crashes, corrupted data, or both.
The Boundary Diagram
Think of the threading model as two zones separated by a bridge:
╔═══════════════════════════════════════════════════════════════╗
║ ANY THREAD ║
║ ║
║ Universe.get() ✓ Always safe ║
║ Universe.get().getPlayer(uuid) ✓ Always safe ║
║ player.sendMessage(...) ✓ Always safe ║
║ player.getUsername() ✓ Always safe ║
║ HytaleServer.SCHEDULED_EXECUTOR.schedule() ✓ Always safe ║
║ ║
╠═══════════════════════════════════════════════════════════════╣
║ ║
║ world.execute() ║
║ THE BRIDGE ║
║ ║
╠═══════════════════════════════════════════════════════════════╣
║ WORLD THREAD ONLY ║
║ ║
║ store.getComponent(ref, type) ✗ Crashes if ║
║ store.addComponent(ref, type, instance) called from ║
║ store.removeComponent(ref, type) wrong thread ║
║ store.hasComponent(ref, type) ║
║ Entity position, health, inventory ║
║ All ECS operations ║
║ ║
╚═══════════════════════════════════════════════════════════════╝
The key insight is that world.execute() is your bridge between these zones. It queues a piece of code to run on the World’s next tick, where ECS operations are safe.
Using world.execute()
The world.execute() method is how you safely cross from any thread into the World thread. It queues your code to run on the World’s next tick:
// From any thread - maybe a scheduled task, HTTP callback, or async operation
world.execute(() -> {
// This code runs on the World thread
// ECS operations are safe here
Store<EntityStore> store = world.getEntityStore().getStore();
HealthComponent health = store.getComponent(playerRef, HealthComponent.getComponentType());
health.setCurrent(health.getMax()); // Full heal
});
The code inside execute() doesn’t run immediately - it’s queued and processed when the World reaches that point in its tick cycle. This is usually fast (within one tick, roughly 50ms at 20 TPS), but don’t expect synchronous behavior.
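To build intuition for this queue-and-drain behavior, here is a toy model of a per-world task queue in plain Java. This is an illustration of the concept only, not Hytale's actual implementation — the class name `ToyWorld` and its tick loop are invented for the demo:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Toy model: tasks submitted from any thread are queued, then drained
// on the world's own thread once per "tick". Illustration only - this
// is NOT Hytale's actual implementation.
public class ToyWorld {
    private final ConcurrentLinkedQueue<Runnable> tasks = new ConcurrentLinkedQueue<>();

    // Safe from any thread: just enqueues the work, never runs it inline
    public void execute(Runnable task) {
        tasks.add(task);
    }

    // Called only by the world's own thread, once per tick
    public void tick() {
        Runnable task;
        while ((task = tasks.poll()) != null) {
            task.run(); // runs on the world thread
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ToyWorld world = new ToyWorld();
        StringBuilder ranOn = new StringBuilder();

        // Submit from a foreign thread - this only queues the task
        Thread other = new Thread(() -> world.execute(
                () -> ranOn.append(Thread.currentThread().getName())));
        other.start();
        other.join();

        // Simulate the world thread running its tick
        Thread worldThread = new Thread(world::tick, "World-hub");
        worldThread.start();
        worldThread.join();

        System.out.println("Task ran on: " + ranOn); // Task ran on: World-hub
    }
}
```

The key property the model captures: calling execute() is cheap and thread-safe, but the submitted code does not run until the owning thread next drains its queue.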
Here’s a real pattern: scheduling a delayed effect that modifies entity state.
// Apply a damage-over-time effect that triggers after 3 seconds
public void applyDelayedPoison(World world, Ref<EntityStore> targetRef) {
HytaleServer.SCHEDULED_EXECUTOR.schedule(() -> {
// We're on the executor thread now, NOT the World thread
// Can't touch ECS here!
world.execute(() -> {
// NOW we're on the World thread - ECS is safe
Store<EntityStore> store = world.getEntityStore().getStore();
// Check if target still exists (they might have disconnected)
if (targetRef.isValid()) {
store.addComponent(
targetRef,
PoisonedComponent.getComponentType(),
new PoisonedComponent(5, 10.0f) // 5 damage per tick, 10 seconds
);
}
});
}, 3, TimeUnit.SECONDS);
}
Notice the nesting: SCHEDULED_EXECUTOR.schedule() runs the outer lambda after 3 seconds on the executor thread. Inside that, world.execute() queues the inner lambda to run on the World thread. This two-step dance is essential whenever you need to combine timing with ECS operations.
The Most Common Mistake
Side by side, wrong versus correct:
WRONG - Direct ECS access from executor thread:
// DON'T DO THIS - will crash or corrupt data
HytaleServer.SCHEDULED_EXECUTOR.schedule(() -> {
Store<EntityStore> store = world.getEntityStore().getStore();
HealthComponent health = store.getComponent(ref, HealthComponent.getComponentType());
health.setCurrent(0); // BOOM - wrong thread!
}, 5, TimeUnit.SECONDS);
CORRECT - Bridge through world.execute():
// DO THIS - safe thread transition
HytaleServer.SCHEDULED_EXECUTOR.schedule(() -> {
world.execute(() -> {
Store<EntityStore> store = world.getEntityStore().getStore();
HealthComponent health = store.getComponent(ref, HealthComponent.getComponentType());
health.setCurrent(0); // Safe - on World thread
});
}, 5, TimeUnit.SECONDS);
The only difference is the world.execute() wrapper. It’s easy to forget, especially when you’re deep in callback chains. Make it a habit: if you’re scheduling something that touches entities, wrap it in world.execute().
Quick Reference Table
| Task | Safe from any thread? | Pattern |
|---|---|---|
| Get a player by UUID | Yes | Universe.get().getPlayer(uuid) |
| Send a message | Yes | player.sendMessage(...) |
| Read player username | Yes | player.getUsername() |
| Schedule delayed code | Yes | SCHEDULED_EXECUTOR.schedule(...) |
| Read entity component | No | world.execute(() -> store.getComponent(...)) |
| Add component to entity | No | world.execute(() -> store.addComponent(...)) |
| Remove component | No | world.execute(() -> store.removeComponent(...)) |
| Check if entity has component | No | world.execute(() -> store.hasComponent(...)) |
| Modify entity position | No | world.execute(() -> transform.setPosition(...)) |
| Deal damage to entity | No | world.execute(() -> ...) |
| Spawn new entity | No | world.execute(() -> ...) |
| Query entities | No | world.execute(() -> ...) |
The pattern is simple: anything that touches the EntityStore or entity data needs world.execute(). Everything else is probably fine.
Checking Which Thread You’re On
When debugging threading issues, it helps to know which thread your code is actually running on:
public void debugThreadInfo() {
String threadName = Thread.currentThread().getName();
System.out.println("Running on: " + threadName);
// World threads are named like "World-adventure" or "World-hub"
// The executor thread is named "HytaleScheduledExecutor"
// The main server thread handles network I/O
}
You can also add assertions during development to catch threading violations early:
public void assertWorldThread(World world) {
String expected = "World-" + world.getName();
String actual = Thread.currentThread().getName();
if (!actual.equals(expected)) {
throw new IllegalStateException(
"Expected World thread '" + expected + "' but running on '" + actual + "'"
);
}
}
Use this in your plugin’s internal methods to fail fast during testing rather than encountering mysterious production bugs:
public void healPlayer(World world, Ref<EntityStore> playerRef) {
assertWorldThread(world); // Crash immediately if called from wrong thread
Store<EntityStore> store = world.getEntityStore().getStore();
HealthComponent health = store.getComponent(playerRef, HealthComponent.getComponentType());
health.setCurrent(health.getMax());
}
When Systems Are Your Friend
Remember that ECS Systems always run on the correct World thread. If your logic fits the System pattern - processing entities with specific components - you don’t need world.execute() at all:
public class HealingAuraSystem extends EntityTickingSystem<EntityStore> {
@Override
public Query<EntityStore> getQuery() {
return Query.and(
HealthComponent.getComponentType(),
HealingAuraComponent.getComponentType()
);
}
@Override
public void tick(float dt, int index, ArchetypeChunk<EntityStore> chunk,
Store<EntityStore> store, CommandBuffer<EntityStore> cmd) {
// Already on World thread - no execute() needed
HealthComponent health = chunk.getComponent(index, HealthComponent.getComponentType());
HealingAuraComponent aura = chunk.getComponent(index, HealingAuraComponent.getComponentType());
health.setCurrent(Math.min(
health.getCurrent() + aura.getHealPerSecond() * dt,
health.getMax()
));
}
}
The tick method runs on the World thread by definition. You only need world.execute() when you’re crossing into ECS territory from outside - scheduled tasks, event handlers, network callbacks, or cross-world operations.
Summary: The Mental Model
- Ask yourself: Does this code touch entity data (components, positions, health, inventory)?
- If yes: Make sure you’re on the World thread. Inside a System? You’re fine. Coming from a scheduler, callback, or different world? Use world.execute().
- If no: You’re probably safe. Universe.get(), sendMessage(), and the scheduler are designed for concurrent access.
- When in doubt: Wrap it in world.execute(). The overhead is minimal, and it’s far better than a crash.
The threading model isn’t complicated once you internalize the boundary. Safe operations on one side, ECS on the other, and world.execute() is your bridge.
Protecting Your Plugin Data
Hytale’s architecture protects the ECS beautifully. Each World’s EntityStore runs on its own thread, and world.execute() safely schedules operations. But here’s the catch: what about YOUR plugin’s data?
Let’s return to our Global Kill Counter. We want to track kills across all worlds - a simple integer that goes up whenever anything dies. Seems easy:
public class KillTrackerPlugin {
private int globalKills = 0;
public void onEntityKilled(World world) {
globalKills++;
}
}
This code has a serious bug. When World A’s thread and World B’s thread both call onEntityKilled() at the same time, they’re both modifying globalKills simultaneously. That’s a race condition.
Why globalKills++ Is Dangerous
As we saw in the threading basics section, globalKills++ is actually three operations (read-increment-write) that can interleave between threads, causing lost updates. This is the exact race condition we previewed in the single-threaded section - now it’s time to fix it.
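You can reproduce the lost-update problem in plain Java, no Hytale required. Ten threads each increment both a plain int and an AtomicInteger 100,000 times; the atomic counter always lands on exactly 1,000,000, while the plain counter usually loses increments:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates the lost-update race on a plain int counter versus
// the atomic equivalent. Ten threads each increment 100,000 times.
public class LostUpdateDemo {
    static int plainCounter = 0;
    static final AtomicInteger atomicCounter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    plainCounter++;                  // read-increment-write: racy
                    atomicCounter.incrementAndGet(); // single atomic step: safe
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();

        // The atomic counter is always exactly 1,000,000.
        // The plain counter is usually LESS - increments were lost.
        System.out.println("plain:  " + plainCounter);
        System.out.println("atomic: " + atomicCounter.get());
    }
}
```

Run it a few times: the plain counter's deficit varies from run to run, which is exactly what makes race conditions so hard to debug.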
Java’s Solution: Atomic Types
Java provides special classes in java.util.concurrent.atomic that guarantee operations complete without interruption. For our kill counter, we need AtomicInteger:
// WRONG — Race condition!
private int globalKills = 0;
globalKills++;
// CORRECT — Atomic, can't be interrupted
private AtomicInteger globalKills = new AtomicInteger(0);
globalKills.incrementAndGet();
The incrementAndGet() method performs all three steps (read, increment, write) as a single, atomic operation. No other thread can see an intermediate state or cause a lost update.
AtomicInteger Operations
AtomicInteger provides several useful methods beyond basic increment:
| Method | What it does | Example |
|---|---|---|
| get() | Read the current value | int count = globalKills.get(); |
| set(value) | Set to a specific value | globalKills.set(0); |
| incrementAndGet() | Add 1, return new value | int newCount = globalKills.incrementAndGet(); |
| getAndIncrement() | Return current value, then add 1 | int oldCount = globalKills.getAndIncrement(); |
| addAndGet(delta) | Add delta, return new value | globalKills.addAndGet(5); |
| compareAndSet(expect, update) | Set only if current value matches expect | globalKills.compareAndSet(10, 0); |
The compareAndSet() method deserves special attention - it’s the foundation of lock-free programming. It says: “If the current value is X, change it to Y and return true. Otherwise, do nothing and return false.” This single operation enables many powerful patterns.
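Here's a small sketch of the classic CAS retry loop built on compareAndSet(): an "increment, but never exceed a cap" counter (the class name BoundedCounter is invented for this example). If another thread changes the value between our read and our CAS, the CAS fails and we simply retry with the fresh value:

```java
import java.util.concurrent.atomic.AtomicInteger;

// A lock-free "increment but never exceed max" built from compareAndSet.
public class BoundedCounter {
    private final AtomicInteger value = new AtomicInteger(0);
    private final int max;

    public BoundedCounter(int max) { this.max = max; }

    /** Returns true if the increment happened, false if already at max. */
    public boolean tryIncrement() {
        while (true) {
            int current = value.get();
            if (current >= max) {
                return false;          // cap reached - give up
            }
            if (value.compareAndSet(current, current + 1)) {
                return true;           // our CAS won the race
            }
            // CAS lost: another thread updated the value - loop and retry
        }
    }

    public int get() { return value.get(); }

    public static void main(String[] args) {
        BoundedCounter slots = new BoundedCounter(2);
        System.out.println(slots.tryIncrement()); // true
        System.out.println(slots.tryIncrement()); // true
        System.out.println(slots.tryIncrement()); // false - cap of 2 reached
    }
}
```

No locks, no blocking: a failed CAS just means "retry with the latest value," which is the essence of lock-free programming.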
Real Hytale Example: One-Time Initialization
Hytale’s own TeleportPlugin uses AtomicBoolean to ensure warps are loaded exactly once, even if multiple worlds try to initialize simultaneously:
public class TeleportPlugin extends ServerPlugin {
private final AtomicBoolean loaded = new AtomicBoolean();
@Override
public void onWorldLoaded(World world) {
if (loaded.compareAndSet(false, true)) {
// This block runs exactly once, no matter how many
// worlds load or how many threads call this method
initializeWarps();
}
}
private void initializeWarps() {
// Load warp points from disk...
}
}
Without AtomicBoolean, two worlds loading simultaneously could both see loaded == false, both enter the initialization block, and initialize twice. The compareAndSet() guarantees only one thread wins the race.
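You can verify this guarantee with plain Java: race eight threads through the same compareAndSet(false, true) gate and count how many get in. Exactly one always wins:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Reproduces the one-time-init pattern: many threads race to initialize,
// but compareAndSet(false, true) lets exactly one through.
public class OneTimeInitDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean loaded = new AtomicBoolean();
        AtomicInteger initCount = new AtomicInteger();

        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                if (loaded.compareAndSet(false, true)) {
                    initCount.incrementAndGet(); // the "initializeWarps()" work
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();

        System.out.println("initialized " + initCount.get() + " time(s)"); // always 1
    }
}
```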
Per-Player Data: ConcurrentHashMap
Our kill counter should also track kills per player. The obvious approach uses a regular HashMap:
// WRONG — HashMap is NOT thread-safe
private Map<UUID, Integer> playerKills = new HashMap<>();
public void recordKill(UUID playerUuid) {
Integer current = playerKills.get(playerUuid);
if (current == null) {
playerKills.put(playerUuid, 1);
} else {
playerKills.put(playerUuid, current + 1);
}
}
This has multiple race conditions. Two threads could both see null and both initialize to 1 (losing a kill). The HashMap itself isn’t thread-safe - concurrent modifications can corrupt its internal structure, causing infinite loops or lost data.
The fix is ConcurrentHashMap, which handles all the synchronization internally:
// CORRECT — ConcurrentHashMap handles concurrent access
private Map<UUID, Integer> playerKills = new ConcurrentHashMap<>();
But we also need to fix the increment logic. The read-modify-write pattern (get, add 1, put) still has a race condition even with ConcurrentHashMap. The solution is the merge() method:
// Atomic increment pattern:
playerKills.merge(playerUuid, 1, Integer::sum);
This single line says: “If the key exists, apply the function (Integer::sum) to combine the existing value with 1. If the key does not exist, use 1 as the initial value.” The entire operation is atomic.
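A quick runnable check of that atomicity: two threads hammering merge() on the same key never lose an update, which is exactly what the get-then-put pattern cannot promise:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Shows that merge() is atomic: two threads incrementing the same key
// 50,000 times each always produce exactly 100,000.
public class MergeDemo {
    public static void main(String[] args) throws InterruptedException {
        Map<UUID, Integer> kills = new ConcurrentHashMap<>();
        UUID player = UUID.randomUUID();

        Runnable recordKills = () -> {
            for (int i = 0; i < 50_000; i++) {
                kills.merge(player, 1, Integer::sum); // atomic read-modify-write
            }
        };
        Thread worldA = new Thread(recordKills, "World-A");
        Thread worldB = new Thread(recordKills, "World-B");
        worldA.start(); worldB.start();
        worldA.join(); worldB.join();

        System.out.println(kills.get(player)); // 100000 - nothing lost
    }
}
```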
Other useful ConcurrentHashMap methods:
| Method | What it does |
|---|---|
| putIfAbsent(key, value) | Add only if key doesn’t exist |
| computeIfAbsent(key, mappingFunction) | Compute and store a value only if key is absent |
| compute(key, remappingFunction) | Atomically compute a new value |
| merge(key, value, remappingFunction) | Merge value with existing (perfect for counters) |
Simple Flags with volatile
Sometimes you need a simple boolean flag that one thread sets and others read - like an “enabled” toggle:
private volatile boolean trackingEnabled = true;
public void setTrackingEnabled(boolean enabled) {
trackingEnabled = enabled;
}
public void onEntityKilled(World world) {
if (!trackingEnabled) return;
// ... track the kill
}
The volatile keyword ensures two things:
- Visibility: When one thread writes to the variable, other threads immediately see the new value
- No caching: Threads don’t cache the variable in CPU registers - they always read from main memory
Without volatile, Thread A might write trackingEnabled = false, but Thread B could continue seeing true because it cached the old value. This is called a visibility problem.
However, volatile does NOT make compound operations atomic. trackingEnabled = !trackingEnabled is still a race condition (read, negate, write). Use AtomicBoolean if you need atomic toggle operations.
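If you do need an atomic toggle, a sketch with AtomicBoolean and a CAS retry loop (the class name AtomicToggle is invented for this example — AtomicBoolean has no built-in negate):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// An atomic toggle: "read, negate, write" on a volatile boolean can
// interleave, so we retry compareAndSet until our negation wins.
public class AtomicToggle {
    private final AtomicBoolean flag = new AtomicBoolean(false);

    /** Flips the flag atomically and returns the value we wrote. */
    public boolean toggle() {
        while (true) {
            boolean current = flag.get();
            if (flag.compareAndSet(current, !current)) {
                return !current;
            }
            // CAS lost: another thread toggled first - retry
        }
    }

    public boolean get() { return flag.get(); }

    public static void main(String[] args) throws InterruptedException {
        AtomicToggle tracking = new AtomicToggle();

        // An even number of atomic toggles always lands back at false
        Runnable toggler = () -> { for (int i = 0; i < 1000; i++) tracking.toggle(); };
        Thread t1 = new Thread(toggler);
        Thread t2 = new Thread(toggler);
        t1.start(); t2.start(); t1.join(); t2.join();

        System.out.println(tracking.get()); // false - all 2000 toggles counted
    }
}
```

With a plain volatile boolean, the same test would intermittently end on true because concurrent negations can overwrite each other.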
Quick Reference: Which Type to Use
| You need… | Use | Example |
|---|---|---|
| Thread-safe counter | AtomicInteger | Kill counts, online player count |
| Thread-safe boolean | AtomicBoolean | One-time initialization flags |
| Thread-safe long | AtomicLong | Large counters, timestamps |
| Thread-safe per-key data | ConcurrentHashMap | Per-player stats, per-world data |
| Simple read-only flag | volatile boolean | Feature toggles, shutdown flags |
| Complex object swaps | AtomicReference<T> | Replacing entire config objects |
The Complete Kill Counter
Let’s put it all together. Here’s our Global Kill Counter with proper thread safety:
Complete KillTrackerPlugin.java:
public class KillTrackerPlugin extends ServerPlugin {
// Global counter - atomic operations only
private final AtomicInteger globalKills = new AtomicInteger(0);
// Per-player stats - concurrent map with atomic updates
private final Map<UUID, Integer> playerKills = new ConcurrentHashMap<>();
// Feature toggle - simple visibility guarantee
private volatile boolean trackingEnabled = true;
public void onEntityKilled(World world, UUID killerUuid) {
if (!trackingEnabled) return;
// Safe: atomic increment
globalKills.incrementAndGet();
// Safe: atomic merge
playerKills.merge(killerUuid, 1, Integer::sum);
}
public int getGlobalKills() {
// Safe: atomic read
return globalKills.get();
}
public int getPlayerKills(UUID playerUuid) {
// Safe: ConcurrentHashMap handles this
return playerKills.getOrDefault(playerUuid, 0);
}
public void setTrackingEnabled(boolean enabled) {
// Safe: volatile write is immediately visible
trackingEnabled = enabled;
}
public void resetStats() {
// Safe: atomic set
globalKills.set(0);
// Safe: ConcurrentHashMap.clear() is thread-safe
playerKills.clear();
}
}
Notice how each field uses the appropriate thread-safe type for its access pattern. The code is simple, readable, and correct.
The Mental Model
When you’re writing plugin code, ask yourself: “Where does this data live?”
- World data (entities, components, blocks) → Use world.execute() to access safely
- Plugin data (your own fields) → Use thread-safe types (Atomic*, Concurrent*, volatile)
flowchart TD
subgraph "Your Plugin"
A[globalKills: AtomicInteger]
B[playerKills: ConcurrentHashMap]
C[trackingEnabled: volatile]
end
subgraph "World 1 Thread"
D[EntityStore]
E[world.execute]
end
subgraph "World 2 Thread"
F[EntityStore]
G[world.execute]
end
D --> A
D --> B
F --> A
F --> B
E --> D
G --> F
Multiple world threads can safely access your plugin’s thread-safe fields simultaneously. Each world’s EntityStore is protected by its own thread. The boundaries are clear.
This separation is the key insight: Hytale handles the hard part (ECS synchronization), and Java’s concurrent utilities handle your plugin state. You just need to use the right types.
Patterns for Modders
Now that you understand the theory, let’s put it into practice. We’ll build out our Global Kill Counter plugin and explore the patterns you’ll use daily when writing thread-safe Hytale code.
The Complete Kill Counter System
Here’s our full implementation using Hytale’s ECS. The DeathSystems.OnDeathSystem runs when any entity dies, letting us count kills across all worlds:
Complete KillTrackerSystem.java:
public class KillTrackerSystem extends DeathSystems.OnDeathSystem {
// Thread-safe counters - accessed from multiple World threads!
private final AtomicInteger globalKills = new AtomicInteger(0);
private final Map<UUID, Integer> playerKills = new ConcurrentHashMap<>();
@Override
public void onComponentAdded(Ref<EntityStore> victimRef, DeathComponent death,
Store<EntityStore> store, CommandBuffer<EntityStore> cmd) {
// Get the damage source - who killed this entity?
Ref<EntityStore> sourceRef = death.getSource();
if (sourceRef == null || !sourceRef.isValid()) {
return; // No source = not a kill
}
// Check if killer is a player
PlayerRef playerRef = store.getComponent(sourceRef, PlayerRef.getComponentType());
if (playerRef == null) {
return; // Only count player kills
}
// Increment global counter (atomic - safe from any thread)
int newTotal = globalKills.incrementAndGet();
// Increment the player's personal counter atomically; merge() returns the
// updated value, avoiding a second lookup that could race with other kills
UUID playerId = playerRef.getUuid();
int personalKills = playerKills.merge(playerId, 1, Integer::sum);
// Send feedback to the player
playerRef.sendMessage(Message.raw(
"Kill #" + personalKills + "! (Server total: " + newTotal + ")"
));
}
public int getGlobalKills() {
return globalKills.get();
}
public int getPlayerKills(UUID playerId) {
return playerKills.getOrDefault(playerId, 0);
}
// Read-only view of all per-player counts (used by the leaderboard task)
public Map<UUID, Integer> getPlayerKills() {
return Collections.unmodifiableMap(playerKills);
}
}
Notice how the AtomicInteger and ConcurrentHashMap handle thread safety automatically. When a player in World A and a player in World B get kills simultaneously, both increments happen correctly without data corruption.
Pattern 1: Spawning Entities from Any Thread
We covered world.execute() in “The New Rules” - here’s a practical example from the TeleportPlugin:
public void spawnWarpMarker(World targetWorld, Vector3d position, String warpName) {
// Queue this to run on targetWorld's thread
targetWorld.execute(() -> {
Store<EntityStore> store = targetWorld.getEntityStore().getStore();
// Now we're on the correct thread - safe to create entities!
Ref<EntityStore> markerRef = store.createEntity();
// Add components
TransformComponent transform = new TransformComponent();
transform.getPosition().set(position);
store.addComponent(markerRef, TransformComponent.getComponentType(), transform);
WarpMarkerComponent warp = new WarpMarkerComponent(warpName);
store.addComponent(markerRef, WarpMarkerComponent.getComponentType(), warp);
});
}
Cross-World Communication
Sometimes you need to notify other Worlds about events. From the PortalsPlugin:
public void notifyWorldsOfPortalActivation(World sourceWorld, String portalId) {
Universe universe = Universe.get();
// Get all worlds
for (World world : universe.getWorlds()) {
if (world == sourceWorld) {
continue; // Skip the world that triggered this
}
// Queue notification on each World's thread
world.execute(() -> {
// Each world can now safely update its portal state
PortalNetworkSystem portalSystem = world.getSystem(PortalNetworkSystem.class);
if (portalSystem != null) {
portalSystem.onRemotePortalActivated(portalId, sourceWorld.getName());
}
});
}
}
Each world.execute() queues work on that specific World’s thread. All the executions happen in parallel on their respective threads.
Pattern 2: Scheduled Tasks
For periodic background tasks, use HytaleServer.SCHEDULED_EXECUTOR. This executor runs on its own thread pool - separate from any World thread.
Auto-Save Every 5 Minutes
From the ObjectivePlugin:
ObjectivePlugin.java:
public class ObjectivePlugin implements Plugin {
private ScheduledFuture<?> autoSaveTask;
@Override
public void onEnable() {
// Schedule auto-save every 5 minutes
autoSaveTask = HytaleServer.SCHEDULED_EXECUTOR.scheduleWithFixedDelay(
this::saveAllObjectives,
5, 5, TimeUnit.MINUTES // Initial delay, delay between runs, unit
);
}
@Override
public void onDisable() {
// Cancel the task when plugin unloads
if (autoSaveTask != null) {
autoSaveTask.cancel(false);
}
}
private void saveAllObjectives() {
// WARNING: This runs on SCHEDULED_EXECUTOR thread, NOT a World thread!
// Cannot access Store directly here - must use world.execute()
for (World world : Universe.get().getWorlds()) {
world.execute(() -> {
// NOW we're on the World thread - safe to read Store
Store<EntityStore> store = world.getEntityStore().getStore();
List<ObjectiveData> objectives = collectObjectives(store);
// But file I/O should go back off-thread!
CompletableFuture.runAsync(() -> {
saveToFile(world.getName(), objectives);
});
});
}
}
}
Periodic Leaderboard Broadcast
Let’s add a scheduled task to our kill counter that broadcasts the top killers every 10 minutes:
KillLeaderboardTask.java:
public class KillLeaderboardTask {
private final KillTrackerSystem tracker;
private ScheduledFuture<?> broadcastTask;
public KillLeaderboardTask(KillTrackerSystem tracker) {
this.tracker = tracker;
}
public void start() {
broadcastTask = HytaleServer.SCHEDULED_EXECUTOR.scheduleWithFixedDelay(
this::broadcastLeaderboard,
10, 10, TimeUnit.MINUTES
);
}
private void broadcastLeaderboard() {
// Build leaderboard message (safe - just reading from ConcurrentHashMap)
String leaderboard = buildLeaderboardMessage();
// Broadcast to all players across all worlds
// Universe.sendMessage is thread-safe!
Universe.get().sendMessage(Message.raw(leaderboard));
}
private String buildLeaderboardMessage() {
StringBuilder sb = new StringBuilder();
sb.append("=== TOP KILLERS ===\n");
// Sort players by kills and take top 5
tracker.getPlayerKills().entrySet().stream()
.sorted(Map.Entry.<UUID, Integer>comparingByValue().reversed())
.limit(5)
.forEach(entry -> {
PlayerRef player = Universe.get().getPlayer(entry.getKey());
String name = player != null ? player.getUsername() : "Unknown";
sb.append(name).append(": ").append(entry.getValue()).append(" kills\n");
});
sb.append("Total server kills: ").append(tracker.getGlobalKills());
return sb.toString();
}
}
Remember: code in SCHEDULED_EXECUTOR is NOT on a World thread. You can read from thread-safe collections like ConcurrentHashMap, but you cannot access any Store directly.
Pattern 3: Async Database Access
For slow operations like database queries, use CompletableFuture.runAsync() to avoid blocking the World thread.
public void savePlayerStats(PlayerRef player, World world) {
UUID playerId = player.getUuid();
// Capture data while on the World thread
int kills = getPlayerKills(playerId);
int deaths = getPlayerDeaths(playerId);
long playtime = getPlaytime(playerId);
// Do the slow database write in background
CompletableFuture.runAsync(() -> {
// This runs on ForkJoinPool - NOT a World thread
try {
database.savePlayerStats(playerId, kills, deaths, playtime);
logger.info("Saved stats for " + player.getUsername());
} catch (Exception e) {
logger.error("Failed to save stats", e);
}
});
}
The key insight: capture all the data you need FIRST (while on the World thread), THEN hand off to async. The async code only has the captured values - it cannot touch the Store.
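The capture-first rule works because the async task holds a snapshot, not a live reference. Here's a plain-Java sketch (the Position class and the demo are invented stand-ins for entity data):

```java
import java.util.concurrent.CompletableFuture;

// Illustrates "capture first, then go async" with a plain mutable object.
// The async task sees the copied snapshot, not later mutations.
public class CaptureFirstDemo {
    static class Position {
        double x, y;
        Position(double x, double y) { this.x = x; this.y = y; }
        Position(Position other) { this(other.x, other.y); } // copy constructor
    }

    public static void main(String[] args) {
        Position live = new Position(10, 20);

        // Capture a snapshot BEFORE handing off to async work
        Position snapshot = new Position(live);

        CompletableFuture<String> saved = CompletableFuture.supplyAsync(
                () -> "saved at (" + snapshot.x + ", " + snapshot.y + ")");

        // Meanwhile the "entity" keeps moving on its own thread
        live.x = 999;

        // join() here only so the demo completes - never block a World thread
        System.out.println(saved.join()); // saved at (10.0, 20.0)
    }
}
```

If the lambda had captured `live` instead of `snapshot`, the saved position would depend on timing — the same class of bug as holding a component reference across threads.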
Loading Data Asynchronously
What about loading data? You need to bring results back to the World thread:
public void loadAndApplyPlayerStats(PlayerRef player, World world) {
UUID playerId = player.getUuid();
CompletableFuture.supplyAsync(() -> {
// Background thread: do slow database read
return database.loadPlayerStats(playerId);
}).thenAccept(stats -> {
// Still on background thread here!
// Must queue the Store modification on the World thread
world.execute(() -> {
Ref<EntityStore> playerEntity = player.getEntity();
if (playerEntity != null && playerEntity.isValid()) {
Store<EntityStore> store = world.getEntityStore().getStore();
applyStatsToEntity(store, playerEntity, stats);
}
});
});
}
The flow is: World thread -> async (database read) -> World thread (apply to entity).
Pattern 4: Cross-World Teleport
Teleporting a player between worlds is a common task that demonstrates proper thread coordination. The critical rule: capture data FIRST, then queue on target thread.
From TeleportToPlayerCommand:
public void teleportToPlayer(PlayerRef teleporter, PlayerRef target) {
World targetWorld = target.getCurrentWorld();
if (targetWorld == null) {
teleporter.sendMessage(Message.raw("Target player is not in a world!"));
return;
}
// CRITICAL: Capture target position NOW, while we have valid access
Ref<EntityStore> targetEntity = target.getEntity();
Store<EntityStore> targetStore = targetWorld.getEntityStore().getStore();
TransformComponent targetTransform = targetStore.getComponent(
targetEntity, TransformComponent.getComponentType()
);
// Clone the position - don't hold a reference to the component!
Vector3d destination = new Vector3d(targetTransform.getPosition());
// Now queue the teleport on the target world's thread
targetWorld.execute(() -> {
// Transfer the player to this world
teleporter.transferToWorld(targetWorld, destination);
});
}
Why clone the vector? Because targetTransform.getPosition() returns a reference to live data. By the time your queued code runs, that position might have changed (the target player moved). Cloning captures the exact position at the moment you read it.
The Wrong Way
Here’s what NOT to do:
// WRONG - DON'T DO THIS!
public void teleportToPlayerBroken(PlayerRef teleporter, PlayerRef target) {
World targetWorld = target.getCurrentWorld();
targetWorld.execute(() -> {
// PROBLEM: By now, target might have left this world!
Ref<EntityStore> targetEntity = target.getEntity();
Store<EntityStore> store = targetWorld.getEntityStore().getStore();
// This could crash - targetEntity might be invalid!
TransformComponent transform = store.getComponent(
targetEntity, TransformComponent.getComponentType()
);
Vector3d pos = transform.getPosition(); // CRASH if entity invalid
teleporter.transferToWorld(targetWorld, pos);
});
}
The target player might log out or teleport away between when you queue the code and when it executes. Always validate references inside the queued code, or capture the data beforehand.
Pattern 5: Universe-Wide Broadcast
Some operations are safe from any thread. Universe.get().sendMessage() is one of them - it handles thread safety internally.
public void announceServerEvent(String message) {
// Safe from ANY thread - World thread, SCHEDULED_EXECUTOR, async task, anywhere
Universe.get().sendMessage(Message.raw("[SERVER] " + message));
}
This is intentional design. Server-wide announcements are so common that Hytale makes them thread-safe by default. You don’t need to queue them on any particular thread.
Other thread-safe Universe operations include:
- Universe.get().getPlayer(uuid) - Get a PlayerRef by UUID
- Universe.get().getPlayers() - Get all connected players
- Universe.get().getWorld(name) - Get a World by name
- Universe.get().getWorlds() - Get all worlds
These return snapshots or thread-safe references. But remember: holding a World object doesn’t put you on that World’s thread. You still need world.execute() to access its Store.
Common Mistakes (And How to Fix Them)
Let’s look at the threading mistakes you’re most likely to make, and how to fix them.
Mistake 1: Accessing Store from Wrong Thread
The most common error - trying to read or write components from outside the World thread.
// WRONG - Will crash or corrupt data!
private void onScheduledTask() {
// This runs on SCHEDULED_EXECUTOR thread
World world = Universe.get().getWorld("main");
Store<EntityStore> store = world.getEntityStore().getStore();
// CRASH: Cannot access Store from SCHEDULED_EXECUTOR thread!
for (Ref<EntityStore> ref : store.getAllEntities()) {
HealthComponent health = store.getComponent(ref, HealthComponent.getComponentType());
// ... do something with health
}
}
// CORRECT - Queue work on the World thread
private void onScheduledTask() {
World world = Universe.get().getWorld("main");
world.execute(() -> {
// NOW we're on the World thread - safe!
Store<EntityStore> store = world.getEntityStore().getStore();
for (Ref<EntityStore> ref : store.getAllEntities()) {
HealthComponent health = store.getComponent(ref, HealthComponent.getComponentType());
// ... do something with health
}
});
}
Mistake 2: Blocking the World Thread
Never use .join() or .get() on a Future from the World thread. This blocks the entire World tick!
// WRONG - Blocks the World thread for potentially seconds!
public void onPlayerJoin(PlayerRef player, World world) {
CompletableFuture<PlayerStats> future = CompletableFuture.supplyAsync(() -> {
return database.loadPlayerStats(player.getUuid()); // Slow!
});
// This BLOCKS until the database responds - freezes the entire World!
PlayerStats stats = future.join(); // BAD!
applyStats(player, stats);
}
// CORRECT - Continue asynchronously, then queue result
public void onPlayerJoin(PlayerRef player, World world) {
CompletableFuture.supplyAsync(() -> {
return database.loadPlayerStats(player.getUuid());
}).thenAccept(stats -> {
// This callback runs on the async thread
// Queue the Store modification on the World thread
world.execute(() -> {
if (player.getEntity() != null && player.getEntity().isValid()) {
applyStats(player, stats);
}
});
});
}
The .thenAccept() callback runs when the database responds, without blocking anything. Then world.execute() safely queues the result application.
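Here is the same non-blocking shape in plain, runnable Java. The "database read" is a hypothetical stand-in (a sleep), and a CountDownLatch replaces world.execute() so the demo can finish — in real plugin code you would queue the result on the World thread instead:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;

// Non-blocking composition: the caller returns immediately, and the
// callback fires when the slow work finishes.
public class NonBlockingLoadDemo {
    static int slowLoadStats() {            // hypothetical slow database read
        try { Thread.sleep(100); } catch (InterruptedException ignored) {}
        return 42;
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch applied = new CountDownLatch(1);
        int[] result = new int[1];

        CompletableFuture
                .supplyAsync(NonBlockingLoadDemo::slowLoadStats) // background thread
                .thenAccept(stats -> {                           // still background
                    result[0] = stats;  // in Hytale: world.execute(() -> apply(stats))
                    applied.countDown();
                });

        // The caller was never blocked; we wait here only so the demo can print
        applied.await();
        System.out.println("loaded stats: " + result[0]); // loaded stats: 42
    }
}
```

Notice that nothing between supplyAsync() and await() blocks — the 100ms of "database latency" happens entirely off the calling thread.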
Mistake 3: Shared Mutable State Without Synchronization
We covered this in detail in “Protecting Your Plugin Data” - always use AtomicInteger for counters and ConcurrentHashMap for collections that multiple World threads access.
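As a refresher, here is a minimal JDK-only sketch of the safe version. Two plain threads stand in for two World threads hammering the same plugin state; AtomicInteger.incrementAndGet() and ConcurrentHashMap.merge() guarantee that no increment is lost:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounters {

    /** Simulates `threads` World threads each recording `killsPerThread` kills. */
    public static int countKills(int threads, int killsPerThread) {
        AtomicInteger globalKills = new AtomicInteger(0);
        Map<UUID, Integer> playerKills = new ConcurrentHashMap<>();
        UUID player = UUID.randomUUID();

        Runnable recordKills = () -> {
            for (int i = 0; i < killsPerThread; i++) {
                globalKills.incrementAndGet();              // atomic read-modify-write
                playerKills.merge(player, 1, Integer::sum); // atomic per-key update
            }
        };

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(recordKills); // each thread stands in for one World
            workers[i].start();
        }
        for (Thread w : workers) {
            try { w.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        }
        return globalKills.get();
    }

    public static void main(String[] args) {
        System.out.println(countKills(2, 10_000)); // 20000 - no lost updates
    }
}
```

Swap the AtomicInteger for a plain `int` and the result will usually come up short of 20,000 - that silent shortfall is exactly the race condition this rule prevents.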
Mistake 4: Potential Deadlock
Deadlock happens when two threads wait for each other forever. In Hytale, this can occur with executeAndWait().
// DANGEROUS - Can deadlock!
public void syncBetweenWorlds(World worldA, World worldB) {
    worldA.execute(() -> {
        // On World A's thread
        doSomethingInA();
        // Wait for World B to complete something
        worldB.executeAndWait(() -> { // DANGER!
            doSomethingInB();
        });
    });
}
If World B is simultaneously trying to executeAndWait() on World A, both threads wait forever - deadlock!
graph LR
    A[World A Thread] -->|waiting for| B[World B Thread]
    B -->|waiting for| A
// SAFE - Use non-blocking execute()
public void syncBetweenWorlds(World worldA, World worldB) {
    worldA.execute(() -> {
        doSomethingInA();
        // Non-blocking - queues work and continues
        worldB.execute(() -> {
            doSomethingInB();
        });
    });
}
Rule of thumb: prefer execute() over executeAndWait(). Only use executeAndWait() when you absolutely need the result before continuing, and you’re certain the target World won’t call back to your thread.
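The safe hand-off can be modeled outside the server with single-thread executors standing in for World threads (worldA and worldB below are plain JDK executors, not Hytale API). Each execute() queues work and returns immediately, so neither "World" ever waits on the other:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CrossWorldHandoff {

    public static String handoff() {
        // Each single-thread executor plays the role of one World's tick thread
        ExecutorService worldA = Executors.newSingleThreadExecutor();
        ExecutorService worldB = Executors.newSingleThreadExecutor();
        CompletableFuture<String> done = new CompletableFuture<>();

        worldA.execute(() -> {
            String captured = "data-from-A"; // capture on A's thread first
            // Non-blocking: queue on B and return immediately - A keeps ticking
            worldB.execute(() -> done.complete("B received " + captured));
        });

        try {
            return done.join(); // only this demo's main thread waits on the result
        } finally {
            worldA.shutdown();
            worldB.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(handoff()); // B received data-from-A
    }
}
```

Note the ordering: the data is captured on A's thread before the task is queued on B, mirroring the "capture data first, then targetWorld.execute()" pattern from the teleport example.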
Pattern Summary
Here’s a quick reference for which pattern to use:
| Task | Pattern |
|---|---|
| Modify entities in a World | world.execute(() -> { ... }) |
| Periodic background task | SCHEDULED_EXECUTOR.scheduleWithFixedDelay() |
| Slow I/O (database, files) | CompletableFuture.runAsync() |
| Cross-world teleport | Capture data first, then targetWorld.execute() |
| Server-wide announcement | Universe.get().sendMessage() (always safe) |
| Get World reference | Universe.get().getWorld(name) (always safe) |
| Share data between threads | AtomicInteger, ConcurrentHashMap |
| Wait for async result | .thenAccept() (NOT .join()) |
The golden rule: always know which thread you’re on. If you’re on a World thread, you can access that World’s Store. If you’re anywhere else (SCHEDULED_EXECUTOR, CompletableFuture, async callback), you must use world.execute() to safely access entity data.
Key Takeaways
We’ve covered a lot of ground. Let’s distill it down to the essential rules you need to remember.
The Three Rules
Rule 1: ECS access requires the World thread.
Every operation on a Store - reading components, adding components, removing components, querying entities - must happen on that World’s thread. Use world.execute() to safely cross into the World thread from anywhere else.
// From any thread, safely access the World's Store:
world.execute(() -> {
    Store<EntityStore> store = world.getEntityStore().getStore();
    // Safe to read/write components here
});
Rule 2: Shared plugin data needs thread-safe types.
If your plugin has data that multiple World threads might access - counters, caches, feature flags - use the concurrent types: AtomicInteger, AtomicBoolean, ConcurrentHashMap, or volatile for simple flags.
// Thread-safe plugin state:
private final AtomicInteger globalKills = new AtomicInteger(0);
private final Map<UUID, Integer> playerKills = new ConcurrentHashMap<>();
private volatile boolean trackingEnabled = true;
Rule 3: Never block waiting for another World.
Use execute() (non-blocking) instead of executeAndWait() (blocking). Blocking on another World’s response risks deadlock if that World is waiting for you.
// SAFE: Non-blocking communication
worldA.execute(() -> {
    worldB.execute(() -> {
        // This queues and returns immediately
    });
});
Thread Safety Cheatsheet
| Operation | Thread Requirement | Pattern |
|---|---|---|
| Universe.get().getPlayer(uuid) | Any thread | Direct call |
| Universe.get().sendMessage(...) | Any thread | Direct call |
| player.sendMessage(...) | Any thread | Direct call |
| player.getUsername() | Any thread | Direct call |
| world.execute(runnable) | Any thread | Direct call |
| store.getComponent(ref, type) | World thread only | world.execute(() -> ...) |
| store.addComponent(...) | World thread only | world.execute(() -> ...) |
| store.removeComponent(...) | World thread only | world.execute(() -> ...) |
| store.hasComponent(...) | World thread only | world.execute(() -> ...) |
| ECS System tick methods | World thread (automatic) | No wrapper needed |
Java Concurrency Quick Reference
| You need… | Use this | Key method |
|---|---|---|
| Thread-safe counter | AtomicInteger | incrementAndGet() |
| Thread-safe boolean | AtomicBoolean | compareAndSet(expect, update) |
| Thread-safe long | AtomicLong | addAndGet(delta) |
| Thread-safe map | ConcurrentHashMap | merge(key, value, remappingFunction) |
| Simple visibility flag | volatile boolean | Direct read/write |
| Delayed execution | SCHEDULED_EXECUTOR | schedule(runnable, delay, unit) |
| Periodic tasks | SCHEDULED_EXECUTOR | scheduleWithFixedDelay(...) |
| Async work | CompletableFuture | runAsync() / supplyAsync() |
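The scheduler rows map directly onto java.util.concurrent. Here is a runnable sketch of a fixed-delay task using a plain ScheduledExecutorService (a stand-in for Hytale's SCHEDULED_EXECUTOR, which is assumed to expose the same interface):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PeriodicTask {

    /** Runs a fixed-delay task until it has fired `ticks` times, then stops it. */
    public static int runTicks(int ticks) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch latch = new CountDownLatch(ticks);

        // Fixed delay: the next run starts 10ms after the previous one FINISHES,
        // so a slow task never piles up overlapping runs
        scheduler.scheduleWithFixedDelay(latch::countDown, 0, 10, TimeUnit.MILLISECONDS);

        try { latch.await(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        scheduler.shutdownNow();
        return ticks; // latch reached zero, so exactly `ticks` runs were observed
    }

    public static void main(String[] args) {
        System.out.println(runTicks(3)); // 3
    }
}
```

Remember that the task body here runs on the scheduler's thread - in a real plugin, any entity access inside it would still need to be wrapped in world.execute().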
The Mental Model
When writing Hytale plugin code, always ask yourself two questions:
1. What thread am I on?
   - Inside an ECS System? You’re on the World thread.
   - In a scheduled task? You’re on the executor thread.
   - In a CompletableFuture callback? You’re on the async thread.
   - In an event handler? Check the documentation.
2. What data am I touching?
   - World data (entities, components, chunks)? Must be on the World thread.
   - Your plugin’s shared state? Use thread-safe types.
   - Universe lookups and messaging? Always safe.
If you’re ever unsure, wrap it in world.execute(). The small overhead is nothing compared to the debugging nightmare of a race condition.
What’s Next?
Now that you understand threading, you can confidently write plugins that span multiple worlds, schedule background tasks, and maintain global state without data corruption.
The patterns we’ve covered - world.execute(), atomic types, concurrent collections, scheduled tasks - are the building blocks of every non-trivial Hytale plugin. Practice with simple cases (like our Global Kill Counter) before tackling complex cross-world systems.
If you want to dive deeper into Java concurrency, the book “Java Concurrency in Practice” by Brian Goetz remains the definitive reference. For Hytale-specific questions, join the community - we’re all learning this new architecture together.
Happy modding!