Comm-Link:18397 - Server Meshing and Persistent Streaming Q&A


At CitizenCon 2951, we took a deep dive into the transformative technologies of Server Meshing and Persistent Streaming with Paul Reindell (Director of Engineering, Online Technology) and Benoit Beausejour (Chief Technology Officer at Turbulent). After the panel, we saw that many folks had follow-up questions for our panelists, and we want to make sure these get answered. Please read on for our Q&A with Paul, Benoit, Roger Godfrey (Lead Producer), and Clive Johnson (Lead Network Programmer).

When will we see Persistent Streaming and Server Meshing in the PU? Our current aim is to release Persistent Streaming and the first version of the Replication layer ideally between Q1 and Q2 of next year. We’ll then follow up with the first version of a static server mesh, barring any unforeseen technical complications, between Q3 and Q4 of next year.

What is the current state of the server meshing tech and what are the biggest issues holding it back? Most people, when talking about Server Meshing, usually think about the very final step of this technology where we “mesh servers together.” The truth is that, before this final step, a very long chain of prerequisites and fundamental technology changes had to be worked into our game engine. With that in mind, I will try to answer this question in the context of the full picture.

The short answer is the state is actually very advanced.

Now the long version. The road to Server Meshing started back in 2017/2018:

Object Container Streaming

For Server Meshing to work, we first required technology that allowed us to dynamically bind/unbind entities via the streaming system, as this isn’t something the engine supported when we started. So when we released ‘Client Side Object Container Streaming’ (OCS) in 2018, we also released the very first step towards server meshing!

Once this initial stepping stone was out the door, the technology that allows us to dynamically bind/unbind entities on the client had to be enabled on the server as well (as ultimately server nodes in the mesh will need to stream entities in/out dynamically). This technology is called ‘Server Side Object Container Streaming’ (S-OCS), and the first version of S-OCS was released at the end of 2019. This was the next big step towards Server Meshing.

Entity Authority & Authority Transfer

While we had the technology that allowed us to dynamically stream entities on the server, there was still only one single server that ‘owned’ all simulated entities. In a mesh where multiple server nodes share the simulation, we needed the concept of ‘entity authority.’ This means that any given entity is no longer owned by a single dedicated game server; instead, one server node in the mesh controls the entity, while multiple other server nodes have only a client view of it. This authority also needs the ability to transfer between server nodes. A good amount of development time was dedicated to the concepts of ‘entity authority’ and ‘authority transfer’ in the first half of 2020. This was the first time the entire company had to work on Server Meshing, as a lot of game code had to be changed to work with the new entity-authority concept. By the end of 2020, most (game) code was modified to support the concept, so another large step was taken, though there was still no actual mesh in sight.
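The authority model described above can be sketched in a few lines. This is purely illustrative, assuming hypothetical `Entity` and `ServerNode` types; it is not actual Star Citizen engine code, just a minimal demonstration of "one controlling node, many client views":

```python
# Illustrative sketch of the entity-authority concept: exactly one server
# node simulates an entity; other nodes that stream it in only get a view.

class Entity:
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.authority = None  # the single node currently simulating this entity

class ServerNode:
    def __init__(self, name):
        self.name = name
        self.streamed = set()  # entity ids this node has streamed in

    def stream_in(self, entity):
        self.streamed.add(entity.entity_id)
        if entity.authority is None:
            entity.authority = self  # first node to stream it in takes authority

    def has_authority(self, entity):
        return entity.authority is self

def transfer_authority(entity, new_node):
    """Hand simulation control to another node that already has a client view."""
    assert entity.entity_id in new_node.streamed, "node must stream the entity first"
    entity.authority = new_node

node_a, node_b = ServerNode("A"), ServerNode("B")
missile = Entity("missile-1")
node_a.stream_in(missile)   # A takes authority
node_b.stream_in(missile)   # B only gets a client view
transfer_authority(missile, node_b)
```

The key property is that authority is exclusive: a transfer atomically makes the old owner just another client view.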

Replication Layer & Persistent Streaming

The next step was to move entity replication into a central place where we can control the streaming and network-bind logic. This then allows us to replicate the network state to multiple server nodes. In order to achieve this, we had to move the streaming and replication logic out of the dedicated server into the “Replication” layer, which now hosts the network replication and entity-streaming code.

At the same time we also implemented Persistent Streaming, which allows the Replication layer to persist entity state into a graph database that stores the state of every single network replicated entity. 2021 was dedicated to work on the Replication layer and the EntityGraph, which allows us to control entity streaming and replication from a separate process (separated from the traditional dedicated game server). This work is almost complete and is in its final stage.

Static & Dynamic Server Meshes

However, this still isn’t a “mesh.” The work on the actual mesh has started and will take us well into next year to complete, and all the prerequisites that I outlined above were necessary to even get to this point. The first version of this technology will be a static server mesh, and it is the next big stepping stone. However, it will also not be the last! With the static mesh, we will have the first version of a true mesh but, as the name ‘static’ indicates, the ability to scale this mesh is very limited.

Before we can truly call this feature complete, we will need to take on another big step, which we call “dynamic mesh.” This step will allow us to dynamically mesh server nodes together and then scale the mesh dynamically based on demand. A lot of the work on this part happens in parallel. For example, the Fleet Manager that controls the dynamic demand of the mesh is already in development, as well as the matchmaking requirements that come with the new inclusion of “shards.”

In the meantime, a lot of game-code teams also have to work on adapting existing game code to fully work with a server mesh (and more importantly find all the edge cases that will only surface once we have a true mesh). While the entity authority work was completed in 2020, entity authority is currently only transferred between the client and one single server, so some code may need additional adjustments.

How do you plan on managing a large ship, say a Javelin? Would that be its own dedicated resource with ships around it? With Dynamic Server Meshing, it’s possible that large ships such as a Javelin could have their own dedicated server assigned to run the authoritative simulation for that ship and everything on it. However, we’re trying to avoid having inflexible rules about how entities get assigned to processing resources, so that might not always be the case. It comes down to efficiency in terms of both processing speed and server costs. If we had a hard rule that each Javelin and everything in it gets its own server, then it wouldn’t be very cost-efficient when a Javelin only has a handful of players on it. The same rule also wouldn’t be efficient in terms of server processing speed if there were hundreds of players all crowded into the same Javelin, as the rule would prevent us from distributing the processing load across multiple servers.

Dynamic Server Meshing will be a bit different in that it will constantly re-evaluate how best to distribute the simulation, aiming to find the sweet spot so that no single server is overloaded or underutilized. As players move around the ‘verse, the ideal distribution of processing resources will change. To react to those changes, we’ll need the ability to transfer authority over entities from one server to another, as well as bring new servers online and shut down old ones. This will allow us to move the processing load from a server that is at risk of becoming overloaded to one that is currently underutilized. If none of the existing servers have enough spare capacity to handle an increase in load, we can simply rent more servers from our cloud platform provider. And when some servers don’t have enough load to make them cost-efficient, some of them can transfer their parts of the simulation over to the others and we can shut down the ones we no longer need.
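The rebalancing idea above (shed load from overloaded nodes, rent new ones only when nothing has spare capacity, retire idle ones) can be sketched as a toy function. The capacity number, node names, and greedy policy are all invented for illustration; the real Fleet Manager will be far more sophisticated:

```python
# Toy sketch of dynamic load redistribution across server nodes.
# CAPACITY and the greedy policy are illustrative assumptions only.

CAPACITY = 100  # max load units per server node (assumed)

def rebalance(nodes):
    """nodes: dict of node name -> current load. Returns a rebalanced dict."""
    nodes = dict(nodes)
    overloaded = [n for n, load in nodes.items() if load > CAPACITY]
    for n in overloaded:
        excess = nodes[n] - CAPACITY
        # pick the node with the most spare capacity as the transfer target
        target = min((m for m in nodes if m != n),
                     key=lambda m: nodes[m], default=None)
        if target is None or nodes[target] + excess > CAPACITY:
            # no spare capacity anywhere: rent a fresh node from the cloud
            target = f"node-{len(nodes)}"
            nodes[target] = 0
        nodes[target] += excess
        nodes[n] = CAPACITY
    # shut down nodes that carry no load at all
    return {n: load for n, load in nodes.items() if load > 0}
```

For example, `rebalance({"a": 130, "b": 20})` shifts the excess onto the underutilized node, while `rebalance({"a": 150})` has to bring a new node online.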

How many players will be able to see each other in one space? What’s the maximum you are planning? This is a difficult question to answer, and the best answer we can give at the moment is that it depends.

Assuming that the question is about the limit of how many players will be able to see each other from a client view, it’s mainly dictated by the game client. This is due to client-side simulation, such as physics and game code, as well as rendering cost.

Additionally, it also heavily depends on the scenario; 100 players in FPS combat are cheaper to simulate and render on the client than 100 players fighting in single-seater spaceships, firing missiles and lasers at each other.

The Graphics team is actively working on Vulkan, which will allow us to increase draw calls and should improve how many players/ships we can render at the same time, while the Engine team is heavily focused on game-code optimizations to increase the number of game objects that we can simulate at once.

We’re aiming to increase our player count and our expectation is that we will support scenarios where 100 players can see each other at reasonable framerates. However, as we start scaling our shards to support higher player counts, the likelihood that every single player within a shard can go to the same physical location and see each other without performance issues will decrease.

This is where we will need to start implementing game mechanics that prevent these scenarios from happening too frequently.

The absolute limit is hard to predict until some of the new technology comes online and we can start to measure performance.

If I make a base on a moon, will my base be reflected on the other shards that I am not on? The Planet Tech team plans to implement base building with server shards in mind. Claiming land for your base will claim this land on all shards, and we plan to replicate your base to all shards.

However, only one shard will have an ‘active’ version of the base, with other shards spawning a ‘limited access/read only’ version of that same base. For example, a base will give full access and the ability to expand in the shard the owner currently plays on, while on all other shards, this base may spawn with locked doors in an immutable state. The full design is not 100% established yet and may change though.

Is the true end goal one single shard for all players? This is our ambition, however giving a definite answer is not possible at this point.

We will start with many small shards per region and slowly reduce the number of shards. The first major goal will be to reduce this to only needing one single shard per region. To get there, our plan is to gradually increase player count per shard and constantly improve the backend and client tech to support more and more players.

It’s not just technology changes that are required to get to this goal - new game design and game mechanics are needed too. Without mechanics to prevent every single player going to the same location, a large mega shard will be very hard to achieve, especially on the client. For example, there could be a mechanic to temporarily close jump points to crowded locations, or create new layers for certain locations.

While the backend is designed to scale horizontally, the game client runs on one single machine and is limited to a definite number of CPU/GPU cores as well as memory.

Only once we overcome these hurdles, and accomplish one mega-shard per region, will we be able to take on the final boss: Merging regional shards into one global mega shard.

This comes with its own set of issues, as locality plays a big role in the player experience. For example, latency between services within the same datacenter is much lower than latency between services that are hosted in two regionally-separated datacenters. And while we designed the backend to support one global shard, it is an operational challenge to deploy the backend in a way that doesn’t favor one group of players over another.

Will the economy of the universe be independent in every shard or joined? Economy will be global and reflected in each shard.

For example, let’s take a look at shops. While each shop has a local inventory (items that are currently on display), shops are restocked from a global inventory shared across all shards. If a lot of players start to buy a specific gun at Port Olisar’s weapon shop, the price of that gun will rise at this shop across all shards. Eventually, the inventory for this gun will be depleted, so shops across all shards will no longer be able to restock this gun.
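The shop example above can be sketched as two small classes: a global record shared by every shard (remaining stock plus total units sold, which drives the price) and a per-shard shop that only holds the local display inventory. Class names, prices, and the linear pricing rule are illustrative assumptions, not the actual economy design:

```python
# Sketch of the described economy model: local per-shard inventory,
# restocked from one global pool, with demand-driven pricing shared
# across all shards. All numbers and names are invented for illustration.

class GlobalItemRecord:
    """Shared across all shards: remaining stock and total units sold."""
    def __init__(self, stock, base_price=100):
        self.stock = stock
        self.total_sold = 0
        self.base_price = base_price

class ShardShop:
    """One shard's local view of the same shop location."""
    def __init__(self, record):
        self.record = record
        self.local = 0  # items currently on display in this shard

    def restock(self, amount=5):
        taken = min(amount, self.record.stock)  # global pool can run dry
        self.record.stock -= taken
        self.local += taken

    def price(self):
        # every sale on ANY shard nudges the price up at this shop
        return self.record.base_price + 2 * self.record.total_sold

    def buy(self):
        if self.local == 0:
            return None  # sold out locally; restock may also fail globally
        self.local -= 1
        self.record.total_sold += 1
        return self.price()

record = GlobalItemRecord(stock=6)
us_shop, eu_shop = ShardShop(record), ShardShop(record)
us_shop.restock()   # takes 5, leaving 1 in the global pool
eu_shop.restock()   # takes the last unit; pool is now depleted
```

A purchase on the US shard now raises the price the EU shard sees too, and neither shop can restock once the global pool hits zero.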

What will prevent large groups of "blues" and large groups of "reds" ending up in echo-chamber shards? Social dynamics would imply large concentrations of people that will have friends and be in orgs that are of the same interests. Will there be a solution that will ensure proper mixing of good, bad, and in-between? Players will not be permanently assigned to shards as the matchmaking system assigns a new shard for the selected region on each login. Early on this will cause a natural distribution, as we’ll start with many smaller shards in parallel.

As we start to scale our shards (and therefore shrink the number of parallel shards), this question will become more relevant. We plan to address this with our new matchmaking system.

The new matchmaking system currently in development alongside Server Meshing allows us to match players to shards based on multiple input parameters. Those are used to match players into shards with their friends, or where they left most of their items in the open world. However, it also allows us to use more advanced parameters, such as reputation and other hidden player stats that we track.

This will allow us to try and ensure that every shard has a semi-diverse collection of individuals. For example, we could make sure that we don't inadvertently load a shard up with only lawful players, which might not be very fun if part of what they want to do is hunt criminal players.
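A scoring-based matcher like the one described could look roughly like this. The parameters, weights, and thresholds here are entirely hypothetical; the point is only the shape of the idea — rank candidate shards by friends, item locality, and how a player's reputation would affect the shard's mix:

```python
# Hypothetical matchmaking sketch: score each candidate shard for a player,
# then pick the best. Weights and the lawful-ratio rule are invented.

def score_shard(shard, player):
    score = 0.0
    score += 10 * len(shard["players"] & player["friends"])   # play with friends
    score += 5 if shard["id"] == player["last_shard"] else 0  # items left behind
    # diversity: avoid loading a shard up with only lawful players
    lawful_ratio = shard["lawful"] / max(1, len(shard["players"]))
    if player["lawful"] and lawful_ratio > 0.8:
        score -= 3
    return score

def match(shards, player):
    return max(shards, key=lambda s: score_shard(s, player))["id"]
```

With this shape, a player with a friend online is pulled toward that friend's shard even if their items sit elsewhere, while a player with no social ties defaults to where they last played.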

Will your character and ship always be in-game when you have left? I.e., if I logged out from my ship bed on a planet, will my ship still be there, meaning people could try to break into or destroy my ship? When an entity is “unstowed” in a shard (it physically exists in the shard), it exists permanently within that shard until the player “stows” the entity into an inventory. This can be done by picking up a gun and placing it into your backpack, or by landing a ship on a landing pad, which will stow the ship into a specific landing-pad inventory. Once an entity is within an inventory, it is stored in the global database and can be unstowed into any shard. This allows players to move items between shards.
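The stow/unstow flow just described can be sketched as a tiny state machine: an item either physically exists in exactly one shard, or sits in the global database inside an inventory, from where any shard can unstow it. Class and method names are illustrative only:

```python
# Sketch of the stow/unstow flow: stowing moves an item out of a shard
# into the global database; unstowing spawns it into a (possibly different)
# shard. Names are hypothetical, not actual backend APIs.

class GlobalDatabase:
    def __init__(self):
        self.inventories = {}  # inventory id -> set of item ids

    def stow(self, inventory, item):
        self.inventories.setdefault(inventory, set()).add(item)

    def unstow(self, inventory, item):
        self.inventories[inventory].remove(item)

class Shard:
    def __init__(self, name, db):
        self.name, self.db = name, db
        self.entities = set()  # items physically present in this shard

    def stow_item(self, item, inventory):
        self.entities.remove(item)     # item leaves the shard...
        self.db.stow(inventory, item)  # ...and is stored globally

    def unstow_item(self, item, inventory):
        self.db.unstow(inventory, item)  # item leaves global storage...
        self.entities.add(item)          # ...and spawns into this shard

db = GlobalDatabase()
us, eu = Shard("US", db), Shard("EU", db)
us.entities.add("gun-42")
us.stow_item("gun-42", "backpack")    # pick the gun up into the backpack
eu.unstow_item("gun-42", "backpack")  # log into another shard, take it out
```

Because the global database is the only bridge between shards, an item can never exist in two shards at once.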

We also plan for a mechanic called ‘Hero Item Stow/Unstow.’ This will take any player-owned hero items and automatically stow them into a player-specific shard-transition inventory. The automatic stow usually happens when no other players are around and the entity is streamed out. Items in this shard-transition inventory will follow a player automatically, so when a player logs into a different shard, we will take entities and unstow them back into the new shard at the position where the player left them.

When you land your ship on a moon and log out, the ship will stream out and automatically be stowed if no other players are around at that moment. Now, when you log into a different shard, your ship will be unstowed into the new shard. If, for some reason, the ship stayed in the old shard longer and got destroyed while you were logged out, you may wake up in a med bed.

How much is new content dependent on Server Meshing now? While Server Meshing will allow us to start to scale up the number of players who can play together in Star Citizen, it will also enable us to start adding new content experiences. Right now, we’re focused on using this to add new star systems. Server Meshing is one of the key technologies to get the jump points working in-game by allowing star systems to seamlessly move in and out of memory without the need for loading screens. Players will first see this next year when the first iteration of Server Meshing goes live with the introduction of the Pyro system.

As we refine the technology and move away from Static Server Meshing towards Dynamic Server Meshing, designers can use this tech to have larger, more interesting areas (such as larger settlements or large ship interiors) with denser numbers of AI and player characters. Server Meshing could open the doors to gameplay experiences that our designers have not even thought of yet!

How much of a performance improvement can we expect? The biggest gain will be server performance. Right now, our server performance is pretty limited due to the sheer number of entities that we have to simulate on one server. This results in a very low framerate and server degradation, causing the client to experience network lag/rubber banding and other network desync issues. Once we have even the static mesh in place, we expect server framerate to be considerably higher, causing less of these symptoms.

Server Meshing actually has very little impact on client FPS. The client already only streams entities that are in visible range. There may be some slight improvements, as we can be a bit more aggressive with the range culling on the client since, right now, some objects have a bloated streaming radius for features like radar or missiles to work properly. With Server Meshing, we can decouple the client and server streaming radius. However, these improvements will be minimal on the client. Still, faster server FPS will improve the overall experience, as network lag will be reduced considerably.

I know that there may not be an answer to this yet but, upon initial release of Server Meshing, how many shards do you anticipate needing to have? 10, 100, 1000, more? We know that the shift away from DGS means more players per game area, just not sure how many you anticipate. The short answer is that we cannot give a number yet.

The concept of the shard is the "malleable" part of the meshing architecture; we will only be able to determine the number of shards required once all the component pieces are in place, and we plan to get there iteratively.

With the first drop of Persistent Streaming (not meshing), we want to start by mimicking the current behavior that you see online by having one shard per server instance and one replicant (called the hybrid). The only difference is that all entities in those shards will still be persistent. This allows us to deal with the worst-case scenario by having a really large number of persistent shards and very large replicants to test the mechanics of creating/seeding, simulation with active players, and spin down for recycling or destruction. We want shard creation and destruction in this first phase to be optimal, fast, and cost-neutral.

This approach has several advantages, as we can get to test shard persistence earlier and, more importantly, can measure active metrics across many shards.

For example (non-exhaustive!):

How many entities remain in a persistent shard over time (shard growth rate)

Size of the global graph (global growth rate)

How many players a single shard database can handle (player usage)

Effect of several gameplay mechanics on entity updates to the shard database (gameplay effects)

Performance profile of the write queues, mean query times of shard db clusters (shard database metrics)

Performance profile of the write queues, mean query times of global db cluster (global database metrics)

Efficiency of database sharding (another sharding level!) of the graph

While we do have proper estimates and internal measurements for these, nothing replaces real players generating representative load on the system.

As we get the other components of meshing into play, mainly the static mesh, we plan to gradually reduce the number of shards, grouping players into bigger and bigger shards until we feel comfortable with the performance of replicants, DGSs, and the entity graph. Of course, the static mesh will suffer from congregation problems and we will only be able to resume going to much larger shards once the dynamic mesh is in place.

Ultimately, with the dynamic mesh, we aim to support very large shards.

Can an asset as small as a bullet travel across server shards? The short answer is no.

You can see shards as completely isolated instances of the simulated universe, very similar to how we currently have different isolated instances per dedicated server. In order for items to transfer between instances, they need to be stowed into an inventory before they can be unstowed into a different shard. For example, a player picks up a gun in one shard and places it into their backpack. When the player later connects to a different shard, they can take the gun out of their backpack, unstowing it into the new shard.

Within a shard, an entity like a missile will be able to travel across multiple server nodes if these server nodes have the missile within the server's streaming area. Only one server node will be in control (has authority) over that missile, while the other server nodes will only see a client view of the same missile.

Bullets are actually spawned client-side. So, a unique version of the bullet is spawned on each client and server node, which is why I used a network-replicated entity like a missile in the example above.

When you are handling different regions of the world, are you planning on hosting four main server farms, such as US, EU, China, Oceanic? Or are you planning on making "One-Global-Universe"? If global, how would that handle the balance of players with extreme ping variations? We still plan on keeping the regional distribution of network-sensitive services. In the initial deployment of Persistent Streaming, the global database will be truly global. Shards themselves will be regionally distributed, so a game client connecting to the EU region would be preferably match-made to an EU shard. As shards grow in size (for both players and entities), we plan to revisit this model and also introduce regional-level services for serving data closer to the locality.

I live in Eastern Europe. After launching Server Meshing, will I be able to play with friends from the USA? We do not plan to limit what shard and region a player can choose.

A player will be free to choose any region to play in and, within this region, we will allow limited shard selection. For example, the shard with your friends or the shard you last played on.

Since all player data is stored in the global database, players can switch between shards similarly to how they can switch between instances today. Items that are stowed will transfer with the player and are always accessible regardless of shard.

Replication Layer Dying: What will players experience if a Replication Layer is shut down/'dies'? We know that the entity graph will collect the seeded information and feed it back into a new replication layer, but will we return to the main menu if the Replication layer dies compared to if a server node dies, or will we have some sort of loading screen that automatically match-makes us into a new layer? To answer this properly, I first need to give some more detail on what our final architecture will look like. Ultimately, the Replication Layer won’t be a single server node. Instead, it will consist of multiple instances of a suite of microservices with names like Replicant, Atlas, and Scribe. One advantage of this is that the Replication layer itself will be able to scale. Another advantage, more relevant to this question, is that although a single node/instance in the Replication layer may fail, it’s very unlikely the whole Replication layer will fail at once. From a client’s point of view, the Replicant nodes are the most important, as it is those that will handle networked entity streaming and state replication between clients and the game servers. The Replicant is designed to not run any game logic and, in fact, it will run very little code at all; no animation, no physics, just network code. Being built from such a small codebase should mean fewer bugs overall. So, after some inevitable teething troubles, we’re expecting Replicants to be pretty stable. It’s also important to know that, at any one time, a single client may be served by multiple Replicants (but those Replicants will also be serving other clients at the same time). The final piece of the puzzle is the Gateway layer: Clients won’t connect directly to Replicants but instead to a gateway node in the Gateway layer. The Gateway service is just there to direct packets between clients and the various Replicants they are talking to. The Gateway service will use an even smaller codebase than the Replicant, so it should be even less likely to crash.

So what will a client experience if one of the Replicants serving it suddenly crashes?

The client will remain connected to the shard but part or all of their simulation will temporarily freeze. The Replication layer will spin up a new replicant node to replace the one that crashed and will recover the lost entity state from persistence via EntityGraph. The client gateways and DGS nodes that were connected to the old replicant will re-establish connection with the new one. Once everything is reconnected, the game will unfreeze for the affected clients. At this point the client may experience some snapping/teleporting of entities. We’re hoping the whole process will take less than a minute.

What will a client experience if the gateway serving it suddenly crashes?

The Gateway service doesn’t hold any game state and will have its own form of crash recovery. Since it’s a much simpler service than a replicant, the recovery time should be much quicker, more in the region of seconds. While the recovery is in progress, the client will experience a temporary freeze followed by some snapping/teleporting.

What about the Hybrid service?

During their CitizenCon presentation on Persistent Streaming and Server Meshing, Paul and Benoit talked about the Replication layer in terms of the Hybrid service. The Hybrid service is, as its name suggests, a hybrid of the Replicant, Atlas, Scribe, and Gateway services I mentioned above (but not EntityGraph), as well as a handful of other services not discussed yet. We have chosen to develop this first before splitting it into its component services as it reduces the number of moving parts we’re trying to deal with all at once. It also allows us to focus on proving out all the big concepts rather than the boilerplate of having all those individual services communicate correctly. In this initial implementation, the Replication layer will be a single Hybrid server node. If this Hybrid node crashes, then the situation will be similar to what clients experience now when a dedicated game server crashes. All clients will get kicked back to the frontend menu with the infamous 30k error. Once the replacement Hybrid has started, clients will be able to rejoin the shard and continue where they left off. Hopefully, we’ll be able to implement it such that the clients receive an on-screen notification that the shard is available again and a single keypress will match them back to the shard (similar to how it works for client crash recovery).

We saw a lot of talk in the panel about which nodes have write authority within a shard, but what about write authority between separate shards? Are separate persistence databases maintained for separate shards or will the states of world items eventually be synchronized between shards even if they were left in different states (i.e., a door is left open on one shard and left closed on another - will one shard eventually write its state to the database, updating the state of the door on the other shard?) Generally speaking, each shard is its own unique copy of the universe, and any item within the shard will not share state with an item from a different shard as each shard has its own database. On the other hand, we do have a global database for player inventory data. This database is used to store any item in a player’s inventory, and items can transfer between shards if they first get stowed from a shard into an inventory and then unstowed into another shard.

Some features, such as player outposts or minable resources, implement special code that will replicate a global state to all shards, so an outpost may exist in multiple shards in parallel and slowly (relative to the speed of real-time play) replicate its state between shards. This isn’t an instant replication (a door opening/closing will not be replicated), however, a persistent state like a door being locked or unlocked may be replicated between shards.

It’s similar for minable resources: While each shard has a unique version of a minable rock, the overall amount will be replicated between shards, so when players start to mine a certain area, the global resource map for this area will be modified and the number of minable rocks in that location will be affected on all shards.

When you have a party moving (quantum travelling or other) from one object to another, and another DGS node, object, or instance is full, will T0 / Static Meshing create another DGS node pre-emptively? Or how will this be handled? With Static Server Meshing, everything is fixed in advance, including the number of server nodes per shard and which game server is responsible for simulating what locations. This does mean that if everyone in the shard decides to head to the same location, they will all end up being simulated by the same server node.

Actually, the worst case is if all the players decide to spread themselves out between all the locations assigned to a single server node. That way, the poor server will not only be trying to deal with all of the players, but will also need to have streamed in all of its locations. The obvious answer is to allow more servers per shard, so each server node has fewer locations it may need to stream in. However, because this is a static mesh and everything is fixed in advance, having more server nodes per shard also increases running costs. But we need to start somewhere, so the plan for the first version of Static Server Meshing is to start with as few server nodes per shard as we can while still testing that the tech actually works. Clearly that is going to be a problem if we allow shards to have many more players than the 50 we have right now in our single-server “shards”.

So, don’t expect player counts to increase much with the first version. That avoids the issue of a single server node becoming full before players get there since we’ll limit the maximum player count per shard based on the worst case. Once we’ve got this working, we’ll look at how the performance and economics work out and see how far we can push it. But to make further expansion economically viable, we’ll need to look at making Server Meshing more dynamic as soon as possible.

With the sheer volume of data travelling between the clients and server nodes, and the need for extreme low latency, can you describe or dig in to how you are managing that or what technologies you are using to help speed things up, or rather keep them from slowing down? The biggest factors currently affecting latency are server tick rate, client ping, entity spawning, and the latency of persistent services.

Server tick rate has the biggest effect of these and is related to the number of locations a game server is simulating. Server meshing should help with this by reducing the number of locations each game server needs to stream in and simulate. Fewer locations will mean a much lower average entity count per server and the savings can be used to increase the number of players per server.

Client ping is dominated by distance from the server. We see many players choosing to play on regions in entirely different continents. Some of our game code is still client authoritative, which means that players with high ping can adversely affect the play experience for everyone else. There’s not much we can do about this in the short term but it’s something we want to improve on after Server Meshing is working.

Slow entity spawning can cause latency by delaying when entities stream in on clients. This can cause undesirable effects, such as locations not fully appearing until minutes after quantum traveling to a location, falling through floors after respawning at a location, ships taking a long time to appear at ASOP terminals, changing player loadout, etc. The bottlenecks with this are mostly on the server. First, entities don’t get replicated to clients until they have been fully spawned on the server. Second, the server has a single spawn queue that it must process in order. Third, the more locations a server needs to stream in, the more spawning it has to do. To improve things, we have modified the server spawning code to make use of parallel spawn queues. Server meshing will also help, not only by cutting the load on spawn queues by reducing the number of locations a server has to stream in, but also because the Replication layer replicates entities to clients and servers simultaneously, allowing them to spawn in parallel.
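The benefit of parallel spawn queues can be illustrated with a toy timing model: with one in-order queue, the last entity waits behind everything before it; with independent queues, the wait is only the longest single queue. The costs and the round-robin assignment below are invented purely to show the effect:

```python
# Toy illustration of single vs parallel spawn queues. Spawn "costs" are
# arbitrary time units; the assignment policy is an invented example.

def single_queue_time(spawn_costs):
    # one in-order queue: total time to drain is the sum of all spawns
    return sum(spawn_costs)

def parallel_queue_time(spawn_costs, num_queues):
    # greedily place each spawn on the currently least-loaded queue;
    # queues drain independently, so total time is the longest queue
    queues = [0] * num_queues
    for cost in spawn_costs:
        queues[queues.index(min(queues))] += cost
    return max(queues)
```

For spawn costs `[4, 3, 2, 1]`, one queue takes 10 units to drain while two parallel queues finish in 5, halving the worst-case wait.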

We’re still using some of our legacy persistent services, adequate as designed but known to have performance and scalability issues under our demands. This can result in long waits when fetching persistent data from the services in order to know what to spawn, such as spawning a ship from an ASOP terminal, examining an inventory, changing player loadout, etc. Since full persistent streaming and Server Meshing will both dramatically increase the amount of data we need to persist, we knew we needed to do something about this. This is why Benoit and his team at Turbulent have completely reinvented how we will persist data in the form of EntityGraph, which is a highly scalable service built on top of a highly scalable database that is optimized for exactly the kind of data operations we perform. On top of that, we’re also developing the Replication layer, which acts like a highly scalable in-memory cache of the current state of all entities in a shard, eliminating the need for the majority of queries we’ve been sending to the legacy persistent services. That’s right, it’s going to be highly scalable services all the way down!

To help reduce/eliminate any additional latency the Replication layer may introduce, we’re building it to be event-driven rather than on a tick rate like a traditional game server. This means that as packets come in, it will immediately process them and send out the response and/or forward the information to relevant clients and game servers. Once work on the initial version of the Replication layer is complete (the Hybrid service), we’ll be doing an optimization pass to make sure it’s as responsive as possible. And, although this is ultimately a decision for DevOps, we’ll deploy them in the same data centers as the game servers themselves so the on-the-wire network latency due to the extra hop between the Replication layer and game server should be less than a millisecond. Oh, and did I mention the Replication layer will be highly scalable? That means if we detect the Replication layer causing latency hotspots in particular parts of the ‘verse, we will be able to reconfigure it to remedy the problem.
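The latency advantage of event-driven processing over a fixed tick can be made concrete with a small model: a tick-based server holds an incoming packet until the next tick boundary, while an event-driven service handles it on arrival. The 30 Hz tick rate below is an illustrative assumption, not Star Citizen's actual server rate:

```python
# Toy model of added queueing latency: tick-based vs event-driven.
# TICK_INTERVAL is an assumed 30 Hz server tick, for illustration only.

import math

TICK_INTERVAL = 33.3  # ms between ticks (assumed)

def tick_based_latency(arrival_ms):
    """A packet waits until the next tick boundary before being processed."""
    next_tick = math.ceil(arrival_ms / TICK_INTERVAL) * TICK_INTERVAL
    return next_tick - arrival_ms

def event_driven_latency(arrival_ms):
    """Processed immediately on arrival; no added queueing delay."""
    return 0.0
```

Under this model a packet arriving mid-tick can wait tens of milliseconds on a tick-based server, while the event-driven path adds no queueing delay at all, which is exactly why it suits a pure forwarding layer that runs no simulation.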

Disclaimer: The answers accurately reflect development’s intentions at the time of writing, but the company and development team reserve the right to adapt, improve, or change features and designs in response to feedback, playtesting, design revisions, or other considerations to improve balance or the quality of the game overall.
