Blizzard: "Outages, Rollbacks & How We're Fixing them"

Description

https://us.forums.blizzard.com/en/d2r/t ... ward/28164

3 hours ago:
Blizzard wrote:
Hello, everyone.

Since the launch of Diablo II: Resurrected, we have been experiencing multiple server issues, and we wanted to provide some transparency around what is causing these issues and the steps we have taken so far to address them. We also want to give you some insight into how we’re moving forward.

tl;dr: Our server outages have not been caused by a singular issue; we are solving each problem as it arises, with both short-term mitigations and longer-term architectural changes. A small number of players have experienced character progression loss–moving forward, any loss due to a server crash should be limited to several minutes. We don’t consider this a complete fix, and we are continuing to work on this issue. Our team, with the help of others at Blizzard, is working to bring the game experience to a place that feels good for everyone.

We’re going to get a little bit into the weeds here with some engineering specifics, but we hope that overall this helps you understand why these outages have been occurring and what we’ve been doing to address each instance, as well as how we’re investigating the overall root cause. Let’s start at the beginning.

The problem(s) with the servers:

Before we talk about the problems, we’ll briefly give you some context as to how our server databases work. First, there’s our global database, which exists as the single source of truth for all your character information and progress. As you can imagine, that’s a big task for one database, and it couldn’t cope on its own. So, to alleviate load and latency on our global database, each region–NA, EU, and Asia–has an individual database that also stores your character’s information and progress, and your region’s database periodically writes to the global one. Most of your in-game actions are performed against this regional database because it’s faster, and your character is “locked” there to maintain the integrity of the individual character record. The global database also has a backup in case the main one fails.
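
To picture the split between the regional and global databases, here is a toy Python sketch. The class and field names are invented for illustration and this is not the actual D2R schema; it only shows the flow described above: gameplay writes land in the regional store, which is periodically flushed to the global source of truth, where the character is locked to its region.

class RegionalDB:
    """Toy stand-in for a regional database (NA, EU, or Asia)."""
    def __init__(self, region):
        self.region = region
        self.characters = {}   # character name -> latest progress snapshot
        self.dirty = set()     # characters changed since the last global flush

    def save(self, name, progress):
        # Most in-game actions land here: closer to the player, and faster.
        self.characters[name] = progress
        self.dirty.add(name)

class GlobalDB:
    """Toy stand-in for the single source-of-truth database."""
    def __init__(self):
        self.characters = {}   # authoritative copy, updated periodically
        self.locks = {}        # character name -> region currently holding it

    def lock(self, name, region):
        self.locks[name] = region          # character is "locked" to that region

    def flush_from(self, regional):
        # Periodic write from a regional database up to the global one.
        for name in list(regional.dirty):
            self.characters[name] = regional.characters[name]
        regional.dirty.clear()

# Usage: play in NA; the regional database flushes to global on a timer in production.
global_db = GlobalDB()
na = RegionalDB("NA")
global_db.lock("MySorc", "NA")
na.save("MySorc", {"level": 42, "act": 5})
global_db.flush_from(na)
print(global_db.characters["MySorc"])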

With that in mind, to explain what’s been going on, we’ll be focusing on the downtimes experienced from Saturday, October 9, to now.

On Saturday morning Pacific time, we suffered a global outage due to a sudden, significant surge in traffic. This was a new threshold that our servers had not experienced at all, not even at launch. This was exacerbated by an update we had rolled out the previous day intended to enhance performance around game creation–these two factors combined overloaded our global database, causing it to time out. We decided to roll back that Friday update we’d previously deployed, hoping that would ease the load on the servers leading into Sunday while also giving us the space to investigate deeper into the root cause.

On Sunday, though, it became clear what we’d done on Saturday wasn’t enough–we saw an even higher increase in traffic, causing us to hit another outage. Our game servers were observing the disconnect from the database and immediately attempted to reconnect, repeatedly, which meant the database never had time to catch up on the work we had completed because it was too busy handling a continuous stream of connection attempts by game servers. During this time, we also saw we could make configuration improvements to our database event logging, which is necessary to restore a healthy state in case of database failure, so we completed those, and undertook further root cause analysis.
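
The post doesn’t say exactly how the reconnect storm was addressed, but the standard remedy for this failure pattern is exponential backoff with jitter, so that retries spread out instead of hammering a recovering database all at once. A minimal sketch, assuming a hypothetical connect callable (not Blizzard’s actual code):

import random
import time

def connect_with_backoff(connect, base=1.0, max_delay=60.0):
    """Retry a database connection, doubling the wait (with jitter) after each failure.

    `connect` is a hypothetical callable that raises ConnectionError on failure;
    a real game server would use its own database client here.
    """
    delay = base
    while True:
        try:
            return connect()
        except ConnectionError:
            # Full jitter: sleep a random amount up to the current cap, so thousands
            # of game servers don't all retry in the same instant.
            time.sleep(random.uniform(0, delay))
            delay = min(delay * 2, max_delay)

# Example with a stand-in connect function that succeeds immediately:
handle = connect_with_backoff(lambda: "db-handle")

Without the jitter, every server that lost its connection at the same moment would also retry at the same moment, which is exactly the continuous stream of connection attempts described above.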

The double-edged sword of Sunday’s outage was that because of what we’d dealt with on Saturday, we had created what was essentially a playbook on how to recover from it quickly. Which was good.

But because we came online again so quickly in a peak window of player activity, with hundreds of thousands of games within tens of minutes, we fell over again. Which was bad.

So we had many fixes to deploy, including configuration and code improvements, which we deployed onto the backup global database. This leads us into Monday, October 11, when we made the switch between the global databases. This led to another outage, when our backup database erroneously continued to run its backup process, meaning that it spent most of its time trying to copy from the other database when it should’ve been servicing requests from servers. During this time, we discovered further issues and made further improvements–we found a since-deprecated-but-taxing query we could eliminate entirely from the database, we optimized eligibility checks for players when they join a game, further alleviating the load, and we have further performance improvements in testing as we speak. We also believe we fixed the database-reconnect storms we were seeing, because we didn’t see them occur on Tuesday.

Then on Tuesday, we hit another concurrent-player high, with a few hundred thousand players in one region alone. This caused another incident of degraded database performance, the cause of which our database engineers are currently working on. We also reached out to other engineers around Blizzard to work on smaller fixes while our own team focused on core server issues, and we reached out to our third-party partners for assistance as well.

Why this is happening:

In staying true to the original game, we kept a lot of legacy code. However, one legacy service in particular is struggling to keep up with modern player behavior.

This service, with some upgrades from the original, handles critical pieces of game functionality, namely game creation/joining, updating/reading/filtering game lists, verifying game server health, and reading characters from the database to ensure your character can participate in whatever it is you’re filtering for. Importantly, this service is a singleton, which means we can only run one instance of it in order to ensure all players are seeing the most up-to-date and correct game list at all times. We did optimize this service in many ways to conform to more modern technology, but as we previously mentioned, a lot of our issues stem from game creation.
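
To make the shape of that service concrete, here is a deliberately stripped-down Python sketch (class and method names invented for illustration; health checks and the character reads are omitted). The point is simply that every create, join, and list request funnels through one instance, which is what keeps the game list consistent and also why rapid game cycling puts so much pressure on it.

class GameListService:
    """Single instance: all game servers in a region talk to this one service."""

    def __init__(self):
        self.games = {}                      # game name -> set of player names

    def create_game(self, name, creator):
        if name in self.games:
            raise ValueError("game name already in use")
        self.games[name] = {creator}

    def join_game(self, name, player):
        # The real service would also read the character from the database here
        # to verify eligibility (level range, ladder vs. non-ladder, and so on).
        self.games[name].add(player)

    def list_games(self, name_filter=""):
        # Everyone sees the same, up-to-date list because there is only one copy of it.
        return [g for g in self.games if name_filter.lower() in g.lower()]

svc = GameListService()
svc.create_game("pindle-001", "MySorc")
svc.join_game("pindle-001", "Fae")
print(svc.list_games("pindle"))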

We mention “modern player behavior” because it’s an interesting point to think about. In 2001, there wasn’t nearly as much content on the internet around how to play Diablo II “correctly” (Baal runs for XP, Pindleskin/Ancient Sewers/etc. for magic find, and so on). Today, however, a new player can look up any number of amazing content creators who can teach them how to play the game in different ways, many of them involving lots of database load in the form of creating, loading, and destroying games in quick succession. Though we did foresee this–with players making fresh characters on fresh servers, working hard to get their magic-finding items–we vastly underestimated the scope we derived from beta testing.

Additionally, we were saving to the global database too often: there is no need to do this as often as we were. We should really be saving you to the regional database, and only saving you to the global database when we need to unlock you–this is one of the mitigations we have put in place. Right now we are writing code to change how we do this entirely, so we will almost never save to the global database. This will significantly reduce the load on that server, but it is an architecture redesign that will take some time to build, test, and then implement.
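
The described mitigation amounts to changing when the global write happens: the regional database takes every save, and the global database is only written when the character is unlocked. A minimal Python sketch of that save path (names invented, not the actual implementation):

class CharacterStore:
    """Toy save path: regional writes are cheap and frequent; the global write is
    deferred until the character is unlocked from its region."""

    def __init__(self, regional, global_db):
        self.regional = regional
        self.global_db = global_db

    def save(self, name, progress):
        self.regional[name] = progress       # every in-game action lands here

    def unlock(self, name):
        # Only now is the authoritative copy updated, which is what takes the
        # constant write pressure off the global database.
        self.global_db[name] = self.regional[name]

store = CharacterStore(regional={}, global_db={})
store.save("MySorc", {"level": 43})
store.save("MySorc", {"level": 44})          # still only regional writes
store.unlock("MySorc")                       # a single global write at the end
print(store.global_db["MySorc"])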

A note about progress loss:

The progress loss some players have experienced is due to the way we do character locks both in the regional and global databases–we lock your character in the global database when you are assigned to a region (for example, when you play in the US region, your character is locked to the US region, and most actions are resolved in the US region’s database.)

The problem was that during a server outage, when the database was falling over, a number of characters were becoming stuck in the regional database, and we had no way of moving them over to the global database. At that time, we believed we had two options: we either unlock everyone with unsaved changes in the global database, therefore losing some progress due to an overwrite that would occur in the global database, or we bring the game down entirely for an indeterminate amount of time and run a script to write the regional data to the global database.

At the time, we acted on the former: we felt it was more important to keep the game up so people could play, rather than take the game down for a long period of time to restore the data. We are deeply sorry to any players who lost important progress or valuable items. As players ourselves, we know the sting of a rollback, and feel it deeply.

Moving forward, we believe we have a way to restore characters that doesn’t lead to any significant data loss–it should be limited to several minutes of loss, if any, in the event of a server crash.

This is better, but still not good enough in our eyes.

What we are doing about it:

Rate limiting: We are limiting the number of operations to the database around creating and joining games, and we know this is being felt by a lot of you. For example, for those of you doing Pindleskin runs, you’ll be in and out of a game and creating a new one within 20 seconds. In this case, you will be rate limited at a point. When this occurs, the error message will say there is an issue communicating with game servers: this is not an indicator that game servers are down in this particular instance, it just means you have been rate limited to reduce load temporarily on the database, in the interest of keeping the game running. We can assure you this is just a mitigation for now–we do not see this as a long-term fix.
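
The post doesn’t say which rate-limiting scheme is used; a common way to express “roughly one game creation every N seconds, with a small burst allowance” is a token bucket. A hypothetical Python sketch, with the specific numbers chosen purely for illustration:

import time

class TokenBucket:
    """Allow `rate` operations per second on average, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# e.g. one game creation every 20 seconds on average, with a burst of 3:
limiter = TokenBucket(rate=1 / 20, capacity=3)
for i in range(4):
    if limiter.allow():
        print(f"game {i}: created")
    else:
        # This is the point where the client shows the (somewhat misleading)
        # message about trouble communicating with game servers.
        print(f"game {i}: rate limited, please wait")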

Login Queue Creation: This past weekend was a series of problems, not the same problem over and over again. Due to a revitalized playerbase, the addition of multiple platforms, and other problems associated with scaling, we may continue to run into small problems. To diagnose and address them swiftly, we need to make sure the “herding”–large numbers of players logging in simultaneously–stops. To address this, we have people working on a login queue, much like you may have experienced in World of Warcraft. This will keep the population at a level the servers can safely handle at the time, so we can monitor where the system is straining and address it before it brings the game down completely. Each time we fix a strain, we’ll be able to increase the population caps. This login queue has already been partially implemented on the backend (right now, it looks like a failed authentication in the client) and should be fully deployed in the coming days on PC, with console to follow after.
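
Conceptually, a login queue is just a cap on concurrent players plus a first-in, first-out waiting line, with the cap raised as each bottleneck is fixed. A toy Python sketch (cap, player names, and method names are invented for illustration):

from collections import deque

class LoginQueue:
    """Admit players up to a population cap; everyone else waits in FIFO order."""

    def __init__(self, population_cap):
        self.population_cap = population_cap
        self.online = set()
        self.waiting = deque()

    def request_login(self, player):
        if len(self.online) < self.population_cap:
            self.online.add(player)
            return "connected"
        self.waiting.append(player)
        return f"queued (position {len(self.waiting)})"

    def on_logout(self, player):
        self.online.discard(player)
        if self.waiting:                         # let the next person in
            self.online.add(self.waiting.popleft())

    def raise_cap(self, new_cap):
        # As strains are fixed, the cap goes up and queued players are admitted.
        self.population_cap = new_cap
        while self.waiting and len(self.online) < self.population_cap:
            self.online.add(self.waiting.popleft())

queue = LoginQueue(population_cap=2)
print(queue.request_login("PlayerA"))   # connected
print(queue.request_login("PlayerB"))   # connected
print(queue.request_login("PlayerC"))   # queued (position 1)
queue.on_logout("PlayerA")              # PlayerC is admitted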

Breaking out critical pieces of functionality into smaller services: This work is partially in progress for things we can tackle in less than a day (some have already been completed this week), and it is also planned for larger projects, like new microservices (for example, a GameList service that is only responsible for providing the game list to players). Once critical functionality has been broken out, we can look into scaling up our game management services, which will reduce the amount of load.

We have people working incredibly hard to manage incidents in real time, diagnose issues, and implement fixes–not just on the D2R team, but across Blizzard. This game means so much to all of us. A lot of us on the team are lifelong D2 players–we played during its initial launch back in 2001, some of us are part of the modding community, and so on. We can assure you that we will keep working until the game experience feels good to us not only as developers, but as players and members of the community ourselves.

Please continue to submit your feedback to the Diablo II: Resurrected forum, report your bugs to our Bug Report forum, and for troubleshooting assistance, visit our Technical Support forum. Thank you for your ongoing communication with us across all channels–it’s invaluable to us as we work on these issues.

The Diablo community team will keep you updated on our progress via the forums.

The Diablo II: Resurrected Dev Team
Description by BillyMaysed
Fae wrote:
Well, it's more than I expected Blizzard to communicate. Maybe they're learning? Would be nice.