That was probably it. Someone trying too hard.
It’s probably some kid writing a script for a homework assignment who doesn’t realize they need to throttle their requests to something like one per minute.
Have you played with fail2ban? It just scans log files looking for failed login attempts and manages temporary IP bans that way. Maybe a custom log parser could run every 5 minutes and find the IPs that have been way too trigger-happy recently.
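Not a fail2ban config, but for the custom-parser half of that idea, here's a rough Java sketch of what a cron job run every few minutes could do: tally requests per IP from an access log and print the ones worth a temporary ban. The log path, log format (client IP as the first field), class name and threshold are all assumptions, not anything this site actually uses.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: count requests per IP in an access log and flag the
// trigger-happy ones as candidates for a temporary ban.
public class TriggerHappyIps {
    private static final int THRESHOLD = 300; // max requests per scan window (made-up number)

    public static void main(String[] args) throws IOException {
        // Assumes a common/combined-format log where the client IP is the first field.
        List<String> lines = Files.readAllLines(Path.of("/var/log/nginx/access.log"));
        Map<String, Integer> hitsPerIp = new HashMap<>();
        for (String line : lines) {
            if (line.isBlank()) continue;
            String ip = line.split(" ", 2)[0];
            hitsPerIp.merge(ip, 1, Integer::sum);
        }
        hitsPerIp.forEach((ip, hits) -> {
            if (hits > THRESHOLD) {
                System.out.println(ip + " made " + hits + " requests - candidate for a temp ban");
            }
        });
    }
}

In practice you'd only read the chunk of log written since the last run, and feed the offending IPs to whatever does the actual banning (a fail2ban jail, iptables, firewall rules, etc.).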
And that, kids, is why we can't have nice things!
(Running great now... let's see how that continues.)
I hope my app isn't responsible; I didn't realize it would take off so much, so fast... Basically I have about 400 installed users now, increasing at about 40 users per day. The app fetches status about once every 3 minutes (it depends on the highest status).
Here's the code for determining when to fetch next:
public static long getStartOffset() {
    int maxStatus = 1;
    for (Status status : getStatusList()) {
        if (status.status > maxStatus) maxStatus = status.status;
    }
    if (maxStatus == 1) return 5 * 60000L;      // every 5 minutes
    else if (maxStatus == 2) return 4 * 60000L; // every 4 minutes
    else if (maxStatus == 3) return 3 * 60000L; // every 3 minutes
    else if (maxStatus == 4) return 2 * 60000L; // every 2 minutes
    else if (maxStatus == 5) return 60000L;     // every 1 minute
    else return 6L * 60000L;                    // every 6 minutes
}
The app can determine the highest status based on any combination of ladder/non-ladder and hardcore/softcore, or all of the above; it just depends on your preferences. I was getting hit with 1-star reviews yesterday when the site was timing out, so that's what brought me here to investigate.
I felt like ~3 minutes wasn't super invasive, but as the number of users increases, the number of requests will increase accordingly... I hope this won't be a problem.
The app itself: https://play.google.com/store/apps/deta ... onetracker || https://github.com/armlesswunder/d2rCloneTrackerAndroid
I know from other sites that they use rate limits to allow only something like 100 requests per minute per IP address; if an IP makes more requests than that, it gets blocked for some time. If too many requests to the DClone API are the reason for the slowness of the site, this might be a solution.
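Just to illustrate the idea (a site this size would normally do this at the webserver/CDN layer rather than in application code), here is a minimal fixed-window limiter in Java using the numbers from above; the class and method names are made up:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal fixed-window rate limiter: at most `limit` requests per IP per window.
public class IpRateLimiter {
    private final int limit;
    private final long windowMillis;
    private final Map<String, Window> windows = new ConcurrentHashMap<>();

    private static final class Window {
        long startedAt;
        int count;
    }

    public IpRateLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    // Returns true if the request is allowed, false if the IP is over the limit.
    public boolean allow(String ip) {
        long now = System.currentTimeMillis();
        Window w = windows.computeIfAbsent(ip, k -> new Window());
        synchronized (w) {
            if (now - w.startedAt >= windowMillis) { // window expired, start a new one
                w.startedAt = now;
                w.count = 0;
            }
            return ++w.count <= limit;
        }
    }

    public static void main(String[] args) {
        IpRateLimiter limiter = new IpRateLimiter(100, 60_000L); // 100 requests per minute
        System.out.println(limiter.allow("203.0.113.7")); // true until the 101st call this minute
    }
}

A fixed window is crude (it allows bursts at the window boundaries); a sliding window or token bucket is smoother, but the principle is the same.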
Not to be that guy piling on with the suggestions, but @Armlesswunder's use case may become pretty common -- everyone fetching from the API directly instead of from a cached third-party source.
The simplest way I've found to get around these situations is to have the API fetch from a materialized view instead of from a "live" table -- as long as the matview is continuously refreshed, there's no need for each individual request to kick off a bunch of table queries when the result can essentially be pre-generated: query the one view and done.
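To keep the examples in one language, here's a hedged sketch of that setup using JDBC against Postgres: a background task refreshes a hypothetical dclone_status_mv view once a minute, so the API handler only ever runs a cheap SELECT against the pre-computed view. The connection URL, credentials, view name and refresh interval are all placeholders, and it assumes the Postgres JDBC driver is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: refresh a materialized view on a schedule so API reads stay cheap.
public class MatviewRefresher {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(MatviewRefresher::refresh, 0, 60, TimeUnit.SECONDS);
    }

    private static void refresh() {
        String url = "jdbc:postgresql://localhost:5432/example_db"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            // CONCURRENTLY keeps the view readable while it is rebuilt
            // (requires a unique index on the view).
            stmt.execute("REFRESH MATERIALIZED VIEW CONCURRENTLY dclone_status_mv");
        } catch (SQLException e) {
            e.printStackTrace(); // swallow transient failures so the scheduler keeps running
        }
    }
}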
OP
Guys, I didn't communicate properly in that last post.
The huge amount of automated traffic I just blocked out was not to the API specifically, it was to the actual rendered tracker page itself.
The API just involves one HTTP request, against a query that is cached in lots of ways. It's about as light an impact as things get on the server, because it's just pulling a JSON response and nothing else. The tracker page, on the other hand, has to handle hundreds of different requests on each page load, which is much more resource intensive.
The spike in automated traffic from China was having its impact mostly through visits to the tracker page itself, not the API.
It's just that I happened to find out about it by looking at the API path analytics, which then made me look at the dclone tracker page path specifically, and I was like 'ooooh'.
The API itself is seeing steady growth in requests thanks to successful apps made by people like Armlesswunder (thanks for your input here btw) - but it is still not concerning to me at this time, and like others have pointed out, can be rate limited if necessary.
I'm hoping that by firewalling traffic from those Chinese ASNs to the dclone tracker page, I have improved the situation.
The API is sort of unrelated to this fix I guess, but of course it requires me to continue monitoring in case it becomes an issue in the future.
Hope that makes sense.
Pray with me now that the server stays stable for the next six hours and I don't have to pump another restart.
Well, so far, no issues.
You got it, just wanted to make sure my app isn't responsible!
Amazing work dude. I love the website. The functionality is flawless, and as soon as the off-and-on slowness is mostly (if not completely) dealt with, it's literally perfect. (See what I did there?) Keep up the amazing work man, massive respect and support from my end.
Unfortunately still having some issues this morning: 524/522 errors and SQL 2002 connection refused.
OP
Yep, it started acting up about 3 hours after the restart.
I chopped off another bit of code (the online filter in the search tool), then cleared the cache and sessions and restarted the rack again.
Let's see how it goes now. Bit by bit.
Thanks for staying so positive about it, I know OpEx is a PITA sometimes!
@Teebling cool, that sounds great, and I shouldn't have assumed the API was doing heavy work. Sounds like someone in China is just being silly then, web-scraping the UI rather than just hitting the endpoint you gave them...
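For anyone else building a tracker app off the JSON endpoint, here's a minimal sketch of the 'polite' approach being described: a single GET for the cached JSON, an identifying User-Agent, and a multi-minute poll interval. The URL and User-Agent string are placeholders, not the real documented path.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Sketch of a polite API client: one GET for the cached JSON payload,
// an identifying User-Agent, and a multi-minute wait between polls.
public class DcloneApiClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.invalid/dclone-api")) // placeholder endpoint
                .header("User-Agent", "my-dclone-widget/1.0 (contact@example.invalid)")
                .GET()
                .build();
        while (true) {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body()); // the cached JSON status payload
            Thread.sleep(5 * 60_000L); // poll every 5 minutes, nothing more aggressive
        }
    }
}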
OP
Dare I say it, but things seem to be better now? The time I would usually be hitting that restart button has already passed (3 hrs).
The response/load time right now has been stellar.
OP
How's it been, guys? Haven't had to restart for 16 hours now.
I would say (significantly) better but not quite perfect yet.
From my experience, there still seems to be something slowing things down that is specifically trade-related (or it just shows more there because there's more to load). The forums are relatively responsive (albeit somewhat slower than right after you last restarted), but where I'm really still seeing a slowdown (albeit nowhere near what it was) is specifically in trade lists. For example, looking at the trade stashes in my profile and switching from one to the next tends to load quite a bit slower again.
Either way, MUCH better than it was.
Ironically, as I edited/sent the above, it went back to taking ages, only to then result in a 503. So... take the responsive-forum bit with a grain of salt after all, I guess. Argh!
Yesterday was a lot better for my app than the day before; I only saw 2 timeout periods vs ~5.