
Shit's Borked



42 minutes ago, immamac said:

Chicago latency from Texas is the lowest, and it's geographically far enough away that the same event is unlikely to have affected both. 

Florida has too many hurricanes. North Carolina didn't have any capacity; didn't wanna do Arizona, but could look into it. California or any west region is too high latency for me, same with the full-on northeast regions. 

Surly is on the backbone, which is one of the reasons we don't have to have a CDN to serve content: Surly gets to the open web in one hop, through the redundant interconnect switch. Surly for users in Texas is actually faster than the majority of CDNs. For users outside of Texas it's likely just as fast. When I travel I test it pretty thoroughly, and the latency and bandwidth are totally fine. 
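
(For anyone who wants to sanity-check the latency claim from their own connection, here's a minimal sketch using plain TCP connect timing; the hostname and port are just guesses at the public-facing endpoint, nothing official.)

```python
import socket
import time

# Rough TCP connect-time check from wherever you happen to be.
# Hostname/port are assumed, adjust as needed.
host, port, samples = "surlyhorns.com", 443, 5
times = []
for _ in range(samples):
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    times.append((time.perf_counter() - start) * 1000)
print(f"{host}: min {min(times):.1f} ms, avg {sum(times)/len(times):.1f} ms")
```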

I was only considering locations on the backbone, not buried behind 2-3 interconnects to get there. 

I may set up a data replication service off the backbone to make DR trivial; the hard part right now is dealing with data syncing between sites, as replication is one-way by design. 

This was a catastrophic event, and all system durability was proven pretty well, which I am happy about. There has never been a time in this current datacenter where we have had complete power loss. We had a network hiccup a few years ago due to a misconfigured switch, but that was a blip in comparison to today's real-deal, full-dark outage. 

You're good, dude. My post wasn't intended as criticism of you or this site. I just wanted to diss Chicago.

Edited by Braff Zacklin

Non-nerd version: I only put equipment I own on the red dots, which are very specific facilities. Choosing to do it this way makes it cost-effective and the most robust and reliable deployment possible. 
[attached map image]

@terriblemaps

57 minutes ago, immamac said:

Non-nerd version: I only put equipment I own on the red dots, which are very specific facilities. Choosing to do it this way makes it cost-effective and the most robust and reliable deployment possible. 


Why do you hate South Dakota and New Mexico? The Surly population in those meccas of assholes is massive. 


1 hour ago, immamac said:

I was only considering locations on the backbone, not buried behind 2-3 interconnects to get there. 

I may set up a data replication service off the backbone to make DR trivial; the hard part right now is dealing with data syncing between sites, as replication is one-way by design. 

[attached image]


5 hours ago, Hornius Emeritus said:


Don't worry about it. Shit happens. You don't owe us anything.

However, I'm about to get evil on those assholes in the 2025 recruiting thread who insist on discussing whether Katy is in Houston. Jeeze.

Of course Katy is Houston. 


5 hours ago, immamac said:

Non-nerd version: I only put equipment I own on the red dots, which are very specific facilities. Choosing to do it this way makes it cost-effective and the most robust and reliable deployment possible. 


[reaction GIF]


8 hours ago, Pato del Muerto said:

@immamac, not sure if you know or care about any metrics for revenue, but it looks like view count isn't working, at least on threads started today. They all show 0 views even though they have replies. 

I'm aware. I'm flipping things back around, and it'll probably be fixed later today. 


Was Texassports.com being all borked yesterday caused by the same issue? It was a really weird feeling when two of my most accessed sites were down. And I’m currently in the middle of The Last Stand book series which is about an EMP blast, so my mind went to some weird places for a bit.


44 minutes ago, SquishMitten said:

Was Texassports.com being all borked yesterday caused by the same issue? It was a really weird feeling when two of my most accessed sites were down. And I’m currently in the middle of The Last Stand book series which is about an EMP blast, so my mind went to some weird places for a bit.

No idea, but yes, this took a lot of sites down. It's not some shitty little third-rate colo.


On 5/26/2024 at 11:22 AM, Buzzrock said:

You should try the cloud. Clouds don’t get hit by tornadoes. 

 

On 5/26/2024 at 11:29 AM, Gatorubet said:

[attached image]

The cloud is just someone else's computers.

FIFY.

But, @immamac, have you considered going active/active and doubling up the A records in DNS so you can distribute traffic accordingly? Or would the sync times between the two DCs just kill that idea?
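
(For reference, doubling up the A records is just plain round-robin DNS. A minimal sketch of what clients would see, assuming surlyhorns.com is the public hostname:)

```python
import socket

# With two A records published for the same name (round-robin DNS),
# resolvers hand back both addresses and clients spread themselves across them.
for *_, sockaddr in socket.getaddrinfo("surlyhorns.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])
```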


28 minutes ago, PvilleStang said:

 

FIFY.

But, @immamac, have you considered going active/active and doubling up the A records in DNS so you can distribute traffic accordingly? Or would the sync times between the two DCs just kill that idea?

Why? It introduces so much complexity for basically no gain. 

 

If Surly produced millions a year in revenue and did hundreds of financial transactions a second, I'd do some of this stuff; it's just not really worth it or feasible at this level of revenue. 


I didn't realize Surly was rawdogging it to the internet with no CDN; that definitely makes the failover and failback much more complex. But on the other hand, if you rarely ever use the DR resources because of the hassle to change over, what's the point in maintaining them? Why not just do change data capture for the posts to the off-site copy and rebuild when needed?
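
(Rough sketch of the change-data-capture idea: tail the posts table by id and ship new rows off-site. The table and column names are made up, and this assumes a MySQL-style backend, which may not match how the board actually stores things.)

```python
import json
import time

import pymysql  # assumes a MySQL-style backend; purely illustrative

# Naive CDC loop: tail the posts table by auto-increment id and ship anything
# new to the off-site copy. Table/column names below are hypothetical.
def tail_posts(conn, last_id, ship):
    with conn.cursor() as cur:
        cur.execute(
            "SELECT post_id, topic_id, author, post_body, post_date "
            "FROM forum_posts WHERE post_id > %s ORDER BY post_id",
            (last_id,),
        )
        for row in cur.fetchall():
            ship(json.dumps(row, default=str))  # e.g. append to a log that gets shipped off-site
            last_id = row[0]
    return last_id

# usage sketch:
# conn = pymysql.connect(host="db-host", user="cdc", password="...", database="forum")
# last_seen = 0
# while True:
#     last_seen = tail_posts(conn, last_seen, print)
#     time.sleep(15)
```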

 


34 minutes ago, Captainant said:

I didn't realize Surly was rawdogging it to the internet with no CDN; that definitely makes the failover and failback much more complex. But on the other hand, if you rarely ever use the DR resources because of the hassle to change over, what's the point in maintaining them? Why not just do change data capture for the posts to the off-site copy and rebuild when needed?

 

Why do that when you can just replicate off-site? A DR site is exactly that: a site that can scale up in the event of DR to serve traffic, and it still has to have a complete copy of the data there. The data hotel I was talking about setting up, a tertiary site that never serves anything, is what you are suggesting, which isn't a bad idea; I just don't know if it fucking matters. 

The reality is it wouldn't have been worth flipping to DR, then having to wait till night to take the downtime and switch back to main, and then having circular replication or an extra redirect for another day because DNS propagation is potato even with a 60s TTL. I'd rather just take the downtime up front if it's only an hour or two than spend 12-16 hours of my own time making sure shit goes well and is fine.
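
(The "propagation is potato" part is easy to watch for yourself; here's a small sketch with dnspython, asking a few public resolvers what they're still caching. Hostname is assumed.)

```python
import dns.resolver  # pip install dnspython

# Ask a few public resolvers for the A record and the TTL they have left.
# After a cutover, these won't all agree for a while, even with a 60s TTL.
for server in ("8.8.8.8", "1.1.1.1", "9.9.9.9"):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    answer = resolver.resolve("surlyhorns.com", "A")
    print(server, [rr.address for rr in answer], "ttl left:", answer.rrset.ttl)
```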

Posts are only one part of the whole thing that makes the site go. 

Check this out. I had to use a Dallas DB server, which I replicate to just in case some shit happens in the rack, to serve traffic for the day. This is what that looks like: sustained 300+ Mbps and averaging well into the 400+ Mbps range, just for the database. I think people constantly fail to recognize how big the site is.

 

 

[attached bandwidth graph]


1 hour ago, immamac said:

Why do that when you can just replicate off-site? A DR site is exactly that: a site that can scale up in the event of DR to serve traffic, and it still has to have a complete copy of the data there. The data hotel I was talking about setting up, a tertiary site that never serves anything, is what you are suggesting, which isn't a bad idea; I just don't know if it fucking matters. 

The reality is it wouldn't have been worth flipping to DR, then having to wait till night to take the downtime and switch back to main, and then having circular replication or an extra redirect for another day because DNS propagation is potato even with a 60s TTL. I'd rather just take the downtime up front if it's only an hour or two than spend 12-16 hours of my own time making sure shit goes well and is fine.

Posts are only one part of the whole thing that makes the site go. 

Check this out. I had to use a Dallas DB server, which I replicate to just in case some shit happens in the rack, to serve traffic for the day. This is what that looks like: sustained 300+ Mbps and averaging well into the 400+ Mbps range, just for the database. I think people constantly fail to recognize how big the site is.

 

 


Is that 300-500 Mbps for just the DB server, or is that the front-door app server? 

Sorry for all the probing questions lol. I have a customer that puts a ton of work into their DR but almost never fails over because of DNS headaches like you describe. They're trying to decide if it's really worth the power and man-hours of maintaining a copy in a DR location, given they've never actually used it in the last decade lol. 

It's crazy that the persistence isn't maintained in the DB or ES layer, where you could more easily keep a synchronized copy in a cluster config. I don't know the ins and outs of the board software, but it seems surprisingly stateful and brittle, in that a new instance of the forum couldn't just read/write to the same data layer. 


A simple script could automate the DNS failover; then you manually fail it back.  
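
(Something like this, assuming your DNS provider exposes an HTTP API; the endpoint, zone/record IDs, and token below are all hypothetical placeholders, not any real provider's API.)

```python
import os

import requests

# Hypothetical DNS provider API; swap in your provider's real endpoint and IDs.
API = "https://api.example-dns.invalid/v1"
TOKEN = os.environ["DNS_API_TOKEN"]
ZONE_ID, RECORD_ID = "zone-id-here", "record-id-here"

PRIMARY_IP = "203.0.113.10"  # main site (documentation-range address)
DR_IP = "203.0.113.20"       # DR site

def point_site_at(ip: str) -> None:
    """Flip the site's A record to the given address with a short TTL."""
    resp = requests.put(
        f"{API}/zones/{ZONE_ID}/records/{RECORD_ID}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"type": "A", "name": "surlyhorns.com", "content": ip, "ttl": 60},
        timeout=10,
    )
    resp.raise_for_status()

# failover:  point_site_at(DR_IP)
# fail back: point_site_at(PRIMARY_IP)
```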

I also think you should mirror almost everything to the DR site, but replace all embedded pics with Billy_fellating_a_Thujone_donger.gif.

 

In all seriousness, what is the size of Surly in its entirety on disk, and do you tier the data storage, or is it all on one storage device?


5 minutes ago, PvilleStang said:

A simple script could automate the DNS failover; then you manually fail it back.  

I also think you should mirror almost everything to the DR site, but replace all embedded pics with Billy_fellating_a_Thujone_donger.gif.

 

In all seriousness, what is the size of Surly in its entirety on disk, and do you tier the data storage, or is it all on one storage device?

It's on an array, and each web node has a local NVMe cache serving it up for the internal CDN (media.surlyhorns.com). So it's tiered for access, with cache misses getting pulled from a large disk array with plenty of DRAM cache, and it's also replicated in real time (block-level) plus every 15 minutes to DR on the media side. It's measured in high single-digit TB, but I'd say the hot data is in the 200-300 GB range. 
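
(Roughly the read-through shape described above, sketched in Python rather than whatever is actually serving the media; the paths are made up.)

```python
import shutil
from pathlib import Path

# Read-through cache: serve from local NVMe if the object is there, otherwise
# pull it from the big disk array and keep a copy. Paths are hypothetical.
NVME_CACHE = Path("/nvme/cache/media")
ARRAY = Path("/mnt/array/media")

def fetch_media(rel_path: str) -> Path:
    cached = NVME_CACHE / rel_path
    if cached.exists():
        return cached                              # cache hit: straight off NVMe
    origin = ARRAY / rel_path                      # cache miss: go to the array
    cached.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(origin, cached)                   # warm the cache for next time
    return cached
```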

The databases run on all flash. 

The problem isn't the mechanism of failover; that's all automated. It's DNS propagation paired with intentionally one-way replication for active/passive. I have active/active intra-rack. Planned database failovers are sub-10ms; unplanned ones I have not fully automated, because I'd rather take downtime than risk split-brain, which is a bitch to remediate. 

Uptime isn't a priority because we aren't pumping revenue every second on this site. Data is the priority. 


23 minutes ago, thunderlounge said:

Worked ok for me a few minutes ago. 

I think it's related to new banner ads that are appearing at the bottom.  Being Burnt Ends, I'm guessing you don't get those.  Perhaps it's time to jump back in.


