
Posted
7 hours ago, bschoolprof said:

How many surls work for AWS?  It's fucking up the intertrons today.  Any idea when you dorks will fix this shit?


  • Hook 'Em 2
  • Like 2
  • Haha 3
Posted

Sweatergawd, more enterprises should take multi-region architecture seriously. At least a pilot light backup would let you hit a 30-minute RTO and just fail over out of us-east-1.
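Something like the sketch below is all the DNS side of a pilot light setup really needs, assuming Route 53 failover records with a health check — every name in it (zone ID, domains, health check ID) is a made-up placeholder, not anybody's real config:

```python
# Hedged sketch: health-checked DNS failover between a primary region and a
# pilot-light standby. All identifiers are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")

# PRIMARY record answers while its health check passes; SECONDARY is served
# automatically once the primary is marked unhealthy.
route53.change_resource_record_sets(
    HostedZoneId="Z_EXAMPLE_ZONE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "primary-use1",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "HealthCheckId": "hc-primary-example",
                    "ResourceRecords": [{"Value": "app-use1.example.com"}],
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "standby-usw2",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "app-usw2.example.com"}],
                },
            },
        ]
    },
)
```

The pilot light part is keeping a skeleton copy of the stack warm in the standby region, so the 30-minute RTO is mostly scale-up time rather than rebuild time.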

  • Hook 'Em 2
Posted

The AWS outage (simplified) in Haiku form:

 

It's not DNS

There's no way it's DNS

It was DNS

 

And this is a pretty good and easy-to-follow explainer, but the short version: a DNS failure caused DynamoDB to become unreachable in us-east-1, which made the shit hit the fan.
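If you want to see what that failure mode looks like from the client side, here's a minimal sketch — whether the fallback is actually safe depends on your table being a global/replicated table, and the region pair is just an illustrative assumption:

```python
# Hedged sketch: detect a DNS-level failure on the regional DynamoDB endpoint
# and fall back to another region. Only meaningful if your data is actually
# replicated there; the regions are examples, not recommendations.
import socket
import boto3

def region_resolves(region):
    """Return True if the DynamoDB endpoint for this region resolves in DNS."""
    try:
        socket.getaddrinfo(f"dynamodb.{region}.amazonaws.com", 443)
        return True
    except socket.gaierror:
        return False

def dynamodb_client(preferred="us-east-1", fallback="us-west-2"):
    region = preferred if region_resolves(preferred) else fallback
    return boto3.client("dynamodb", region_name=region)

if __name__ == "__main__":
    ddb = dynamodb_client()
    print("using region:", ddb.meta.region_name)
```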

 

 

  • Hook 'Em 3
  • Like 1
  • Haha 3
Posted
Just now, Captainant said:

The AWS outage (simplified) in Haiku form:

 

It's not DNS

There's no way it's DNS

It was DNS

 

And this is a pretty good and easy-to-follow explainer, but the short version: a DNS failure caused DynamoDB to become unreachable in us-east-1, which made the shit hit the fan.

 

 

Dude, it's always DNS.

  • Hook 'Em 4
  • Like 1
Posted

Can't tell if this is English - none of these words make any sense. I didn't learn any of them in 5th grade English class and they weren't on my SAT.

  • Hook 'Em 1
  • Haha 4
Posted

Based on my experience with IT personnel and every single software implementation I've been reliant on, I'm surprised this doesn't happen more often. All perfectly summarized by Michael Bolton in Office Space.

 


  • Hook 'Em 1
Posted

About 4 years ago it was server replacement time.  The discussion was cloud or stay local?  Local won out.  I was travelling back from the Kentucky game yesterday - so happy we are still on a local machine!

Posted

Lol on prem is not even hard anymore. You can legit just have a private cloud in less than 8 hours with licensing and the right software. 

Probably closer to a week with a bunch of open source and support contracts. 

IaaS is a solved as fuck problem and anyone still claiming the cloud is a competitive advantage or does something for TCO has rocks for brains and may not know how computers work. 

  • Hook 'Em 3
Posted
1 hour ago, jeevsie said:

About 4 years ago it was server replacement time.  The discussion was cloud or stay local?  Local won out.  I was travelling back from the Kentucky game yesterday - so happy we are still on a local machine!

TBH you're still subject to the same risk profile of single-source outage, you just (theoretically) have more control on the state of your single site. If you build a true multi-region architecture that doesn't hard-code its endpoints to region specific infra, it's pretty straightforward to recover.
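One concrete version of not hard-coding your endpoints, sketched in Python/boto3 — the env var name and default region are illustrative assumptions, not anybody's real setup:

```python
# Hedged sketch: build clients from configuration instead of baked-in
# region-specific endpoints, so failover is a config change, not a code change.
# APP_REGION is a hypothetical env var chosen for this example.
import os
import boto3

def make_clients(region=None):
    region = region or os.environ.get("APP_REGION", "us-east-1")
    session = boto3.session.Session(region_name=region)
    return {
        "s3": session.client("s3"),
        "dynamodb": session.client("dynamodb"),
        "sqs": session.client("sqs"),
    }

# Failing over becomes: redeploy (or restart) with APP_REGION=us-west-2.
clients = make_clients()
```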

1 minute ago, immamac said:

IaaS is a solved as fuck problem and anyone still claiming the cloud is a competitive advantage or does something for TCO has rocks for brains and may not know how computers work. 

The big advantage IaaS gives is that you can burst/scale with zero opportunity cost, whereas in your own DC you're paying for that hardware for its lifespan. But if your operational model is pretty steady state, then yeah cloud is just paying for someone else's margin. If you don't want to deal with physical infra management it's a great deal - but yeah my NAS can't be beat by cloud in terms of price per GB or cost of IO operations, even including cost of power
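Back-of-the-envelope version of that tradeoff — every number below is a placeholder you'd swap for your own quotes, not real pricing:

```python
# Hedged sketch: break-even math for steady-state on-prem vs on-demand cloud
# capacity. All prices, lifespans, and duty cycles are assumed placeholders.
SERVER_CAPEX = 12_000.0        # purchase + install, amortized below (assumed)
SERVER_LIFESPAN_MONTHS = 48
ONPREM_OPEX_PER_MONTH = 150.0  # power, cooling, space (assumed)
CLOUD_PER_HOUR = 0.60          # comparable instance, on-demand (assumed)
HOURS_PER_MONTH = 730

onprem_monthly = SERVER_CAPEX / SERVER_LIFESPAN_MONTHS + ONPREM_OPEX_PER_MONTH

for utilization in (0.10, 0.25, 0.50, 1.00):
    cloud_monthly = CLOUD_PER_HOUR * HOURS_PER_MONTH * utilization
    cheaper = "cloud" if cloud_monthly < onprem_monthly else "on-prem"
    print(f"{utilization:>4.0%} duty cycle: cloud ${cloud_monthly:7.2f} "
          f"vs on-prem ${onprem_monthly:7.2f} -> {cheaper}")
```

At a steady 100% duty cycle the amortized box wins; the burstier the workload, the more renting only the hours you use pays off.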

  • Hook 'Em 1
Posted
12 minutes ago, immamac said:

Lol on prem is not even hard anymore. You can legit just have a private cloud in less than 8 hours with licensing and the right software. 

Probably closer to a week with a bunch of open source and support contracts. 

IaaS is a solved as fuck problem and anyone still claiming the cloud is a competitive advantage or does something for TCO has rocks for brains and may not know how computers work. 

We (small bank) have bare metal on prem and bare metal in a datacenter in Austin and we still got fucked and were down for hours yesterday because my MSP had our fucking failover routed incorrectly so when this outage took out our primary ISP they couldn't reach our on prem hardware to fix it until our primary ISP came back up. Sometimes I just miss the days where I could unplug shit and plug it in somewhere else and shit would just work.

  • Hook 'Em 1
  • Haha 1
Posted
18 minutes ago, Captainant said:

TBH you're still subject to the same risk profile of single-source outage, you just (theoretically) have more control on the state of your single site. If you build a true multi-region architecture that doesn't hard-code its endpoints to region specific infra, it's pretty straightforward to recover.

The big advantage IaaS gives is that you can burst/scale with zero opportunity cost, whereas in your own DC you're paying for that hardware for its lifespan. But if your operational model is pretty steady state, then yeah cloud is just paying for someone else's margin. If you don't want to deal with physical infra management it's a great deal - but yeah my NAS can't be beat by cloud in terms of price per GB or cost of IO operations, even including cost of power

Bursting into the cloud for capacity is a very viable and undervalued thing people should absolutely be doing and architecting for, especially for DR or site replacement/moves.

The storage in AWS is only a ripoff if you have everything there; it's worth every penny if it's your DR strategy, you scale up quick around it, and your only standing $$ anchor is the storage and retrieval/replication once you get going.
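That "storage as the only standing cost" anchor can be as simple as replication into a bucket you only fan compute out from during an actual DR event. A rough sketch assuming S3 cross-region replication, with made-up bucket names, regions, and role ARN:

```python
# Hedged sketch: keep replicated data in S3 as the standing DR anchor, then
# scale compute around it only when you actually fail over. Bucket names,
# regions, and the IAM role ARN are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")      # primary region (assumed)
s3_dr = boto3.client("s3", region_name="us-west-2")   # DR region (assumed)

# Replication requires versioning enabled on both buckets.
s3.put_bucket_versioning(
    Bucket="prod-data-primary",
    VersioningConfiguration={"Status": "Enabled"},
)
s3_dr.put_bucket_versioning(
    Bucket="prod-data-dr",
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate the whole source bucket into the DR bucket.
s3.put_bucket_replication(
    Bucket="prod-data-primary",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-example",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Prefix": "",
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::prod-data-dr"},
            }
        ],
    },
)
```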

4 minutes ago, Brian Fantana said:

We (small bank) have bare metal on prem and bare metal in a datacenter in Austin and we still got fucked and were down for hours yesterday because my MSP had our fucking failover routed incorrectly so when this outage took out our primary ISP they couldn't reach our on prem hardware to fix it until our primary ISP came back up. Sometimes I just miss the days where I could unplug shit and plug it in somewhere else and shit would just work.

That's actually fucking crazy - what Colo doesn't have multiple redundant ISP links? 

  • Hook 'Em 2
Posted
10 minutes ago, Brian Fantana said:

We (small bank) have bare metal on prem and bare metal in a datacenter in Austin and we still got fucked and were down for hours yesterday because my MSP had our fucking failover routed incorrectly so when this outage took out our primary ISP they couldn't reach our on prem hardware to fix it until our primary ISP came back up. Sometimes I just miss the days where I could unplug shit and plug it in somewhere else and shit would just work.

Man we're (Credit Union) looking to change our core in the next 30 months, and our preferred choice is really trying to make the price point for hosting it offsite enticing. Shit like this scares me compared to what we have with everything on-site now.

  • Hook 'Em 1
Posted
11 minutes ago, immamac said:

That's actually fucking crazy - what Colo doesn't have multiple redundant ISP links? 

I have a primary (AT&T) and two backups. The failovers didn't fucking work because we are in the middle of a migration and my MSP apparently had some routes configured incorrectly. I was absolutely fuming yesterday.

~3 hours of downtime for a bank is a fucking disaster. Luckily our drive-through location down the street was still operational so I was able to send people down there to do their transactions.

  • Hook 'Em 2
Posted
13 minutes ago, SimonBolivar said:

Man we're (Credit Union) looking to change our core in the next 30 months, and our preferred choice is really trying to make the price point for hosting it offsite enticing. Shit like this scares me compared to what we have with everything on-site now.

Oof. Changing cores is not fun, I don't envy you that process.

  • Hook 'Em 1
Posted
1 minute ago, Brian Fantana said:

I have a primary (AT&T) and two backups. The failovers didn't fucking work because we are in the middle of a migration and my MSP apparently had some routes configured incorrectly. I was absolutely fuming yesterday.

Gotta love waiting until an outage to actually test a DR strategy 

  • Hook 'Em 1
Posted
1 minute ago, Captainant said:

Gotta love waiting until an outage to actually test a DR strategy 

Yeah. We did failover testing like 2 months ago but we haven't done it since we started migrating hardware. Oops. Fuck me for assuming my MSP knew what the fuck they were doing.

  • Fuck Around and Find Out 2
Posted
18 minutes ago, Brian Fantana said:

Oof. Changing cores is not fun, I don't envy you that process.

Yeah it'll be our leadership's second time, but we're tired of Fiserv's shit so it'll be worth it in the long run. 

Posted (edited)
8 minutes ago, immamac said:

For everyone's info Surly is tied into a 17 ISP redundant link backbone. 

Crazy to buy from a single or 2 providers. 

Yeah we have our wonderful rural Central Texas infrastructure to thank for that.

There are two fiber providers here which are our primary and primary backup. Tertiary backup is a fixed wireless connection and Starlink is the nuclear option.

Kill me.

e: I could add a couple of 5G solutions as additional nuclear options but those aren't really supposed to be used for business infrastructure.

Edited by Brian Fantana
  • Hook 'Em 1
  • Haha 1
Posted
12 minutes ago, immamac said:

For everyone's info Surly is tied into a 17 ISP redundant link backbone. 

Crazy to buy from a single or 2 providers. 

Thank goodness for letting us all know.  I feel better/worse now.  I still do not know what any of this means.

  • Haha 5
  • Drool 1
Posted
2 minutes ago, Incredulity said:

Can you ELI5 the "17 ISP redundant link backbone".  I assume that doesn't mean 17 ISP providers each with separate bills, or does it?

That's exactly what it means, more or less.

I think imma runs all his own bare metal unless I remember that wrong

  • Hook 'Em 1
Posted
37 minutes ago, SimonBolivar said:

Yeah it'll be our leadership's second time, but we're tired of Fiserv's shit so it'll be worth it in the long run. 

Until Fiserv buys your new core...

  • Rage+1 1
Posted

seems like bad planning

"A major Amazon Web Services (AWS) outage on October 20 had the unexpected side effect of causing chaos in bedrooms across the US, as owners of Eight Sleep’s $2,000+ ‘Pod’ mattress covers found their smart beds had no offline mode and were stuck at high temperatures and odd positions in the night."

https://www.dexerto.com/entertainment/aws-crash-causes-2000-smart-beds-to-overheat-and-get-stuck-upright-3272251/

 

  • Haha 1
Posted
8 minutes ago, Incredulity said:

Can you ELI5 the "17 ISP redundant link backbone".  I assume that doesn't mean 17 ISP providers each with separate bills, or does it?

17 ISPs trunk straight to the open internet, and I take 2 links off that open internet and have them redundant going to the one wire to the rack.

Any one of the 17 fibers can get cut and it's fine. Any one of the redundant links can go down and everything is fine. If someone cuts my physical link we are hard down (this is good and bad, as I can also disconnect from the internet simply by pulling one cable).

The 17 ISPs contract with the colocation provider and it's a mutually beneficial thing: they can sell capacity downstream and it also gets them a place to get out to the open internet backbone. These setups are typically exclusive to Tier 4 facilities.

Basically Surly isn't going down for you unless your DNS server is blacklisting us (Verizon did this for a hot second) or I fuck something up.

We had one network outage that lasted ~90 minutes a few years ago that was a huge deal, but I failed out into Azure because I've got DR set up outside and did that via DNS update.
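The "fail out via DNS update" move is roughly the sketch below. The actual record update is left as a stub since it depends entirely on who hosts your zone, and the hostnames/IPs are placeholders, not Surly's:

```python
# Hedged sketch of DNS-based failover: check where the name currently points,
# flip it to the DR address if the primary is unreachable, then wait for the
# change to show up. update_dns_record() is a hypothetical stand-in for
# whatever API your DNS host exposes.
import socket
import time

PRIMARY_IP = "203.0.113.10"   # colo rack (placeholder, TEST-NET address)
DR_IP = "198.51.100.20"       # DR site (placeholder, TEST-NET address)
HOSTNAME = "www.example.com"

def current_a_records(name):
    return {info[4][0] for info in socket.getaddrinfo(name, 443, socket.AF_INET)}

def primary_is_up():
    try:
        with socket.create_connection((PRIMARY_IP, 443), timeout=5):
            return True
    except OSError:
        return False

def update_dns_record(name, ip):
    raise NotImplementedError("call your DNS provider's API here")

if not primary_is_up() and DR_IP not in current_a_records(HOSTNAME):
    update_dns_record(HOSTNAME, DR_IP)
    # Resolvers honor the record's TTL, so keep it short if you plan to do this.
    while DR_IP not in current_a_records(HOSTNAME):
        time.sleep(30)
```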

  • Hook 'Em 2
Posted
7 minutes ago, immamac said:

We had one network outage that lasted ~90 minutes a few years ago that was a huge deal, but I failed out into Azure because I've got DR set up outside and did that via DNS update.

You would not have been impacted yesterday even if you were running on AWS - you have reasonable multi-site DR configured, and it's driven by a DNS update. That's the biggest thing a lot of businesses overlook, really just because it's easier/cheaper to design around a single site/region.

Posted
9 minutes ago, Captainant said:

You would not have been impacted yesterday even if you were running on AWS - you have reasonable multi-site DR configured, and it's driven by a DNS update. That's the biggest thing a lot of businesses overlook, really just because it's easier/cheaper to design around a single site/region.

I think a ton of people went down cuz of DynamoDB shitting the bed and a bunch of stuff they didn't even know was indirectly using it.
