
Facebook AKA “Hydra” Thread


Hugo Stiglitz

Recommended Posts

4 minutes ago, Goredho said:

HAHAHAHA, WE TOLD YOU TO NOT FUCK WITH CONSERVATIVE DISINFORMATION AND HATE SPEECH, BIG TECH!  Parler.com still resolving like a BOSS!

 

$ dig parler.com

; <<>> DiG 9.10.6 <<>> parler.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57087
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;parler.com.                    IN      A

;; ANSWER SECTION:
parler.com.             300     IN      A       185.203.41.218

;; AUTHORITY SECTION:
parler.com.             22938   IN      NS      ns1.parler.com.
parler.com.             22938   IN      NS      ns2.parler.com.

;; ADDITIONAL SECTION:
ns1.parler.com.         22938   IN      A       216.246.208.246
ns2.parler.com.         22938   IN      A       216.246.208.247

;; Query time: 229 msec
;; SERVER: 192.168.88.1#53(192.168.88.1)
;; WHEN: Mon Oct 04 13:55:37 MDT 2021
;; MSG SIZE  rcvd: 112

 

Not just Parler--Gab has been scooping up traffic as well.

  • Haha 1

11 minutes ago, Captainant said:

I've heard of poisoning BGP routes, but never something like this that completely takes ALL of their routes off the internet. They must have some compromised credentials getting exploited or something. 

I just literally don't understand how you can run such a large IT enterprise and have EVERYTHING in EVERY region go down like this. Either they're running a clown show of an IT org, or they got pwned harder than any tech firm has ever been pwned before.

Yea, the scope of this is pretty fucking crazy. 10 or 11 years ago I did bring down the City of Plano's network for about 2 minutes by causing a spanning-tree loop, so this almost hits close to home lol

  • Hook 'Em 1
  • Haha 1
  • Drool 1

40 minutes ago, Francisco 2.0 said:

 

Sort of.  Best analogy I can come up with at the moment.

 

Facebook self-deleted all of its street addresses, zip codes, phone numbers, etc this morning.  So the mail, delivery trucks and customers no longer know how to get to their various properties all over the world.

Further, because Facebook did such a thorough job of deleting all of this information, the only way to fix their problems is to show up in person and start creating their addresses, zip codes and phone numbers all over again.  Which takes time.

 

 

facebook is a lot more like AOL than i had thought. 

  • Hook 'Em 1

1 minute ago, Longhorn_Fan68 said:

hearing rumblings of Twitter and Gmail being affected too, but currently both work for me

Both work for me, but Downdetector is reporting a lot of issues for sites and for carriers like AT&T, Verizon, T-Mobile, etc. Very interesting shit happening... Glad I've at least got my media library on my NAS if the internet gets borked lol


9 minutes ago, Longhorn_Fan68 said:

hearing rumblings of Twitter and Gmail being affected too, but currently both work for me

Those sites are being affected because there are so many failed lookups for Facebook that it's causing a small DDoS "attack" on the DNS servers for other platforms and services.  We noticed that at work this morning, but it cleared up around 11:30.

Cloudflare's CTO said earlier that they had to bring in extra teams to boost their DNS service's capacity to cope with the fallout from Facebook's disappearance.
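
For anyone who wants to see what those failed lookups looked like from the outside, here's the kind of spot-check you could run against the big public resolvers (just a sketch -- exact results will vary as things recover):

$ dig @1.1.1.1 facebook.com A +short   # Cloudflare's resolver -- was coming back SERVFAIL/empty during the outage
$ dig @8.8.8.8 facebook.com A +short   # Google's resolver -- same story
$ dig @1.1.1.1 twitter.com A +short    # unrelated names still resolved, just slower under all the retry traffic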

Edited by Francisco 2.0
  • Hook 'Em 3

https://www.nytimes.com/live/2021/10/04/business/news-business-stock-market#facebook-down

 

Quote

Facebook and its family of apps, including Instagram and WhatsApp, went down at the same time on Monday, taking out a vital communications platform used by more than three billion people around the world and adding heat to a company already under intense scrutiny.

Facebook’s apps — which include Facebook, Instagram, WhatsApp, Facebook Messenger and Oculus — began displaying error messages around 11:40 a.m. Eastern time, users reported. Within five minutes, Facebook had disappeared from the internet. Hours later, the sites were still not functioning, according to Downdetector, which monitors web traffic and site activity.

 

Quote

Technology outages are not uncommon, but to have so many apps go dark from the world’s largest social media company at the same time was highly unusual. Facebook’s last significant outage was in 2019, when a technical error affected its sites for 24 hours, in a reminder that even the most powerful internet companies can still be crippled by a snafu.

 

Quote

This time, the cause of the outage remained unclear. Several hours into the incident, Facebook’s security experts were still trying to identify the root issue, according to an internal memo and employees briefed on the matter. Two members of its security team, who spoke on the condition of anonymity because they were not authorized to speak publicly, said it was unlikely that a cyberattack had taken place because one hack was unlikely to affect so many apps at once.

 

Spoiler

Security experts said the problem most likely stemmed instead from a misconfiguration of Facebook’s server computers, which were not letting people connect to its sites like Instagram and WhatsApp. When such errors occur, companies frequently roll back to their previous configuration, but Facebook’s problems appeared to be more complex and to require some manual updating.

Andy Stone, a Facebook spokesman, posted on Twitter, “We’re aware that some people are having trouble accessing our apps and products. We’re working to get things back to normal as quickly as possible, and we apologize for any inconvenience.”

The outage caused outrage and mirth online, as Facebook and Instagram users turned to Twitter to lament and poke fun at their inability to use the apps. The hashtag #facebookdown also quickly started trending.

But the outage was a blow to small businesses and others that rely on the platform to conduct outreach and advertising and to millions who use Facebook and its apps to communicate with friends and family across the world.

Gamers who livestream their play on Facebook Gaming and are paid by viewers and subscribers said on Monday that they were trying to find alternatives.

“You definitely feel out of touch, and it’s scary, too,” said Douglas Veney, a gamer in Cleveland who goes by GoodGameBro. He said he had hoped to post videos and other content on Facebook for his followers ahead of a planned live stream Monday night. “I have 300,000 followers there — you just cross your fingers that nothing’s gone when it comes back.”

Mr. Veney, 33, has a job outside of streaming as well, but he said he knew of other streamers living paycheck to paycheck who were making the jump to other sites to be able to keep making money.

“It’s hard when your primary platform for income for a lot of people goes down,” he said.

Inside Facebook, workers scrambled because their internal systems also stopped functioning. The company’s global security team “was notified of a system outage affecting all Facebook internal systems and tools,” according to an internal memo sent to employees. Those tools included security systems, an internal calendar and scheduling tools, the memo said.

Employees said they had trouble making calls from work-issued cellphones and receiving emails from people outside the company. Facebook’s internal communications platform, Workplace, was also taken out, leaving many unable to do their jobs. Some turned to other platforms to communicate, including LinkedIn and Zoom as well as Discord chat rooms.

Some Facebook employees who had returned to working in the office were also unable to enter buildings and conference rooms because their digital badges stopped working. Security engineers said they were hampered from assessing the outage because they could not get to server areas.

Facebook’s global security operations center determined the outage was “a HIGH risk to the People, MODERATE risk to Assets and a HIGH risk to the Reputation of Facebook,” the company memo said.

A small team of employees was soon dispatched to Facebook’s Santa Clara, Calif., data center to try a “manual reset” of the company’s servers, according to an internal memo.

Several Facebook workers called the outage the equivalent of a “snow day,” a sentiment that was publicly echoed by Adam Mosseri, the head of Instagram.

Facebook has already been dealing with plenty of scrutiny. The company has been under fire from a whistle-blower, Frances Haugen, a former Facebook product manager who amassed thousands of pages of internal research and has since distributed them to the news media, lawmakers and regulators. The documents revealed that Facebook knew of many harms that its services were causing.

Ms. Haugen, who revealed her identity on Sunday online and on “60 Minutes,” is scheduled to testify on Tuesday in Congress about Facebook’s impact on young users.

In Facebook’s early days, the site experienced occasional outages as millions of new users flocked to the network. Over the years, it spent billions of dollars to build out its infrastructure and services, spinning up enormous data centers in cities including Prineville, Ore., and Fort Worth, Texas.

The company has also been trying to integrate the underlying technical infrastructure of Facebook, WhatsApp and Instagram for several years.

John Graham-Cumming, the chief technology officer of Cloudflare, a web infrastructure company, said in an interview that Monday’s problem was most likely a misconfiguration of Facebook’s servers.

Computers convert websites such as facebook.com to numeric Internet Protocol addresses, through a system that is likened to a phone’s address book. Facebook’s issue was the equivalent of removing people’s phone numbers from under their names in their address book, making it impossible to call them, he said. Cloudflare provides some of the systems that support Facebook’s internet infrastructure.

“It was as if Facebook just said, ‘Goodbye, we’re leaving now,’” Mr. Graham-Cumming said.

 


55 minutes ago, Francisco 2.0 said:

 

Sort of.  Best analogy I can come up with at the moment.

 

Facebook self-deleted all of its street addresses, zip codes, phone numbers, etc this morning.  So the mail, delivery trucks and customers no longer know how to get to their various properties all over the world.

Further, because Facebook did such a thorough job of deleting all of this information, the only way to fix their problems is to show up in person and start creating their addresses, zip codes and phone numbers all over again.  Which takes time.

 

 

A slightly more technical explanation...

Basically, everything Facebook was removed from global DNS this morning.  DNS is the system that lets you type facebook.com into a browser while, under the covers, the browser talks to a computer at 142.250.69.238 (an example) that is mapped to facebook.com at a nameserver.  So when you type facebook.com into the browser, what's really happening under the covers is that your browser contacts nameserver(s), which respond with the mapped IP address (142.250.69.238 or whatever).  Your browser then talks to the computer at 142.250.69.238, and you get a response back in your browser with the Facebook web page.

Right now, if you type facebook.com into a browser, the first part of that process is failing.  You contact nameservers asking for the IP address mapped to facebook.com, the nameservers are like, "facebook.com?  Never heard of it," and your browser can't find an IP address to talk to.  Your browser then reports this as:

[screenshot: browser "server not found" error page for facebook.com]
I bet all their servers are still up and reachable if you had their IP addresses like 142.250.69.238.
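
If anybody wants to poke at those two steps by hand, something like this shows the flow (the 157.240 address below is a placeholder, not an answer I actually got):

$ dig +short facebook.com A          # step 1: ask the nameservers which IP is mapped to the name
157.240.XX.XX                        #   (placeholder -- on a normal day this comes back with a real address)
$ curl -sI https://facebook.com/     # step 2: your browser (or curl) talks to that address and gets the page
                                     #   today, step 1 returns nothing, so step 2 never even starts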


3 minutes ago, Goredho said:

A slightly more technical explanation...

Basically, everything Facebook was removed from global DNS this morning.  DNS is the system that lets you type facebook.com into a browser while, under the covers, the browser talks to a computer at 142.250.69.238 (an example) that is mapped to facebook.com at a nameserver.  So when you type facebook.com into the browser, what's really happening under the covers is that your browser contacts nameserver(s), which respond with the mapped IP address (142.250.69.238 or whatever).  Your browser then talks to the computer at 142.250.69.238, and you get a response back in your browser with the Facebook web page.

Right now, if you type facebook.com into a browser, the first part of that process is failing.  You contact nameservers asking for the IP address mapped to facebook.com, the nameservers are like, "facebook.com?  Never heard of it," and your browser can't find an IP address to talk to.  Your browser then reports this as:

[screenshot: browser "server not found" error page for facebook.com]
 

I bet all their servers are still up and reachable if you had their IP addresses like 142.250.69.238.

Actually it's far worse than simple DNS issues - Facebook doesn't have any BGP routes to their servers/IP range(s) - so even if you had IP addresses, there's no route across the internet to those IP addresses.

So whereas removing DNS names means taking the name out of the phone book, removing the BGP routes means there's no roads to get there, even if you know where you're going.

This is a really serious fuckup and/or attack - never seen anything like this before for a major organization, never mind such a tech-focused one like FB.
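
You can actually see the "no roads" part for yourself by asking one of the public route servers whether anybody is announcing a Facebook prefix. Rough sketch -- the prefix and AS number are from memory, so treat them as illustrative:

$ telnet route-views.routeviews.org    # public route server, log in as "rviews"
route-views> show ip bgp 157.240.0.0/16
% Network not in table                 # what you'd expect right now -- nobody is announcing it;
                                       # on a normal day this shows paths ending in AS 32934 (Facebook)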

  • Hook 'Em 2

18 minutes ago, Francisco 2.0 said:

https://www.nytimes.com/live/2021/10/04/business/news-business-stock-market#facebook-down

Quote

Facebook and its family of apps, including Instagram and WhatsApp, went down at the same time on Monday, taking out a vital communications platform used by more than three billion people around the world and adding heat to a company already under intense scrutiny.

 

[reaction GIF]


Just now, Captainant said:

Actually it's far worse than simple DNS issues - Facebook doesn't have any BGP routes to their servers/IP range(s) - so even if you had IP addresses, there's no route across the internet to those IP addresses.

So whereas removing DNS names means taking the name out of the phone book, removing the BGP routes means there's no roads to get there, even if you know where you're going.

This is a really serious fuckup and/or attack - never seen anything like this before for a major organization, never mind such a tech-focused one like FB.

Hadn't seen the Border Gateway Protocol withdrawals part, and yeah, that is seriously no bueno for Facebook.  The only way this is a fuckup is if Facebook and all of its acquisitions that originated as completely separate entities shared the same centralized network infrastructure and config.  Maybe they brought all that under one umbrella.  If they did, they just found out why that may not have been the best idea.


3 minutes ago, kevwun said:

I am confused how both the DNS and BGP stuff was broken at the same time.  It seems very unlikely that someone could accidentally wipe out enough stuff to take them down this thoroughly.

there was a post in /r/sysadmin from a now-deleted account that may have some additional detail (archive.is link)

Quote

/u/ramenporn  Update 1440 UTC:

As many of you know, DNS for FB services has been affected and this is likely a symptom of the actual issue, and that's that BGP peering with Facebook peering routers has gone down, very likely due to a configuration change that went into effect shortly before the outages happened (started roughly 1540 UTC).

There are people now trying to gain access to the peering routers to implement fixes, but the people with physical access are separate from the people with knowledge of how to actually authenticate to the systems and the people who know what to actually do, so there is now a logistical challenge in getting all that knowledge unified.

Part of this is also due to lower staffing in data centers due to pandemic measures.

Update from /u/ramenporn 

No discussion that I'm aware of yet that is considering a threat/attack vector.

I believe the original change was 'automatic' (as in configuration done via a web interface). However, now that the connection to the outside world is down, remote access to those tools doesn't exist anymore, so the emergency procedure is to gain physical access to the peering routers and do all the configuration locally.

So it is possible that we are seeing the worst-case scenario of having insufficient access controls and change management. But the end result is that it's only possible to access their servers physically/from the same subnet. Can't imagine the destruction if their core routes were all wiped and they have to manually recreate them - would be a disaster of epic proportions.

Willy Wonka Suspense GIF

  • Hook 'Em 1
  • Haha 1

8 minutes ago, kevwun said:

I am confused how both the DNS and BGP stuff was broken at the same time.  It seems very unlikely that someone could accidentally wipe out enough stuff to take them down this thoroughly.

I don't fully understand the ins, outs, and whathaveyous, but this is where my head is too.  This seems like it almost has to be an attack of some kind - disgruntled employee or outside actor(s) who are real ninjas.  This is such a colossal fuckup that it doesn't seem possible to be an engineering goof.


I have dabbled a little bit with BGP at work, and the changes you make on BGP routers are pretty much instantaneous.  You don't have to wait long for them to propagate.  Granted, our setup is not in the same universe in terms of scale and complexity.  It's still pretty amazing that they broke it this badly and can't get it back up and running.
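
For anyone who hasn't touched it, here's roughly what that looks like in a lab with FRR's vtysh -- the ASN and prefix are documentation values, not anything real:

$ vtysh -c 'configure terminal' -c 'router bgp 64512' \
        -c 'address-family ipv4 unicast' -c 'network 203.0.113.0/24'
# start advertising the prefix; the peer sees the UPDATE almost immediately
$ vtysh -c 'configure terminal' -c 'router bgp 64512' \
        -c 'address-family ipv4 unicast' -c 'no network 203.0.113.0/24'
# withdraw it; the peer drops the route within seconds -- no long propagation wait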


4 minutes ago, The Royal We said:

I don't fully understand the ins, outs, and whathaveyous, but this is where my head is too.  This seems like it almost has to be an attack of some kind - disgruntled employee or outside actor(s) who are real ninjas.  This is such a colossal fuckup that it doesn't seem possible to be an engineering goof.

The link Cap posted makes sense, but I don't know if I buy it.  The theory is that since Facebook hosts all of the stuff that lets the rest of the internet access them, a BGP config change wiped out all of that info.  As far as the rest of the internet is concerned, facebook.com doesn't exist.  No providers have an IP address for it or a route to get to it.  No one can get in remotely to fix the issue because of that.  There has been enough time to fly people in, though, if that was the problem, so I'm still skeptical.  If this actually is what happened, lots of people are getting fired, because it shouldn't be possible for Facebook to lock themselves out of their own house without a spare key.
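
The self-hosting part of the theory is easy enough to sanity-check, at least -- Facebook's authoritative nameservers sit inside Facebook's own address space, so if their routes are gone, their DNS goes with them. On a normal day the records look something like this (address from memory, could be slightly off):

$ dig +short NS facebook.com
a.ns.facebook.com.
b.ns.facebook.com.
c.ns.facebook.com.
d.ns.facebook.com.
$ dig +short A a.ns.facebook.com
129.134.30.12                        # inside Facebook's own IP space -- unreachable once the BGP routes were pulled,
                                     # which is why even these lookups are failing right now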

  • Hook 'Em 1

Just now, 956 Worldwide said:

Stupid wives thread is that way ——————>

lmao I had to drive a spare key to my wife last Friday after she locked herself out of her car

3 minutes ago, kevwun said:

If this actually is what happened, lots of people are getting fired, because it shouldn't be possible for Facebook to lock themselves out of their own house without a spare key.

I've seen out-of-touch leadership handwave away engineers' concerns about changes, and perpetually kick the can on tech debt, so it's not completely outrageous to me that something like this could happen. Many orgs are so siloed off from each other that there's little to no communication or mechanism to communicate across those silos - and as a result, the whole org loses visibility into the impact of some changes.

I'm really REALLY interested in the post-mortem of this event, because even if it was a malicious attack, there should have been mechanisms to protect against it, like a rollback to the last known-good config if health checks fail after a deploy (rough sketch below).

This sort of failure doesn't happen because just a single team had a bad push - there have to be multiple levels of decision making without safeties or guardrails against this sort of cascading failure.
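
Just to make the guardrail idea concrete, here's a hypothetical sketch of the kind of wrapper I'd expect around a routing config push -- the script names and paths are made up for illustration, not anything Facebook actually runs:

cp /etc/frr/frr.conf /etc/frr/frr.conf.last-good   # snapshot the known-good config first
./apply_new_config.sh                              # push the change (made-up script)
sleep 60                                           # give the network a minute to converge
if ! ./healthcheck.sh; then                        # e.g. "can we still resolve and reach the outside world?"
    cp /etc/frr/frr.conf.last-good /etc/frr/frr.conf
    systemctl reload frr                           # auto-rollback to last known good instead of paging a human
fi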

  • Hook 'Em 2

26 minutes ago, Captainant said:

there was a post in /r/sysadmin from a now-deleted account that may have some additional detail (archive.is link)

So it is possible that we are seeing the worst-case scenario of having insufficient access controls and change management. But the end result is that it's only possible to access their servers physically/from the same subnet. Can't imagine the destruction if their core routes were all wiped and they have to manually recreate them - would be a disaster of epic proportions.

Willy Wonka Suspense GIF

[Billy Madison "cooking fire" GIF]


1 hour ago, kevwun said:

I have dabbled a little bit with BGP at work, and the changes you make on BGP routers are pretty much instantaneous.  You don't have to wait long for them to propagate.  Granted, our setup is not in the same universe in terms of scale and complexity.  It's still pretty amazing that they broke it this badly and can't get it back up and running.

Yep, once the routes are advertised it's pretty quick. This has been my project all year. My company had shitty static routing, and I've been making the changes onsite to get BGP up and running. I can't wait to hear what actually happened; it's pretty amazing to me right now. I also spent a good chunk of my afternoon reviewing our documentation and rollback configs. This was crazy.


1 minute ago, Zepol87 said:

How do you have physical access but not network access? Also, that's what I figured happened regarding the BGP issue.

I'd imagine the issue is that the people with physical access to the servers don't have access to the secrets management system that holds the creds for those systems. And the people who actually know the passwords are very VERY few and far between. And the wiki pages that have all that information and documentation are also down.

At least that's what I'd figure

  • Hook 'Em 1

1 minute ago, Zepol87 said:

True that. I'm not thinking in terms of a huge company like that. I've always been on smaller IT teams where it's "here's your access to damn near everything" in case we need shit fixed.

Shit, any enterprise-scale organization is going to operate with that sort of access control and logging. SOX/ISO/PCI/yaddayadda compliance and all that, plus a CTO/CISO who needs something to do to justify their fat compensation package.

My first job out of school was doing SRE work on a web retail site for a Fortune 500 retailer, managing physical servers in racks, and we still had to contend with bastions and jump boxes and shit just to access our dev environment (sketch below) - and we had to get VP approval plus supplemental documentation to access a prod server outside of a maintenance window. And there are change management folks who LIVE to deny you access or rake your shit over the coals for failing to follow everything perfectly.

The point I'm trying to make is that there's an incredible amount of process at large companies for making changes to their infrastructure - which means that the sort of failure observed today is an indictment of their entire IT management stack. First that it happened at all, and second that it took so long to remediate the problem. I've seen prod go down midday before - but never for this long or to this depth.
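
For reference, the bastion dance looks something like this -- hostnames are made up, obviously:

$ ssh -J me@bastion.corp.example.com me@prod-web-01.internal.example.com
# -J (ProxyJump) hops through the bastion host; direct SSH to the prod box is blocked at the network level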

  • Hook 'Em 2

Cloudflare has a great write-up that isn't that nerdy, for those interested.  It even touches on the reason why DNS servers were overloaded, hindering other services/websites from being accessible:

https://blog.cloudflare.com/october-2021-facebook-outage/

 

Quote

“Facebook can't be down, can it?”, we thought, for a second.

Today at 1651 UTC, we opened an internal incident entitled "Facebook DNS lookup returning SERVFAIL" because we were worried that something was wrong with our DNS resolver 1.1.1.1.  But as we were about to post on our public status page we realized something else more serious was going on.

Social media quickly burst into flames, reporting what our engineers rapidly confirmed too. Facebook and its affiliated services WhatsApp and Instagram were, in fact, all down. Their DNS names stopped resolving, and their infrastructure IPs were unreachable. It was as if someone had "pulled the cables" from their data centers all at once and disconnected them from the Internet. 

How's that even possible?

 


4 hours ago, Zepol87 said:

Yea, the scope of this is pretty fucking crazy. 10 or 11 years ago I did bring down the City of Plano's network for about 2 minutes by causing a spanning-tree loop, so this almost hits close to home lol

Causing highly visible outages comes with the territory when you work in networking. Everyone gets burned at some point in time, some worse than others. Been there a few times myself.

Biggest "oh shit" moment happened when I was a young pen tester. Someone somewhere fucked up the CIDR that was given to me and I penetrated the wrong network. Found an opening and got carried away as to how far I was able to get into the network.

That moment when you realize you just broke into a Dutch bank without permission.

shocked steve harvey GIF

  • Hook 'Em 1
  • Like 1
  • Haha 3
  • Drool 1


