Now it’s really in the cloud!
God, what an obvious and predictable fuckup. That picture is going to show up in baby database administrator textbooks and training slides for decades 😆
I wonder how many people in South Korea’s government knew this was a disaster waiting to happen but felt like it was disrespectful to speak up?
Allegedly, backups simply couldn’t be kept, due to the G-Drive system’s massive capacity.
X Doubt. Things like S3 can also store massive amounts of data and still support backups, or at least geo-replication. It’s probably just a matter of cost.
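For reference, cross-region replication on an S3-style object store is a few API calls, not exotic engineering. A minimal sketch with boto3 (bucket names and the IAM role ARN are made up for illustration):

    import boto3

    s3 = boto3.client("s3")

    # Replication requires versioning on both the source and the destination bucket.
    for bucket in ("gov-docs-primary", "gov-docs-replica"):
        s3.put_bucket_versioning(
            Bucket=bucket,
            VersioningConfiguration={"Status": "Enabled"},
        )

    # Replicate every object in the primary bucket to a bucket in another region.
    s3.put_bucket_replication(
        Bucket="gov-docs-primary",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/replication-role",  # placeholder ARN
            "Rules": [
                {
                    "ID": "replicate-everything",
                    "Priority": 1,
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # empty prefix = all objects
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::gov-docs-replica"},
                }
            ],
        },
    )

Replication alone isn’t the same as a backup, but it at least survives one site burning down.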
But it gets worse. It turns out that before the fire, the Ministry of the Interior and Safety had apparently instructed government employees to store everything in the G-Drive cloud and not on their office PCs.
Which is totally fine and reasonable? The problem isn’t the order to use the centralized cloud system, but that the system hasn’t been sufficiently secured against possible data loss.
Tape backup is still a thing.
If you can’t afford backups, you can’t afford storage. Anyone competent would factor that in from the early planning stages of a PB-scale storage system.
Going into production without backups? For YEARS? It’s so mind-bogglingly incompetent that I wonder if the whole thing was a long-term conspiracy to destroy evidence or something.
A conspiracy is always possible of course, but people really do tend to put off what isn’t an immediate problem until it’s a disaster.
Fukushima springs to mind. The plant’s operator had been warned more than a decade before the disaster that an earthquake in the wrong place would end in catastrophe, did nothing about it, and lo and behold…
I was just thinking that incompetence on this scale is likely deliberate.
Either some manager refused to pay for backups and they’re too highly placed to hold accountable, or they deliberately wanted to lose some data, but I refuse to believe anyone built this system without even considering off-site backups.
It would be useful to know what percentage of the total storage these 858 TB represent, because that is practically nothing nowadays.
858 TB not backed up? That’s paltry in data center terms. 86 x 10 TB hard drives? Two of those 45-drive rack mounts’ worth of backup…
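Back-of-the-envelope, assuming plain 10 TB drives and ignoring any RAID/parity or filesystem overhead:

    import math

    data_tb = 858           # reported unbacked-up data
    drive_tb = 10           # assumed drive size
    bays_per_chassis = 45   # typical 4U top-loading chassis

    drives = math.ceil(data_tb / drive_tb)           # -> 86 drives
    chassis = math.ceil(drives / bays_per_chassis)   # -> 2 chassis
    print(drives, chassis)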
This was bad data architecture
There were backups!
Unfortunately, they were located three rows and two racks down from their source.
In their defense, they clearly never thought the building would burn down.
And let’s be fair to them, who’s even heard of lithium batteries catching fire?
This was a once-in-a-millennium accident, something you can’t anticipate, and therefore can’t plan for.
Unless you’re talking about off-site backups. Then maybe they could have planned for that.
But who am I to judge?
And let’s be fair to them, who’s even heard of lithium batteries catching fire?
Euhhhh… you sure about that? Maybe you’re being sarcastic? But lithium batteries in buses catching fire is not unheard of! Even a few Teslas burned to the ground and it went viral in the news!
He was
Teslas burned to the ground
All user error. Musk said so. Trust.
Everyone knows lithium is a mood stabilizer, same for batteries.
Why didn’t they have an off-site backup if the data was that valuable?
that’s gonna be an expensive backblaze account
yes it’s expensive. it’s also a basic requirement.
maybe they can do what I do, run Windows on the server so you can tell Backblaze that it’s a personal computer. Then I only have to pay a flat rate for the one computer.
All valuable data should be backed up off site in “cold storage” type places. It’s not that expensive compared to the production storage.
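As a rough illustration of how low the bar is: on S3-compatible storage, pushing an archive to a deep-archive tier is one extra parameter on the upload. A sketch with boto3, with made-up bucket and file names:

    import boto3

    s3 = boto3.client("s3")

    # Ship a compressed dump to an off-site bucket in the cheapest archival tier.
    s3.upload_file(
        "gdrive-export.tar.zst",                     # hypothetical local archive
        "gov-offsite-archive",                       # hypothetical off-site bucket
        "backups/gdrive-export.tar.zst",
        ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},  # S3 Glacier Deep Archive tier
    )

Retrieval from a tier like that is slow and costs money, but for disaster recovery that trade-off is usually fine.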
Something something the cloud is not a backup