I love all of the well thought out answers. This backup is in addition to my backup to GCP. I was using Bvckup 2 to back up to a couple of external HDDs, to have a second backup in case anything ever went wrong with Duplicacy/GCP/encryption codes. These ran out of space and I didn't want to split onto yet a third drive.

At first I was just going to use Duplicacy with some external hard drives and split the backup, hoping the deduplication would hold me over for a while. Then I decided to resurrect an old NAS I have and RAID 0 a couple of drives together so I don't have to split into multiple backup sets. Then I decided the NAS was too slow (half the speed of GCP or less, comparing to the old initial GCP logs… hard to tell), so I bought an old 12-bay server, a second RAID card, some 8088 external connectors, etc., all cheap on eBay, and I'm going to set up a DAS.

If I add it directly, I agree that anything additional might be unnecessary overhead. I think I'm actually going to connect it to a virtual backup server on the same machine though, and since there's a lot of overhead on Samba/Windows file shares, I will probably use Minio (usually, if you ask whether one file exists, it downloads the entire directory listing each time you check a file; this happens transparently, but it happens).

I also like the suggestion that using Minio will provide bit rot detection. Duplicacy check is dreadfully slow in my experience as well… understandably so, but still. I wouldn't have even considered this without your replies, so THANK YOU!!!

My last consideration is that I have most of my servers, on and off site, back up to my main server storage, which then gets backed up two more times. If I can skip my main storage, create a second encrypted bucket for Minio, and replicate that to GCP directly from Minio as the site claims, it might save me some storage space / allow me to keep more redundant snapshots:

"In addition, you may configure Minio server to continuously mirror data between Minio and any Amazon S3 compatible server."
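For what the quoted mirroring looks like in practice: it is driven by MinIO's mc client rather than by a server setting. A minimal sketch, with hypothetical aliases, endpoints, bucket names, and credentials (GCS can be addressed this way through its S3-compatible endpoint with HMAC interoperability keys):

    # Syntax for recent mc releases; all names and keys here are made up.
    mc alias set nas http://nas.local:9000 MINIO_ACCESS_KEY MINIO_SECRET_KEY
    mc alias set gcp https://storage.googleapis.com GCS_HMAC_KEY GCS_HMAC_SECRET

    # Continuous one-way mirror: --watch keeps the process running and
    # copies new chunks to the remote bucket as they appear locally.
    mc mirror --watch nas/duplicacy gcp/duplicacy-offsite

The caveat is that this is client-side replication: if the mc process stops, nothing mirrors until it is restarted.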
Again, thank you all for your thoughtful responses. For what it's worth, I've installed Minio on the same server that hosts my Duplicacy backups over SFTP, which happens to be an Intel Atom C3538 based machine with 16GB of ECC memory. I had started to copy the storage from SFTP to Minio, but at the roughly 30MB/sec it was averaging, that would have taken three days. So I aborted and ran duplicacy benchmark against each storage instead, three times in a row each, recording results from the last run and watching CPU utilization on the server.

Minio (CPU utilization around 15%, by the minio process):

    alexmbp:~ alex$ duplicacy benchmark -storage minio
    Generating 244.14M byte random data in memory
    Split 244.14M bytes into 50 chunks without compression/encryption in 1.52s: 160.82M/s
    Split 244.14M bytes into 50 chunks with compression but without encryption in 2.04s: 119.65M/s
    Split 244.14M bytes into 50 chunks with compression and encryption in 2.07s: 117.73M/s
    Uploaded 256.00M bytes in 8.18s: 31.28M/s
    Downloaded 256.00M bytes in 3.13s: 81.91M/s
    Deleted 64 temporary files from the storage

SFTP (CPU utilization around 4%, combined across two sshd processes):

    alexmbp:~ alex$ duplicacy benchmark -storage tuchka
    Split 244.14M bytes into 51 chunks without compression/encryption in 1.51s: 161.56M/s
    Split 244.14M bytes into 51 chunks with compression but without encryption in 1.97s: 124.07M/s
    Split 244.14M bytes into 51 chunks with compression and encryption in 2.08s: 117.57M/s

Why writes to Minio are 2.5 times slower I'm not sure. Perhaps there is some tweaking to be done, but for this specific use case SFTP seems to be superior, and the caching and optimization that Minio could have provided did not materialize with the default configuration. So I nuked the whole thing and will continue to use SFTP.

As for the claimed benefits of Minio over plain file storage:

"Checksums of uploaded data (so Arq can verify the NAS received the correct data)": checking file size is not that hard, and transferring files over SFTP reliably has been polished to death.

"Atomic writes of files (faster and less error checking required by Arq)": over SFTP this needs to be ensured by the transport.

"Much faster validation of data (comparing checksums instead of downloading data to compare)": data is encrypted during transfer, so corrupted data will fail to decrypt and will get retransmitted. If a chunk is uploaded, it must be assumed to stay the same; it is not the job of a backup solution to validate it. The host filesystem must protect it from bit rot. All validation should do is verify that the chunks required to restore files are present.

And yet, adding Minio adds another layer of complexity that can fail. I would trust SFTP, which has existed for decades, much more, and that's not mentioning the performance impact.
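On the atomic-writes point, a client gets much the same guarantee over SFTP by uploading under a temporary name and renaming into place, since the rename is a single server-side operation; backup tools targeting SFTP commonly do exactly this. A minimal sketch, with a hypothetical host and chunk path, using OpenSSH's sftp in batch mode:

    # Upload under a temp name, then rename into the final path, so a
    # reader of the chunk directory sees either no file or a complete one.
    sftp -b - alex@tuchka <<'EOF'
    put chunk.bin /backups/chunks/ab/cdef.tmp
    rename /backups/chunks/ab/cdef.tmp /backups/chunks/ab/cdef
    EOF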
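Likewise for the faster-validation point: if the backup host allows shell access and not just SFTP, checksums can be compared without downloading anything, which is the same property the S3 API is being credited with. A sketch with a hypothetical chunk path and a placeholder digest, assuming sha256sum is available on the server:

    # Compare a digest recorded at upload time against one computed remotely.
    expected="e3b0c44298fc1c14..."   # placeholder; the real value would be saved at upload
    actual=$(ssh alex@tuchka sha256sum /backups/chunks/ab/cdef | awk '{print $1}')
    [ "$expected" = "$actual" ] && echo "chunk intact" || echo "checksum mismatch"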
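And on "perhaps there is some tweaking to be done": duplicacy benchmark takes thread-count options (in CLI versions that document them), so one cheap experiment before giving up on Minio is to check whether the write gap is just a single-connection artifact:

    # Re-run both benchmarks with parallel transfers; flag support depends
    # on the duplicacy CLI version, so verify against its benchmark docs.
    duplicacy benchmark -storage minio -upload-threads 4 -download-threads 4
    duplicacy benchmark -storage tuchka -upload-threads 4 -download-threads 4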