
Timeline for answer to Creating a grow-on-demand encrypted volume with LUKS by Damiano Verzulli

Current License: CC BY-SA 4.0

Post Revisions

18 events
when | what | by | license | comment
Aug 15, 2024 at 12:02 comment added Luis A. Florit @DamianoVerzulli I asked this specific question, and got a good answer: unix.stackexchange.com/a/781639/149203
Aug 15, 2024 at 7:48 comment added Damiano Verzulli @LuisA.Florit : unfortunately I'm not skilled enough to give you a scientific, exact answer. My guess is "yes! Fragmentation can impact underlying storage layers" but... it's only a guess :-( Sorry!
Aug 7, 2024 at 21:17 comment added Luis A. Florit Isn't fragmentation of the encrypted file a problem as it grows over time?
Aug 7, 2024 at 19:05 comment added Luis A. Florit What happens if the filesystem itself has less free space than the free space in the sparse encrypted file, and we try to write to it? No corruption?
Feb 1, 2023 at 11:23 comment added TrinitronX @leonixyz is correct. The random_data.bin file(s) created in the example use /dev/zero as the input. Therefore this creates files containing zero bits as data, which are seen by the underlying sparse file as "holes". If we replace if=/dev/zero with if=/dev/random and fill 100% of the filesystem so that exactly zero bytes are left... then any subsequent writes of normal files will return disk full errors. For example, try writing a simple text file with vim and once you try to write (:w), it will complain that there is no space. So grow-on-demand depends on sparse-ness of data.
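The sparseness TrinitronX describes is easy to reproduce outside LUKS. A minimal sketch (file names are arbitrary examples): seeking past the end of a file creates a "hole" that occupies no disk blocks, while random data forces real allocation.

```shell
# count=0 seek=100 writes nothing but extends the file: apparent size
# is 100 MiB, yet (almost) no blocks are actually allocated on disk.
dd if=/dev/zero of=sparse.img bs=1M count=0 seek=100

stat -c %s sparse.img   # apparent size: 104857600 bytes
du -k sparse.img        # allocated size: ~0 KiB

# Random data cannot be stored as holes, so every block is allocated:
dd if=/dev/urandom of=dense.img bs=1M count=10
du -k dense.img         # roughly 10240 KiB actually on disk
```

Comparing `stat -c %s` (apparent size) with `du` (allocated size) is the quickest way to tell how sparse a container file really is.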
Oct 30, 2021 at 13:26 comment added leonixyz I suspect the reason you did not run out of space when creating the last file is that you created a sparse file inside the filesystem that itself lives in a sparse file, so to speak.
Feb 18, 2020 at 21:22 comment added Michael cryptsetup -y luksFormat /dev/loop0 fails for me with Cannot format device /dev/loop0 which is still in use.
S Oct 5, 2019 at 7:35 history suggested AGI-Chandler CC BY-SA 4.0
Added clarification regarding grow-on-demand attribute
Sep 27, 2019 at 22:43 review Suggested edits
S Oct 5, 2019 at 7:35
Dec 14, 2017 at 1:57 comment added user1747036 That was so helpful I used it in my latest script! Published on GitHub with credits given; please let me know if there's any issue. github.com/jupiter126/blortchzj
Oct 26, 2016 at 13:52 comment added localhost @Thilo I'm also curious what would happen if you tried to read the file that silently overflowed. rsync has a --sparse option that should create sparse files on the destination disk.
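On localhost's point about sparse-aware backups: both `rsync --sparse` and GNU `cp --sparse` can preserve holes instead of copying literal zero blocks. A hedged sketch (the host and paths are hypothetical):

```shell
# Create a sparse 1 GiB container (a hole only -- no data written):
truncate -s 1G container.img

# Local copy preserving holes with GNU cp:
cp --sparse=always container.img container-copy.img
du -k container-copy.img    # still ~0 KiB allocated

# Over the network, rsync --sparse recreates holes on the destination
# instead of transferring/writing literal zero blocks:
rsync --sparse container.img backup-host:/backups/container.img
```

Note that `rsync --sparse` turns runs of zeros into holes on the destination, so the copy can even end up sparser than the source.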
Dec 16, 2015 at 1:34 history bounty awarded Thilo
Dec 14, 2015 at 1:16 vote accept Merc
Dec 13, 2015 at 12:39 comment added Damiano Verzulli I was tempted to investigate further but... unfortunately I was short on time and, indeed, it's definitely material for another SF question. Anyway, it can easily be avoided by not overbooking your overall storage: I mean, you can create sparse files, but only so that the maximum total allocatable space still fits on your physical disk, right? If, instead, you're looking for "overbooking" solutions... then maybe something else should be investigated (LVM?)
Dec 13, 2015 at 11:47 comment added Thilo Thanks for this great answer. Leaves me with a question and a worry. Worry: Pretending to have successfully written that second 2GB file when there was really no space for it? Troublesome... What happens when you try to read it back (with sha1sum or something)? Question: Are there ways to back up a sparse file across the network that keeps it sparse (i.e. only actually copies the parts that are used)?
Dec 13, 2015 at 8:38 comment added Damiano Verzulli No, sorry. They should work as a normal user as well, given proper write permission on the containing folder.
Dec 13, 2015 at 3:06 comment added Merc I noticed that you have to be root for those commands to work. Is that always the case for sparse files?
Dec 12, 2015 at 19:33 history answered Damiano Verzulli CC BY-SA 3.0