Page 3 of 5 FirstFirst 12345 LastLast
Results 21 to 30 of 41

Thread: Squashfs-ed knoppix

  1. #21
    Senior Member registered user
    Join Date
    Dec 2009
    Posts
    423
    Quote Originally Posted by Forester View Post
    Perhaps Knoppix has a reason to stick with cloop after all.
    You guys are diplomatic!
    I think it is strictly sentimental value; perhaps a better way of saying it is: for backward compatibility.

    Each time I wanted to compile cloop, I had to look out for patches for it. Kernel 2.6.35, .36, .37, .38: with every version there is something to tweak. Perhaps the Linux kernel is to blame, and perhaps the squashfs kernel source also requires changes, but hey, someone has already modified it for me!

    If one checks kernel 2.6.38, it has LZMA2 for SQUASHFS (aka XZ compression). This fella claims it compresses better than lzma :-

    http://chakra-project.org/bbs/viewtopic.php?id=4145

    Cheers.

    p/s: I am quite surprised about the difference in time taken. Perhaps the mksquashfs run was not done with lzma compression. But hey, even without lzma it is already smaller than the '-b' of cloop?
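    A hedged sketch of how such a size comparison might be run. The directory name and block size are assumptions; -comp xz needs squashfs-tools 4.1 or later to build, and kernel 2.6.38 or later to mount the result:

```shell
# Build the same tree twice with different compressors (paths are hypothetical).
# -comp xz requires squashfs-tools 4.1+; mounting the result needs kernel 2.6.38+.
mksquashfs KNOPPIX/ KNOPPIX-gzip.sq -comp gzip -b 256K -noappend
mksquashfs KNOPPIX/ KNOPPIX-xz.sq   -comp xz   -b 256K -noappend
ls -l KNOPPIX-gzip.sq KNOPPIX-xz.sq   # compare the resulting image sizes
```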

  2. #22
    Senior Member registered user
    Join Date
    May 2006
    Location
    Columbia, Maryland USA
    Posts
    1,631
    I would like to see a comparison of the time it takes to boot
    a LiveUSB (better yet, a LiveSDCard) made from a
    Knoppix 6.4.x LiveCD, with the only difference being
    cloop versus squashfs. Times for both, that is.
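    One rough way to get comparable numbers, assuming a cloop-backed and a squashfs-backed copy of the same file system are already mounted at the hypothetical paths below (dropping the page cache needs root):

```shell
# Read every file through each compressed file system, timing the whole sweep.
# Dropping the page cache first makes both runs hit the device for real.
sync && echo 3 > /proc/sys/vm/drop_caches
time find /mnt/cloop -type f -exec cat {} + > /dev/null

sync && echo 3 > /proc/sys/vm/drop_caches
time find /mnt/squashfs -type f -exec cat {} + > /dev/null
```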

  3. #23
    Senior Member registered user
    Join Date
    Dec 2009
    Posts
    423
    Read post #19 from Forester. He did it already. Basically, on a fast computer there is no appreciable difference. In a virtual machine, cloop is slower.

  4. #24
    Senior Member registered user
    Join Date
    May 2006
    Location
    Columbia, Maryland USA
    Posts
    1,631
    @ kl522

    Thanks. I was hoping for a lot more difference.
    You should be aware of some history, if you're not:

    google <squashfs cloop knoppix 2004 klaus>

    Thread from 1/25/2005
    http://www.knoppix.net/forum/threads...xperimental%29

    Thread from 11/12/2005
    http://www.knoppix.net/forum/threads...squashfs-rocks

  5. #25
    Senior Member registered user
    Join Date
    Dec 2009
    Posts
    423
    There may be some truth in some of the older threads. However, the cloop and squashfs of that time might be quite different now. For software/hardware, 6 months or 1 year is already a big difference, so I am not too sure of their relevance from today's view.

    Technical analysis might be useful, but unless it is substantiated with real-life data it is academic. I dare say that in order to see a significant and conclusive difference in performance, one will have to carefully design experiments to observe it. Otherwise, at a gross level, especially on today's hardware, it will be hard to notice any difference.

    And for usage history, I put my bet on squashfs. You see it in almost all embedded Linux devices today. These devices have much more stringent CPU and memory constraints than a typical notebook/desktop.

  6. #26
    Senior Member registered user
    Join Date
    May 2006
    Location
    Columbia, Maryland USA
    Posts
    1,631
    @ kl522

    I would like to sign on to krishna's post #5 and
    suggest Klaus K. probably has good reasons for lagging
    the squashfs effort. Among other reasons, Ubuntu
    and Fedora are already plowing this ground. KK is a one-man
    effort AFAIK.

    This is not to diminish your effort or Forester's,
    merely to suggest that we keep our discussions relative
    to computer metrics, and not spend any time on long-
    distance prognostications on ulterior motives.

    Forester's decompression times on 'a machine at work'
    seemed attractive. What were its parameters?
    How does that machine compare to my Laptop/SDCard rig?

  7. #27
    Senior Member registered user
    Join Date
    Dec 2009
    Posts
    423
    Since you use the word "probably", it is belief-based. Period.

    The time difference in compression is fully explainable, as I have already mentioned.

    Likely Forester did not use lzma compression for squashfs. Lzma behaves such that it takes a long time to compress but decompresses very fast. When he uses '-b' for cloop, that results in using lzma and gzip; the gzip time is insignificant compared to lzma.

    But the thing is, even without lzma, squashfs results in a smaller image - if that experiment carried out by Forester is correct. (In my posts some time ago, I used gzip for both cloop and squashfs, and that also showed squashfs producing a smaller image.) Once you use lzma-squashfs, you will see roughly another 20% reduction in image size.

  8. #28
    Senior Member
    Join Date
    Jan 2011
    Posts
    242
    Ladies, gentlemen, please. "Calm down. It's only a commercial". One set of results does not prove anything and should not be used to jump to conclusions.

    The improvement in boot time at work does not prove that squashfs is faster than cloop. It is an unexplained side effect. The slow part of the boot is the udev probing. Why should that be hitting the compressed file system? After the green bar has gone as far as it will, the spinner shows the system is still working. It spins for a long time with cloop but not with squashfs. Comments in /etc/init.d/knoppix-autoconfig suggest to me that the boot is waiting for i/o activity to die down. What i/o I don't know - the VirtualBox console indicator shows no USB i/o at this time.

    I did say (perhaps not clearly) that I used the Squeeze version of mksquashfs, which depends (have a look on the Debian repository web-site) on a gzip library, not an lzma library. Ergo, I used gzip compression. The same web-site shows that the Sid version of mksquashfs depends on several compression library packages, which between them support lzo, lzma and lzma2 compression.

    kl522 says that Linux kernel 2.6.38 contains lzma compression, but I am using Knoppix 6.4.3, which runs atop Linux kernel 2.6.36. I infer that had I used lzma compression, I would not have been able to boot my squashed file system.

    There are some who say cloop requires more memory than squashfs. Perhaps - I don't know - but the arguments for this that I have read so far appear specious to me.

    Linux manages the disk cache. If memory isn't required for anything else, Linux will use it to cache disk contents, but as soon as memory is required for something else it will free up disk cache. This is why, with two otherwise identical machines, the one with more memory will appear to run faster. It is also why, at the time, Linux 'ran faster' than Windows 98.

    The disk cache management is independent of squashfs and cloop. Because cloop is a loop device, data gets cached twice - once before and again after decompression (also true of knoppix-data.img, but without the decompression). This is given as 'proof' that cloop requires more memory than squashfs. Poppycock.

    Somehow, somewhere, squashfs must be buffering (aka caching) data before decompression. If its cache is too small, it will have to 'hit the disk' more often. It might use less memory, but that would make it slower.

    Block size might be more significant and might give different results for different users. I used 64 KB blocks from the cloop example on the Wiki. The squashfs man page says its default is 128 KB. kl522's examples appear to be using 256 KB.

    Now, if you are starting up mega Windoze-like applications that require tens of MB to display an OK button, big blocks are probably going to make your app start-up faster. How often do you start these programs? Once per session, so you don't need the disk cache.

    If you are a sentimental old Unix fuddy-duddy who is reluctant to say goodbye to the power and productivity of the old command line interface, you want the disk cache to cache the commands you use a lot. The very idea that typing 'pwd' might cause squashfs to go off and read a 256 KB block and decompress it into a memory 'block' twice that size in order to run a program of only 25 KB in size is embarrassing.
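    The arithmetic behind that complaint can be sketched in a few lines of shell. The block and program sizes are assumed, and the 2x decompressed-block figure is the post's own estimate:

```shell
# Read amplification for launching one small program from a squashfs image.
block_kb=256                 # assumed squashfs block size
prog_kb=25                   # assumed size of the program, e.g. a small CLI tool
blocks=$(( (prog_kb + block_kb - 1) / block_kb ))   # blocks touched (rounds up)
read_kb=$(( blocks * block_kb ))                    # data pulled off the device
cache_kb=$(( read_kb * 2 ))                         # decompressed cache, ~2x per the post
echo "read ${read_kb} KB and cached ${cache_kb} KB to run a ${prog_kb} KB program"
```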

  9. #29
    Senior Member
    Join Date
    Jan 2011
    Posts
    242

    Patch Not As Intended

    Quote Originally Posted by kl522 View Post
    mount -o loop=/dev/loop1 /mnt-system/"$knoppix_dir"/[Kk][Nn][Oo][Pp][Pp][Ii][Xx].sq /KNOPPIX
    There is a problem with the patches used to mount the squashfs file system.

    With the vanilla Knoppix, the KNOPPIX and KNOPPIX-DATA file systems are mounted on /dev/cloop0 and /dev/loop0 respectively. Code elsewhere in the /init script 'knows' that KNOPPIX-DATA is on /dev/loop0.

    With squashed Knoppix, both file systems are on loop devices. The -o loop=/dev/loop1 attempts to ensure that KNOPPIX is /dev/loop1, leaving /dev/loop0 available for KNOPPIX-DATA in order to be backwards compatible.

    This does not work, as df -h on a squashed Knoppix system shows. Why? The /init script runs the BusyBox mount, not the ordinary mount we all know and love. The output of /bin/busybox mount --help includes:

    Code:
    -o OPT:
            loop            Ignored (loop devices are autodetected)
    The only solution is to 'fix' all the references to /dev/loop0 throughout the /init script.

    But it works for me, so where's the problem? Long may it continue to do so. If you are using an ordinary persistent store, I think you're OK: /dev/loop0 is only referenced on error paths. If you try to use an encrypted persistent store, I think you might be in trouble.
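    Since BusyBox mount ignores -o loop=..., one hedged alternative (not from the attached patches; just a sketch, with a simplified path in place of the quoted example's glob) is to attach the image to a known loop device explicitly with losetup before mounting:

```shell
# Pin the squashfs image to /dev/loop1 up front, leaving /dev/loop0 free
# for KNOPPIX-DATA; then mount the already-attached device (needs root).
losetup /dev/loop1 /mnt-system/KNOPPIX/KNOPPIX.sq
mount -t squashfs -o ro /dev/loop1 /KNOPPIX
```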

    Attached to this post are two patches: generalise.txt fixes the /dev/loop0 problem; squashfs.txt implements the boot squashed-Knoppix feature that is the proper subject of this thread.

    If you have already implemented a boot squashed-Knoppix feature using either kl522's or dinosoep's examples as a guide, you don't need squashfs.txt - that's just my implementation of the same thing.

    If you look at generalise.txt you may think it looks way too complex for what it is. There is a reason for this. Many of the lines that needed changing are also lines that needed changing for the implementation of the knoppix_data cheat code. In generalise.txt is a combo-patch that meets the needs of both squashed-Knoppix and (a re-issue) of the knoppix_data cheat code patch.
    Attached Files

  10. #30
    Senior Member
    Join Date
    Jan 2011
    Posts
    242
    Quote Originally Posted by kl522 View Post
    I don't want to get into the war of the compression speed comparison ...
    You just wanna be an innocent bystander who lobs the odd grenade from time to time.

    Quote Originally Posted by kl522 View Post
    Which is more established compression method ?

    Maybe my knowledge is skewed but here is what I know :-

    1. Squashfs has existed for a long time, if not as long as cloop compression.
    2. Squashfs was accepted into the stock kernel a few versions back, but cloop is still an external patch.
    3. You can find squashfs in almost all embedded devices, including your typical home routers, home ADSL modems, home media players, home appliances such as TVs and so on. If you count the number of seats (Linux OS installations) using squashfs compared to cloop, squashfs is many, many times more widespread than cloop.
    4. Just run this utility on the compressed file system ...
    Are you in embedded software development ?

    My screwed knowledge is that the vast majority of embedded devices are too dumb to have either an operating system or a file system, while high-end household gadgets like games consoles, smart phones and iWhatevers don't run Linux. There is a large class of low-end embedded systems that run the [appalling] uClinux, which [in my recent experience] don't use squashfs but cramfs instead - a file system so bad that you would have to be a lot more than merely 'sentimental' to prefer it. The assertion that you can find squashfs in almost all embedded devices is your first grenade.

    For those who know nothing of embedded systems: you've usually got flash, but it is not the same as USB or picture card flash. It's mtd flash, and you might prefer a jffs2 file system for your persistent store as it does wear levelling. Depending on whether your flash is nand or nor, and whether your upgrades are slow 'over the air' jobs or quick updates from CD or USB, you might or might not put all your applications on the jffs2. That's what my last lot did, bless them.

    You've still got the equivalent of minirt.gz, but usually it is embedded in the Linux kernel image. This is the initial root file system that Knoppix uses to get going. It uses it to mount the KNOPPIX image and your knoppix-data.img, join the two into a UNIONFS, and then chroot to the UNIONFS, leaving minirt behind like the first stage of a moon rocket.
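    The staging described above might be sketched roughly like this. The device names, mount options and UNIONFS branch syntax are assumptions for illustration, not the actual /init code:

```shell
# First-stage root: mount the compressed system image and the writable store,
# union them, then hand over - the minirt file system is left behind.
mkdir -p /KNOPPIX /KNOPPIX-DATA /UNIONFS
mount -t squashfs -o ro /dev/loop1 /KNOPPIX
mount -o loop /mnt-system/KNOPPIX/knoppix-data.img /KNOPPIX-DATA
mount -t unionfs -o dirs=/KNOPPIX-DATA=rw:/KNOPPIX=ro unionfs /UNIONFS
exec chroot /UNIONFS /sbin/init    # the 'first stage falls away' step
```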

    The Knoppix (and desktop systems in general) minirt.gz is a minimal Linux 'cos desktop distributions have to work on lots of different hardware. In embedded systems you often tailor the kernel and minirt.gz for a particular system and just use that. You don't need a second-stage KNOPPIX image, just a minirt.gz and a knoppix-data.img. So using embedded systems in a discussion about compression methods for the KNOPPIX image is grenade number 2.

    My last lot had a box with lots of option cards. The option cards ran uClinux 2.6.20 with cramfs for a root file system; the box ran Linux 2.6.20 with squashfs, but not because it came with the stock kernel. The kernel was too old and management had no intention of upgrading just because. So even if the majority of embedded Linux devices use squashfs (and that may be so), they don't do it because squashfs is mainline; rather, squashfs is now mainline because its use is so widespread. Grenade number 3.

    I was tasked with getting rid of uClinux and cramfs. I took a recent stock kernel, built an initramfs with BusyBox and a few extra packages, and I had a working system. Yes, the entire root file system was in RAM, but there was plenty of it, and this fixed many of the performance problems. An embedded system with no squashfs. In fact, no compressed file systems at all.

    Both box and cards either used the old, deprecated initrd (I don't think so - no /initrc) or used an initramfs arrangement but with squashfs or cramfs instead of a compressed cpio image to be read into a tmpfs. Either way, once again, not what I think most people think of as 'stock kernel'.

    Time for me to toss a few grenades back (I'm a non-combatant too).

    The Wikipedia page on initrd and initramfs has a description that makes the old initrd sound a lot like cloop in design. KK was using cloop for his iso image before Debian tried using cramfs with initrd. I don't think Debian ever tried squashfs. Probably squashfs was still 'in development' at the time.

    The squashfs web-site freely admits that its mainlining was only possible because the Consumer Electronics Linux Forum paid for it. Or in other words, big companies suddenly realised their products and reputations were dependent on a ropey old bit of free software maintained by some kernel developer wannabe. No, I'm sure Phillip Lougher is a very nice bloke and a good egg and all that, but you can see why big companies would not want to be dependent on something maintained, in his spare time, by an eccentric German university professor. Besides, KK cannot be bought.

    Finally, look up cloop on the Debian package web-site. It's available. It's not maintained by KK. It is available for all platforms. Unstable sid even has a port for m68k. So someone out there is using it with Debian on all sorts of platforms. It's not Knoppix, 'cos that was just i386 (and only recently amd64). So someone knows something they are not telling us.
