Sunday, January 12, 2020

Start of LFS section 5 through 5.4

This chapter is about building a temporary bootstrap system containing the basic tools and utilities needed to build the actual target system.

Section 5.1 Introduction

Contains a short introduction to creating the temporary system, with the needed tools installed under $LFS/tools/.  This toolchain will be used to build the final version of the basic LFS system.  There are no steps or instructions needed for this section.

Section 5.2 Toolchain Technical Notes

This section contains useful information about the temporary toolchain that is about to be built.  The technical details include some notes about cross-compiling and a brief overview of the upcoming steps.  There are no steps or instructions needed for this section; feel free to read the LFS book for additional info.

Section 5.3 General Compilation Instructions

This section contains generic information about build steps, such as:
  • Ensure $LFS is set
  • Sources directory is available under the $LFS location and is NOT under the tools directory
  • When building a package:
    • Change to the sources directory.
    • Untar the given package.
    • Change to the unpacked directory.
    • Use the book instructions for compiling and installing the given package.
    • When done change back to the sources directory, and remove the unpacked package, unless otherwise instructed.
I may need to adapt these steps for my layout for several reasons.  I am not untarring pre-built packages, but will be building in a checked-out repository.  Build steps or settings may need to be slightly modified when building from a repository directory vs. a release tarball, which in some cases may have been slightly modified during packaging to allow builds.  In addition, I will be using ZFS snapshots, cleans, and rollbacks as necessary to clean up my builds.
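Those snapshot/rollback steps can be captured as a pair of small helper functions.  This is only a sketch under my layout's assumptions: the function names and the ZFS override variable are my own, and snapshot/rollback rights come from 'zfs allow' or root.

```shell
#!/bin/bash
# Sketch of helpers for the adapted per-package workflow above.  ZFS can
# be pointed at a different binary (handy for dry runs); defaults to
# /sbin/zfs.
ZFS=${ZFS:-/sbin/zfs}

# Snapshot a package's source dataset before building, e.g.:
#   prep_package rootPool/root_fs/sources/binutils lfs_prep_section_5.4_pass1
prep_package() {
  "$ZFS" snapshot "$1@$2"
}

# Roll the dataset back to the prep snapshot, discarding the failed or
# finished build (-r also removes any snapshots taken after it).
reset_package() {
  "$ZFS" rollback -r "$1@$2"
}
```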

Section 5.4 Binutils-2.33 Pass 1

Package Information:

The binutils package contains a linker, an assembler, and other tools for handling object files.
Approximate build time: 1 SBU
Required disk space: 659 MB

Build Prep and Timing

The book recommends wrapping all the commands in a time { ... } block to get a feel for how long '1 SBU' takes. For details on what all the options in the following commands mean, I recommend just reading the LFS book. My prep output is as follows:

 gentoo@livecd ~ $ echo $LFS  
 /rootPool/root_fs  
 gentoo@livecd ~ $ env  
 TERM=xterm  
 MAKEFLAGS=-j44  
 LC_ALL=POSIX  
 LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.cfg=00;32:*.conf=00;32:*.diff=00;32:*.doc=00;32:*.ini=00;32:*.log=00;32:*.patch=00;32:*.pdf=00;32:*.ps=00;32:*.tex=00;32:*.txt=00;32:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:  
 LFS=/rootPool/root_fs  
 PATH=/tools/bin:/sbin:/bin:/usr/bin  
 PWD=/home/gentoo  
 LFS_TGT=x86_64-lfs-linux-gnu  
 PS1=\[\033]0;\u@\h:\w\007\]\[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\]   
 SHLVL=1  
 HOME=/home/gentoo  
 _=/bin/env  
 gentoo@livecd ~ $ cd $LFS/sources/binutils  
 gentoo@livecd /rootPool/root_fs/sources/binutils $ cd binutils-gdb/  
 gentoo@livecd /rootPool/root_fs/sources/binutils/binutils-gdb $ git status  
 HEAD detached at binutils-2_33_1  
 nothing to commit, working directory clean  
 gentoo@livecd /rootPool/root_fs/sources/binutils/binutils-gdb $ /sbin/zfs snapshot rootPool/root_fs/sources/binutils@lfs_prep_section_5.4_pass1  
 gentoo@livecd /rootPool/root_fs/sources/binutils/binutils-gdb $   
   

Configure/Build/Install

So I could easily time and recreate the commands needed to build this pass, I created the following script:
 gentoo@livecd /rootPool/root_fs/sources/binutils $ cat build_binutils_temp_pass1.sh   
 #!/bin/bash  
 set -x  
   
 # Change into the repository  
 cd binutils-gdb  
   
 # Create a build location, and change to it.  
 mkdir -v build  
 cd build  
   
 # Run the configuration  
 ../configure --prefix=/tools            \  
              --with-sysroot=$LFS        \  
              --with-lib-path=/tools/lib \  
              --target=$LFS_TGT          \  
              --disable-nls              \  
              --disable-werror  
   
 # Build binutils  
 make  
   
 # Create required install locations for x86_64 as needed  
 case $(uname -m) in  
  x86_64) mkdir -v /tools/lib && ln -sv lib /tools/lib64 ;;  
 esac  
   
 # Install binutils  
 make install  
 gentoo@livecd /rootPool/root_fs/sources/binutils $   
   

When running the script I received the following error(s):
 mkdir: cannot create directory '/tools/lib': No such file or directory  
 + make install  
 make[1]: Entering directory '/rootPool/root_fs/sources/binutils/binutils-gdb/build'  
 /bin/sh ../mkinstalldirs /tools /tools  
 make[1]: Nothing to be done for 'install-target'.  
 mkdir -p -- /tools /tools  
 mkdir: cannot create directory '/tools': Permission denied  
 mkdir: cannot create directory '/tools': Permission denied  
 make[1]: *** [Makefile:2505: installdirs] Error 1  
 make[1]: Leaving directory '/rootPool/root_fs/sources/binutils/binutils-gdb/build'  
 make: *** [Makefile:2258: install] Error 2  
   

What I forgot is that in section 4, I ran
 livecd gentoo # ln -sv $LFS/tools /   
  '/tools' -> '/rootPool/root_fs//tools'   
on the host system, and that this symlink would NOT be persistent across a reboot, so I needed to add this item to the persistent startup script to recreate the symbolic link.  After rebooting I now have a /tools link, as can be seen here:
 gentoo@livecd ~ $ ls -l /  
 total 1  
 drwxr-xr-x   2 root   root  1660 Jul  3  2016 bin  
 drwxr-xr-x   2 root   root    28 Jul  3  2016 boot  
 drwxr-xr-x  18 root   root  5700 Jan 13 01:13 dev  
 drwxr-xr-x 127 root   root   420 Jan 13 01:19 etc  
 drwxr-xr-x   4 gentoo users    5 May 28  2019 gentooScratch  
 drwxr-xr-x   4 root   root    60 Jan 13 01:13 home  
 lrwxrwxrwx   1 root   root     5 Jun 23  2016 lib -> lib64  
 drwxr-xr-x   2 root   root  1070 Jun 23  2016 lib32  
 drwxr-xr-x  18 root   root    60 Jan 13 01:13 lib64  
 drwxr-xr-x   2 root   root    28 Oct 28  2015 media  
 drwxr-xr-x   6 root   root   120 Jan 13 01:12 mnt  
 drwxr-xr-x   2 root   root    40 Jan 13 01:12 newroot  
 drwxr-xr-x   5 root   root    64 Oct 28  2015 opt  
 dr-xr-xr-x 626 root   root     0 Jan 13 01:12 proc  
 drwx------   3 root   root    60 Jan 13 01:13 root  
 drwxr-xr-x   3 root   root     3 Jun  3  2019 rootPool  
 drwxr-xr-x  19 root   root   660 Jan 13 01:47 run  
 drwxr-xr-x   2 root   root  5023 Jul  3  2016 sbin  
 dr-xr-xr-x  13 root   root     0 Jan 13 01:12 sys  
 drwxrwxrwt   6 root   root   220 Jan 13 01:47 tmp  
 lrwxrwxrwx   1 root   root    23 Jan 13 01:19 tools -> /rootPool/root_fs/tools  
 drwxr-xr-x  21 root   root   180 Jan 13 01:17 usr  
 drwxr-xr-x  13 root   root   100 Jan 13 01:13 var  
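The addition to the persistent startup script can be made idempotent so re-running it after a reboot is harmless.  A sketch: the function name and the root parameter are my own (the parameter exists so the logic can be exercised outside of /); $LFS must already be set.

```shell
#!/bin/bash
# Recreate the /tools symlink only when it is missing.  For me
# $LFS=/rootPool/root_fs.  The target root defaults to / but is
# parameterized for testing.
ensure_tools_link() {
  local root=${1:-}
  if [ ! -e "$root/tools" ]; then
    ln -sv "$LFS/tools" "$root/tools"
  fi
}
```

Run as root from the startup script: ensure_tools_link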
   

Now I can roll back to the rootPool/root_fs/sources/binutils@lfs_prep_section_5.4_pass1 snapshot I made.  This time I will redeploy my script, make a new snapshot that includes the build script, and run again.
 gentoo@livecd ~ $ sudo zfs rollback -r rootPool/root_fs/sources/binutils@lfs_prep_section_5.4_pass1   
 gentoo@livecd ~ $ mv build_binutils_temp_pass1.sh /rootPool/root_fs/sources/binutils   
 gentoo@livecd ~ $ /sbin/zfs snapshot rootPool/root_fs/sources/binutils@lfs_prep_section_5.4_pass1_scripted  
 gentoo@livecd ~ $ /sbin/zfs snapshot rootPool/root_fs/tools@lfs_prep_section_5.4_pass1_scripted  
 gentoo@livecd ~ $ mv lfs.bash_profile .bash_profile  
 gentoo@livecd ~ $ mv lfs.bashrc .bashrc                                     
 gentoo@livecd ~ $ source .bash_profile                                      
 gentoo@livecd ~ $ env                                              
 TERM=xterm                                                    
 MAKEFLAGS=-j44                                                  
 LC_ALL=POSIX                                                   
 LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.cfg=00;32:*.conf=00;32:*.diff=00;32:*.doc=00;32:*.ini=00;32:*.log=00;32:*.patch=00;32:*.pdf=00;32:*.ps=00;32:*.tex=00;32:*.txt=00;32:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:  
 LFS=/rootPool/root_fs  
 PATH=/tools/bin:/sbin:/bin:/usr/bin  
 PWD=/home/gentoo  
 LFS_TGT=x86_64-lfs-linux-gnu  
 PS1=\[\033]0;\u@\h:\w\007\]\[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\]   
 SHLVL=1  
 HOME=/home/gentoo  
 _=/bin/env  
 gentoo@livecd ~ $ ls -ld /tools   
 lrwxrwxrwx 1 root root 23 Jan 13 01:19 /tools -> /rootPool/root_fs/tools  
 gentoo@livecd ~ $ cd $LFS/sources/binutils  
 gentoo@livecd /rootPool/root_fs/sources/binutils $ ls  
 binutils-gdb build_binutils_temp_pass1.sh  
 gentoo@livecd /rootPool/root_fs/sources/binutils $ time ./build_binutils_temp_pass1.sh...
[WHOLE BUNCH OF CONFIGURE, MAKE, COMPILER OUTPUT REMOVED FOR BREVITY]
...     
 make[3]: Leaving directory '/rootPool/root_fs/sources/binutils/binutils-gdb/build/gdb'  
 make[2]: Leaving directory '/rootPool/root_fs/sources/binutils/binutils-gdb/build/gdb'  
 make[1]: Leaving directory '/rootPool/root_fs/sources/binutils/binutils-gdb/build'  
   
 real  0m58.416s  
 user  10m43.418s  
 sys   1m8.464s  
 gentoo@livecd /rootPool/root_fs/sources/binutils $
   

SBU Timings

Because I am using ZFS snapshots, I can perform rollbacks as many times as I want to repeat the build steps.  This will actually allow me to benchmark the approximate time per SBU with different thread counts.  Also, if there are build errors, I can effectively reset, determine what went wrong, fix it, and rebuild from a clean state as if I had not attempted the build at all:
 gentoo@livecd ~ $ sudo zfs rollback rootPool/root_fs/sources/binutils@lfs_prep_section_5.4_pass1_scripted  
 gentoo@livecd ~ $ sudo zfs rollback rootPool/root_fs/tools@lfs_prep_section_5.4_pass1_scripted  
 gentoo@livecd ~ $ export MAKEFLAGS='-j1'  
 gentoo@livecd ~ $ cd /rootPool/root_fs/sources/binutils/  
 gentoo@livecd /rootPool/root_fs/sources/binutils $ time ./build_binutils_temp_pass1.sh
+ cd binutils-gdb
+ mkdir -v build
mkdir: created directory 'build'
+ cd build
+ ../configure --prefix=/tools --with-sysroot=/rootPool/root_fs --with-lib-path=/tools/lib --target=x86_64-lfs-linux-gnu --disable-nls --disable-werror
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking target system type... x86_64-lfs-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c...
[WHOLE BUNCH OF CONFIGURE, MAKE, COMPILER OUTPUT REMOVED FOR BREVITY]
...make[1]: Nothing to be done for 'install-target'.
make[1]: Leaving directory '/rootPool/root_fs/sources/binutils/binutils-gdb/build'
real    6m52.944s
user    5m50.666s
sys     0m35.333s        

A quick explanation of the time values.  The 'real' time is the wall-clock time that passed between entering the command and its completion.  The 'user' time is the accounting of how much CPU time was spent executing userland code.  The 'sys' time is the accounting of how much CPU time was spent executing kernel/syscall functions, including IO waits.  As a simplified example, in the first scenario, with real/user/sys times of 0:58/10:43/1:08 and code running across all 32 hardware threads, each thread on average spent about 2 seconds waiting on IO or on the OS performing a task on its behalf, and about 20 seconds actually compiling.  Because of the multiple cores/threads available, the system was able to schedule all the work in parallel in about a minute.  In the second example, since we ran with only 1 thread on 1 core, nothing was done in parallel, so the total/real time had to cover that one thread performing all the user and system tasks in order, taking as much or more wall-clock time as the actual work.
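A toy demonstration of the difference: pure shell arithmetic burns userland CPU, while sleep only adds wall-clock time.  The demo function is mine, and exact numbers will vary by machine.

```shell
#!/bin/bash
# Burn a little userland CPU, then sleep: 'real' covers both phases,
# 'user' counts only the loop, and the sleep is charged to neither
# user nor sys.
demo() {
  local i=0
  while [ "$i" -lt 100000 ]; do i=$((i+1)); done  # userland work
  sleep 1                                          # wall-clock only
  echo "$i"
}
time demo
```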

The next question you may ask is: why were the total user and sys times lower with just 1 thread?  There are several things that could cause this difference in time.
  1. Thread-Resource Contention.

    This is where one thread cannot execute because another thread is currently exclusively using a resource it needs.  This resource could be an execution unit on the CPU itself, a memory operation to main memory, a file operation on the disk, or some other resource both compiling threads need access to.  The type of resource the thread is waiting on determines which accounting time is incremented while it waits.
  2. Cached Resources.

    Since the '1 thread' version was run after the '44 thread' version, some items may have already been loaded into memory, so the make process did not actually have to go back to disk during the 2nd compile.  This includes the toolchain (make, shell, compiler, assembler, linker, libraries), which would otherwise load from the slow gentoo USB flash drive, and whatever portions of the source code were already in the ZFS ARC, based on usage and available memory.  Typically when benchmarking, one throws out the 'first' run, because that run warms up the tools and data into main memory and/or other system caches (assuming enough memory to hold the data set).
  3. Compile Dependencies.

    Not all of the items can be built in parallel.  Certain steps require other portions of the build to have already completed.  With multiple threads, time can be spent waiting for a previous step to finish before the current step can run on the current thread.  For the single-thread version, all steps are done sequentially, so there is no time spent waiting on dependencies; a step simply is not started until all of its prerequisites are complete.
  4. Other Computer Tasks.

    Other programs were running on the computer during the compile.  This includes background, scheduled, and user UI tasks that may have been going on while I was waiting for the compile to complete.
To get a good feel for my system's SBU value, I decided to re-run this build with different thread settings.  Using the commands in the last example, but with different numeric values of '-j#', I came up with the following data:


 Threads      Real       User       Sys
   1       6:52.127   5:49.268   0:35.268
   2       3:39.783   5:54.943   0:35.745
   4       1:59.355   6:03.927   0:37.309
   8       1:24.866   6:28.973   0:41.800
  12       1:04.054   6:49.164   0:44.180
  16       1:00.332   7:23.114   0:48.328
  17       0:59.636   7:37.813   0:50.493
  24       0:58.843   9:26.583   1:00.726
  32       0:58.156  10:48.781   1:08.731
  32*      0:57.727  10:45.696   1:06.232
  33       0:58.141  10:55.746   1:08.551
  40       0:58.279  10:56.867   1:08.171
  44       0:58.658  11:03.175   1:10.394
  64       0:58.945  11:01.951   1:08.338

The * on the 2nd run of 32 threads indicates output was redirected to a file instead of the text console.
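The table above was produced by hand, but the sweep is easy to script.  A generic sketch (the harness and its parameters are my own; in my case the reset step is the pair of zfs rollbacks shown earlier and the build step is ./build_binutils_temp_pass1.sh):

```shell
#!/bin/bash
# Run a build once per thread count, resetting state before each run.
# $1 = reset command, $2 = build command, remaining args = -j values.
run_sweep() {
  local reset_cmd=$1 build_cmd=$2 j
  shift 2
  for j in "$@"; do
    $reset_cmd || return 1          # restore the pristine state
    MAKEFLAGS="-j$j" $build_cmd     # build with this thread count
  done
}

# Example wiring (not run here), where my_reset performs the two zfs
# rollbacks shown earlier:
#   run_sweep my_reset ./build_binutils_temp_pass1.sh 1 2 4 8 16 32 44
```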

It looks like the binutils build had diminishing returns after 16 threads, and compiled fastest at 32 threads.  Using more threads (such as my current default of 44) does not significantly slow things back down past 32, so I will probably leave that value set.  Note that we now have a base SBU value of about 1 minute.  When compiling with multiple threads there will be more variance per SBU based on the package layout, design, make configuration, and internal dependencies; because of this, some packages may do significantly better or worse than 1 minute per SBU.

Final Cleanup.


Once a package has successfully built, we are left with some cleanup steps.  The LFS book talks about deleting the unpacked package.  Instead, we will perform a different set of steps: snapshot the build and install, perform a clean, snapshot again, and then, if needed, manually clean any remaining changes and take a final snapshot, as follows:
 gentoo@livecd /rootPool/root_fs/sources/binutils $ zfs snapshot rootPool/root_fs/sources/binutils@lfs_built_section_5.4_pass1  
 gentoo@livecd /rootPool/root_fs/sources/binutils $ zfs snapshot rootPool/root_fs/tools@lfs_install_section_5.4_pass1  
 gentoo@livecd /rootPool/root_fs/sources/binutils $ cd binutils-gdb/build/  
 gentoo@livecd /rootPool/root_fs/sources/binutils/binutils-gdb/build $ make clean && make distclean  
 rm -f *.a TEMP errs core *.o *~ \#* TAGS *.E *.log  
 make[1]: Entering directory '/rootPool/root_fs/sources/binutils/binutils-gdb/build'  
 make[1]: Nothing to be done for 'clean-target'.  
 Doing clean in binutils  
 [...ADDITIONAL MAKE OUTPUT FOR CLEAN OMITTED FOR BREVITY...]  
 find . -name config.cache -exec rm -f {} \; \; 2>/dev/null  
 make: [Makefile:2104: local-distclean] Error 1 (ignored)  
 gentoo@livecd /rootPool/root_fs/sources/binutils/binutils-gdb/build $ cd ..  
 gentoo@livecd /rootPool/root_fs/sources/binutils/binutils-gdb $ rm -rf build  
 gentoo@livecd /rootPool/root_fs/sources/binutils/binutils-gdb $ zfs snapshot rootPool/root_fs/sources/binutils@lfs_cleaned_section_5.4_pass1  
 gentoo@livecd /rootPool/root_fs/sources/binutils/binutils-gdb $ git status  
 HEAD detached at binutils-2_33_1  
 nothing to commit, working directory clean
 gentoo@livecd /rootPool/root_fs/sources/binutils/binutils-gdb $ cd ../..
 gentoo@livecd /rootPool/root_fs/sources $                 
In this case, git did not show any modified or untracked files, so the source repository is clean, and this section is now complete.
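Since I will repeat this cleanliness check after every package, it can be scripted: git's --porcelain output is empty when there is nothing modified or untracked.  The function name is my own.

```shell
#!/bin/bash
# Succeeds when the given repository has no modified or untracked files,
# i.e. it is safe to consider the source tree clean.
repo_clean() {
  [ -z "$(git -C "$1" status --porcelain)" ]
}

# e.g.: repo_clean binutils-gdb && echo "safe to snapshot"
```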

Next time I will start in on section 5.5 to cross-compile gcc for the temporary system.

LFS Section 4

Final Preparations

Section 4.1 Introduction 

Final tasks to prep the system for the temporary bootstrap build.  It discusses the SBU ('Standard Build Unit') measurement of time, setting up an unprivileged user for the builds, etc.  No real work to be done for this section; on to the next.

Section 4.2 Tools directory

Our bootstrap LFS system will build the initial toolchain under $LFS/tools.  This step sets up the given path and symbolic links to separate our initial LFS toolchain from our host gentoo's toolchain.  Remember that in a previous step we built a set of persistent changes that we can apply after a gentoo reboot.  One of those persistent changes ensures that the $LFS variable is set to our initial target bootstrap filesystem, for which we are using the ZFS rootPool/root_fs dataset instead of a standard EXT4 partition.  We can see that these changes are still in place on our recently rebooted system by issuing the following command:
 gentoo@livecd ~ $ echo $LFS  
 /rootPool/root_fs/   
 gentoo@livecd ~ $     

The instructions say to run the following commands as root:
      mkdir -v $LFS/tools
      ln -sv $LFS/tools /
However, we will deviate slightly: since we are on ZFS, we will create a new tools dataset and snapshot it instead of making a plain old directory:
 gentoo@livecd ~ $ su  
 livecd gentoo # zfs create rootPool/root_fs/tools  
 livecd gentoo # zfs snapshot rootPool/root_fs/tools@init  
 livecd gentoo # ln -sv $LFS/tools /  
 '/tools' -> '/rootPool/root_fs//tools'  
 livecd gentoo # exit  
 exit  
 gentoo@livecd ~ $       

Section 4.3 Temporary LFS user.

Section 4.3 recommends creating an LFS user instead of performing everything as 'root'.  It recommends a dedicated user rather than a normal user so that it will be 'easier' to clean up all the temporary bootstrap stuff later.  Since our host system is a gentoo live environment that already contains a non-root user, 'gentoo', we will just continue using the 'gentoo' user.  One reason I am not creating an LFS-specific user is that it would not be persistent across reboots without a lot of additional work.  In addition, the instructions include changing the tools and sources directories to be owned by the lfs user instead of root.  My sources are already owned by the gentoo user (including ZFS-delegated permissions for management); I will simply grant 'gentoo' the same access to the tools dataset instead.  So I will be skipping the following recommended commands run as root:

      groupadd lfs
      useradd -s /bin/bash -g lfs -m -k /dev/null lfs
      passwd lfs
      chown -v lfs $LFS/tools 

      chown -v lfs $LFS/sources

Instead I will be performing the following commands for ZFS with the existing gentoo user:
 gentoo@livecd ~ $ sudo chown -v gentoo:users /rootPool/root_fs/tools/  
 changed ownership of '/rootPool/root_fs/tools/' from root:root to gentoo:users  
 gentoo@livecd ~ $ sudo chmod -v a+wt $LFS/tools  
 mode of '/rootPool/root_fs//tools' changed from 0755 (rwxr-xr-x) to 1777 (rwxrwxrwt)  
 gentoo@livecd ~ $ sudo zfs allow users mount,create,snapshot,diff rootPool/root_fs/tools  
 gentoo@livecd ~ $   
   

Section 4.4 Setting Up the Environment.

We will be changing the gentoo user's environment to prevent contamination from the host gentoo system.

.bash_profile

We will update the profile so that on login a new bash shell will be executed with a clean environment, different from the system environment.
 gentoo@livecd ~ $ cat > ~/.bash_profile << "EOF"  
 > exec env -i HOME=$HOME TERM=$TERM PS1='\u:\w\$ ' /bin/bash  
 > EOF  
 gentoo@livecd ~ $  
   

.bashrc

We will update the bash resource settings so that we only have the environment variables we want.  In addition we are setting the umask to mask off the group/world writable bits; this makes the default mode of new files 644 instead of 666.  The 'set +h' option turns off bash path hashing, ensuring that bash always searches the PATH for an executable instead of caching the command's location.  This ensures that as we build new programs in the target bootstrap environment, updated tools/paths will be used as they are installed or replaced.
 gentoo@livecd ~ $ cat >~/.bashrc << "EOF"  
 > set +h  
 > umask 022  
 > LFS=/rootPool/root_fs  
 > LC_ALL=POSIX  
 > LFS_TGT=$(uname -m)-lfs-linux-gnu  
 > PATH=/tools/bin:/sbin:/bin:/usr/bin  
 > export LFS LC_ALL LFS_TGT PATH  
 > EOF  
 gentoo@livecd ~ $   
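A quick check of the umask behavior described above.  The helper name is mine, and it uses GNU stat's -c option, which the gentoo livecd's coreutils provide.

```shell
#!/bin/bash
# With umask 022, a newly created file gets mode 666 & ~022 = 644.
show_umask_mode() {
  local d f
  d=$(mktemp -d)
  f=$d/example
  ( umask 022 && touch "$f" )   # create the file under umask 022
  stat -c '%a' "$f"             # print its octal mode -> 644
}
show_umask_mode
```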
   

Sourcing the New Profile

Finally, we will snapshot these changes and source them for testing, after which we will reboot to make sure they are persistent across reboots.
 gentoo@livecd ~ $ sudo zfs snapshot gentooScratch/persistentHomeGentoo@section_4.4_environment  
 gentoo@livecd ~ $ source ~/.bash_profile  
 gentoo@livecd ~ $ env      
 TERM=xterm  
 LC_ALL=POSIX  
 LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.cfg=00;32:*.conf=00;32:*.diff=00;32:*.doc=00;32:*.ini=00;32:*.log=00;32:*.patch=00;32:*.pdf=00;32:*.ps=00;32:*.tex=00;32:*.txt=00;32:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:  
 LFS=/rootPool/root_fs  
 PATH=/tools/bin:/sbin:/bin:/usr/bin  
 PWD=/home/gentoo  
 LFS_TGT=x86_64-lfs-linux-gnu  
 PS1=\[\033]0;\u@\h:\w\007\]\[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\]   
 SHLVL=1  
 HOME=/home/gentoo  
 _=/bin/env  
 gentoo@livecd ~ $ sudo reboot
   

I found that I need to add the following two lines to the end of my persistent script:
 gentoo@livecd ~ $ tail -n 2 /mnt/cdrom/scratch/mountfs.sh   
 mv /home/gentoo/.bash_profile /home/gentoo/lfs.bash_profile  
 mv /home/gentoo/.bashrc /home/gentoo/lfs.bashrc  
 gentoo@livecd ~ $      

This allows me, after a reboot, to have my full environment for bringing up my gentoo X-Windows desktop, firefox for updating this blog, and laying out my work terminals.  Once I am ready to start performing LFS steps in my target environment, I need only run the following commands to be ready to work:
      mv lfs.bash_profile .bash_profile
      mv lfs.bashrc .bashrc


At that point, sourcing the profile or starting a new bash session will set up LFS access (with my existing gentoo account).
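The two mv commands can be wrapped in a small helper so the swap is one command.  The function name is my own; after running it, source ~/.bash_profile or open a new terminal to enter the LFS environment.

```shell
#!/bin/bash
# Move the parked LFS dotfiles into their active locations.
lfs_env() {
  mv "$HOME/lfs.bash_profile" "$HOME/.bash_profile"
  mv "$HOME/lfs.bashrc" "$HOME/.bashrc"
}
```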

Section 4.5 More about SBUs

The LFS document talks about SBUs, or Standard Build Units, since many would like to know how long it will take to build the packages and the system as a whole.  This measure is used because different equipment takes different amounts of time; a value of '1' is the amount of time it takes to build the first pass of the 'binutils' package, and values are approximate.  One factor that can speed up builds is allowing make to spawn parallel jobs for the different compilation units.  In the past I have found that launching approximately 1.25 to 1.5 times the available processor threads will keep most of the CPU fully loaded, while having additional jobs that can run while others wait on the IO-intensive portions of a build.  The value I will use for most builds is 16 cores * 2 SMT/core * 1.37 make load, which is about 44 jobs.  Using this value I will update my .bashrc settings to the following:
 gentoo@livecd ~ $ cat .bashrc   
 set +h  
 umask 022  
 LFS=/rootPool/root_fs  
 LC_ALL=POSIX  
 LFS_TGT=$(uname -m)-lfs-linux-gnu  
 PATH=/tools/bin:/sbin:/bin:/usr/bin  
 MAKEFLAGS='-j44'  
 export LFS LC_ALL LFS_TGT PATH MAKEFLAGS  
 gentoo@livecd ~ $      
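The -j44 above comes from the 1.37x heuristic; it could also be computed from the running system instead of hard-coded.  A sketch: make_jobs is my own helper, and nproc reports the available hardware threads.

```shell
#!/bin/bash
# 1.37x the given thread count, rounded to the nearest integer.
make_jobs() {
  echo $(( ($1 * 137 + 50) / 100 ))
}

# e.g. in .bashrc:  MAKEFLAGS="-j$(make_jobs "$(nproc)")"
echo "-j$(make_jobs 32)"   # 32 hardware threads -> -j44
```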

Section 4.6 Test Suites

The latest LFS book recommends not bothering with the test suites of the given packages while running the steps in Chapter 5.  There are also details on test failures due to lack of ptys, and workaround information if you do perform the tests.  Basically this last section is just informational with nothing actually needing to be done.

End of section.

This is all of section 4 tasks/information, in my next blog entry I will start in on Chapter 5 to construct the temporary bootstrap system.

LFS Updates

Since I got sidetracked for so long, the LFS book has been updated since I originally downloaded it back in May of 2019.  As I was performing the steps for Section 3.3 earlier this past week, I found that the patches section on the LFS server no longer matched what was in my book, so now it is time to update the book, and anything else that may have gotten out of date over the past 6 months.

LFS Book update.

 gentoo@livecd ~ $ cd /gentooScratch/sources/lfs_book/  
 gentoo@livecd /gentooScratch/sources/lfs_book $ ls  
 BOOK book_output  
 gentoo@livecd /gentooScratch/sources/lfs_book $ cd BOOK/  
 gentoo@livecd /gentooScratch/sources/lfs_book/BOOK $ ls  
 INSTALL  appendices       chapter01 chapter04 chapter07 general.ent lfs-bootscripts-20190524.tar.bz2 obfuscate.sh pdf-fixups.sh      stylesheets  
 Makefile aux-file-data.sh chapter02 chapter05 chapter08 images      lfs-latest.php                   packages.ent process-scripts.sh tidy.conf  
 README   bootscripts      chapter03 chapter06 chapter09 index.xml   make-aux-files.sh                patches.ent  prologue           udev-lfs  
 gentoo@livecd /gentooScratch/sources/lfs_book/BOOK $ svn update  
 Updating '.':  
 U  chapter01/changelog.xml  
 U  chapter01/whatsnew.xml  
 U  chapter06/meson.xml  
 U  chapter06/libcap.xml  
 U  chapter06/systemd.xml  
 U  chapter06/chapter06.xml  
 U  chapter06/bison.xml  
 U  chapter06/shadow.xml  
 U  chapter06/bash.xml  
 U  chapter06/libelf.xml  
 U  chapter06/libffi.xml  
 U  chapter06/python.xml  
 U  chapter06/gettext.xml  
 U  chapter06/check.xml  
 U  chapter06/binutils.xml  
 U  chapter06/linux-headers.xml  
 U  chapter06/glibc.xml  
 U  chapter06/openssl.xml  
 U  chapter06/findutils.xml  
 U  chapter06/gcc.xml  
 U  chapter06/creatingdirs.xml  
 U  chapter06/bc.xml  
 U  chapter06/ninja.xml  
 U  chapter06/eudev.xml  
 U  chapter06/libtool.xml  
 U  chapter06/vim.xml  
 U  chapter06/util-linux.xml  
 U  chapter06/e2fsprogs.xml  
 U  chapter06/perl.xml  
 U  packages.ent  
 U  chapter09/theend.xml  
 U  chapter09/reboot.xml  
 U  general.ent  
 U  appendices/dependencies.xml  
 U  chapter05/bzip2.xml  
 U  chapter05/linux-headers.xml  
 U  chapter05/m4.xml  
 U  chapter05/perl.xml  
 U  chapter05/findutils.xml  
 U  chapter03/patches.xml  
 U  patches.ent  
 U  chapter04/addinguser.xml  
 U  bootscripts/ChangeLog  
 U  bootscripts/lfs/init.d/checkfs  
 U  bootscripts/lfs/init.d/sysklogd  
 U  bootscripts/lfs/init.d/network  
 U  bootscripts/lfs/init.d/mountfs  
 U  bootscripts/lfs/init.d/swap  
 U  bootscripts/lfs/init.d/console  
 U  bootscripts/lfs/init.d/localnet  
 U  bootscripts/lfs/init.d/setclock  
 U  bootscripts/lfs/init.d/udev_retry  
 U  bootscripts/lfs/init.d/modules  
 U  bootscripts/lfs/init.d/mountvirtfs  
 U  bootscripts/lfs/init.d/sysctl  
 U  bootscripts/lfs/init.d/udev  
 U  prologue/why.xml  
 U  prologue/standards.xml  
 U  prologue/bookinfo.xml  
 U  chapter07/systemd-custom.xml  
 U  chapter08/kernel.xml  
 U  chapter02/hostreqs.xml  
 U  chapter02/creatingpartition.xml  
 U  lfs-latest.php  
 U  Makefile  
 U  aux-file-data.sh  
 U  make-aux-files.sh  
 Updated to revision 11722. 
 gentoo@livecd /gentooScratch/sources/lfs_book/BOOK $ sudo zfs snapshot gentooScratch/sources/lfs_book@SVN_UPDATE_rev_11722 
 gentoo@livecd /gentooScratch/sources/lfs_book/BOOK $    

So I have updated from revision 11610 to 11722, meaning about 112 revisions have been checked in since I originally obtained the book.  The next step after updating is, of course, to rebuild the book.
 gentoo@livecd /gentooScratch/sources/lfs_book/BOOK $ make clean; make REV=systemd BASEDIR=/gentooScratch/sources/lfs_book/book_output   
 make: *** No rule to make target 'clean'. Stop.  
 Creating and cleaning /home/gentoo/tmp  
 Processing bootscripts...  
 Adjusting for revision systemd...  
 Validating the book...  
 Validation complete.  
 Generating profiled XML for XHTML...  
 Generating chunked XHTML files at /gentooScratch/sources/lfs_book/book_output/ ...  
 Copying CSS code and images...  
 Running Tidy and obfuscate.sh...  
 Generating consolidated wget list at /gentooScratch/sources/lfs_book/book_output/wget-list ...  
 Generating consolidated md5sum file at /gentooScratch/sources/lfs_book/book_output/md5sums ...  
 gentoo@livecd /gentooScratch/sources/lfs_book/BOOK $ sudo zfs snapshot gentooScratch/sources/lfs_book@book_11722_built  
 gentoo@livecd /gentooScratch/sources/lfs_book/BOOK $   
   
After reloading the book in my browser I now get the updated version string, which is different from the value I got back in May in that blog entry.

LFS Section 2.3 Requirement Changes

A quick browse through the 'Host System Requirements' shows that my Gentoo system still meets or exceeds the expected versions.  The only real difference I noted is that the requirements now say GCC 6.2 instead of GCC 5.2.  I am not sure whether an exact one-major-version bump is a typo or not; my system uses GCC 5.4.0.  My target version repository is currently set to version 9.1.  If I have trouble building my bootstrap version, I can always build an updated GCC 6.2 chain from my repository, and then use that to build the 9.1.
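Since the required host versions shift between book revisions, a small version-compare helper can spot-check them.  This is just an illustrative sketch relying on GNU coreutils' `sort -V`; the `ver_ok` name and the sample versions are my own:

```shell
# ver_ok MIN ACTUAL -- succeeds when ACTUAL >= MIN (uses GNU sort -V).
# The helper name and sample version numbers are illustrative only.
ver_ok() {
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

ver_ok 6.2 5.4.0 && echo "gcc ok" || echo "gcc too old"   # host GCC 5.4.0
ver_ok 6.2 9.1.0 && echo "gcc ok" || echo "gcc too old"   # repo GCC 9.1
```

The same pattern works for any of the host-requirement tools by feeding in the output of `tool --version`.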

LFS Section 3.1 Easy Tar Ball Download Changes

I re-perform the easy step of getting the latest recommended tarballs, as in my previous 'LFS sections 2.5 through 3.1' blog entry, using the following command:

      wget --input-file=/gentooScratch/sources/lfs_book/book_output/wget-list --continue --directory-prefix=$LFS/sources/lfs_tarballs

and

      zfs snapshot rootPool/root_fs/sources/lfs_tarballs@lfs_book_20200109_updated_tars

Which downloads the following:
  • acl-2.2.53.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • attr-2.4.48.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • autoconf-2.69.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • automake-1.16.1.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • bash-5.0.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • bc-2.4.0.tar.gz  [249154 bytes, previous version was bc-1.07.1.tar.gz ] 
  • binutils-2.33.1.tar.xz [21490848 bytes, previous was binutils-2.32.tar.xz ] 
  • bison-3.5.tar.xz [2341024 bytes, previous was bison-3.3.2.tar.xz ] 
  • bzip2-1.0.8.tar.gz [810029 bytes, previous was bzip2-1.0.6.tar.gz ] 
  • check-0.13.0.tar.gz [771029 bytes, previous was check-0.12.0.tar.gz ]  
  • coreutils-8.31.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • dbus-1.12.16.tar.gz [2093296 bytes, previous was dbus-1.12.12.tar.gz ] 
  • dejagnu-1.6.2.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • diffutils-3.7.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • e2fsprogs-1.45.5.tar.gz [sslv3 alert handshake failure / Unable to establish SSL connection.]   
  • elfutils-0.178.tar.bz2 [9007557 bytes, previous was elfutils-0.176.tar.bz2 ] 
  • eudev-3.2.9.tar.gz [1959836 bytes, previous was eudev-3.2.7.tar.gz ]
  • expat-2.2.9.tar.gz [sslv3 alert handshake failure / Unable to establish SSL connection.]   
  • expect5.45.4.tar.gz [sslv3 alert handshake failure / Unable to establish SSL connection.]   
  • file-5.38.tar.gz [9325288 bytes, previous was file-5.37.tar.gz ] 
  • findutils-4.7.0.tar.xz [1895048 bytes, previous was findutils-4.6.0.tar.gz ] 
  • flex-2.6.4.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • gawk-5.0.1.tar.xz [3136004 bytes, previous was gawk-5.0.0.tar.xz] 
  • gcc-9.2.0.tar.xz [70607648 bytes, previous was gcc-9.1.0.tar.xz] 
  • gdbm-1.18.1.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • gettext-0.20.1.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • glibc-2.30.tar.xz  [16576920 bytes, previous was glibc-2.29.tar.xz] 
  • gmp-6.1.2.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • gperf-3.1.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • grep-3.4.tar.xz [1555820 bytes, previous was grep-3.3.tar.xz] 
  • groff-1.22.4.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • grub-2.04.tar.xz  [6393864 bytes, previous was grub-2.02.tar.xz] 
  • gzip-1.10.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • iana-etc-2.30.tar.bz2 [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • inetutils-1.9.4.tar.xz  [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • intltool-0.51.0.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • iproute2-5.4.0.tar.xz [741328 bytes, previous was iproute2-5.1.0.tar.xz] 
  • kbd-2.2.0.tar.xz [1115220 bytes, previous was kbd-2.0.4.tar.xz] 
  • kmod-26.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • less-551.tar.gz [347007 bytes, previous was less-530.tar.gz]  
  • lfs-bootscripts-20191031.tar.xz  [32632 bytes, previous was lfs-bootscripts-20190524.tar.bz2] 
  • libcap-2.30.tar.xz [98528 bytes, previous was libcap-2.27.tar.xz]  
  • libffi-3.3.tar.gz  [1305466 bytes, previous was libffi-3.2.1.tar.gz] 
  • libpipeline-1.5.2.tar.gz [994071 bytes, previous was libpipeline-1.5.1.tar.gz]  
  • libtool-2.4.6.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • linux-5.4.8.tar.xz [109456792 bytes, previous was linux-5.1.3.tar.xz] 
  • m4-1.4.18.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • make-4.2.1.tar.gz [1977576 bytes, previous was make-4.2.1.tar.bz2 (same ver; diff archive)]  
  • man-db-2.9.0.tar.xz [1857216 bytes, previous was man-db-2.8.5.tar.xz]  
  • man-pages-5.04.tar.xz [1684044 bytes, previous was man-pages-5.01.tar.xz]  
  • meson-0.53.0.tar.gz [1548224 bytes, previous was meson-0.50.1.tar.gz] 
  • mpc-1.1.0.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]  
  • mpfr-4.0.2.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • ninja-1.9.0.tar.gz [190860 bytes, new; did not previously have a ninja tarball ] 
  • ncurses-6.1.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • openssl-1.1.1d.tar.gz [8845861 bytes, previous was openssl-1.1.1b.tar.gz] 
  • patch-2.7.6.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • perl-5.30.1.tar.xz [12367844 bytes, previous was perl-5.28.2.tar.xz]  
  • pkg-config-0.29.2.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • procps-ng-3.3.15.tar.xz [sslv3 alert handshake failure / Unable to establish SSL connection.]   
  • psmisc-23.2.tar.xz [sslv3 alert handshake failure / Unable to establish SSL connection.]    
  • Python-3.8.1.tar.xz [17828408 bytes, previous was Python-3.7.3.tar.xz]  
  • python-3.8.1-docs-html.tar.bz2  [6527362 bytes, previous was python-3.7.3-docs-html.tar.bz2] 
  • readline-8.0.tar.gz [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • sed-4.7.tar.xz [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • shadow-4.8.tar.xz [1609060 bytes, previous was shadow-4.6.tar.xz]  
  • sysklogd-1.5.1.tar.gz  [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • systemd-244.tar.gz [8445963 bytes, previous was systemd-241.tar.gz]  
  • systemd-man-pages-244.tar.xz [517875 bytes, previous was systemd-man-pages-241.tar.xz]  
  • sysvinit-2.96.tar.xz [122164 bytes, previous was sysvinit-2.94.tar.xz]  
  • tar-1.32.tar.xz  [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • tcl8.6.10-src.tar.gz [sslv3 alert handshake failure / Unable to establish SSL connection.]   
  • texinfo-6.7.tar.xz [4337984 bytes, previous was texinfo-6.6.tar.xz]  
  • tzdata2019c.tar.gz [392087 bytes, previous was tzdata2019a.tar.gz] 
  • udev-lfs-20171102.tar.xz [10280 bytes, previous was udev-lfs-20171102.tar.bz2 (same ver; diff archive)]  
  • util-linux-2.34.tar.xz [4974812 bytes, previous was util-linux-2.33.2.tar.xz]  
  • vim-8.2.0024.tar.gz [14650417 bytes, previous was vim-8.1.tar.bz2] 
  • XML-Parser-2.46.tar.gz [254763 bytes, previous was XML-Parser-2.44.tar.gz]  
  • xz-5.2.4.tar.xz  [sslv3 alert handshake failure / Unable to establish SSL connection.]   
  • zlib-1.2.11.tar.xz  [sslv3 alert handshake failure / Unable to establish SSL connection.]   
  • bash-5.0-upstream_fixes-1.patch [21672 bytes, new; did not previously have patch ] 
  • bzip2-1.0.8-install_docs-1.patch  [1684 bytes, new did not previously have patch ]  
  • coreutils-8.31-i18n-1.patch   [FILE ALREADY RETRIEVED; NOTHING TO DO]   
  • glibc-2.30-fhs-1.patch [2804 bytes, previous was glibc-2.29-fhs-1.patch]  
  • kbd-2.2.0-backspace-1.patch [12640 bytes, previous was kbd-2.0.4-backspace-1.patch]  
  • sysvinit-2.96-consolidated-1.patch [2468 bytes, previous was sysvinit-2.94-consolidated-4.patch] 
Like in my previous blog entry, several files failed to download due to SSL errors.  The list of packages this time is as follows:

  • e2fsprogs-1.45.5.tar.gz
  • expat-2.2.9.tar.gz
  • expect5.45.4.tar.gz (same ver previously manually downloaded)
  • procps-ng-3.3.15.tar.xz (same ver previously manually downloaded)
  • psmisc-23.2.tar.xz (same ver previously manually downloaded)
  • tcl8.6.10-src.tar.gz
  • xz-5.2.4.tar.xz (same ver previously manually downloaded)
  • zlib-1.2.11.tar.xz (same ver previously manually downloaded)
I manually downloaded the three new/updated files that had download errors from the wget run and created a new snapshot.  Afterwards I removed the 'old' versions and created another snapshot.
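The book's build also generated a consolidated md5sums file alongside the wget list, which can verify every tarball after a download run.  Here is a self-contained demo of the `md5sum -c` mechanics in a scratch directory; real usage would run it against the generated md5sums file inside $LFS/sources/lfs_tarballs:

```shell
# Demo of verifying downloads with md5sum -c, using a scratch dir so it
# is self-contained.  Real usage would be something like:
#   cd $LFS/sources/lfs_tarballs
#   md5sum -c /gentooScratch/sources/lfs_book/book_output/md5sums
tmp=$(mktemp -d)
cd "$tmp"
echo "pretend tarball contents" > sample-1.0.tar.gz
md5sum sample-1.0.tar.gz > md5sums   # stand-in for the book's md5sums file
md5sum -c md5sums                    # reports OK / FAILED per file
```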

LFS Section 3.2 All Package Changes

For all the packages I already have repositories; however, some may need to be updated and/or switched to a newer tagged version.  For each package I will fetch repository updates, after which I will perform a single snapshot covering all the updated repositories.
  • acl (book 2.2.53)
    • no updates found, at version v2.2.53 already
  • attr (book 2.4.48)
    •  updates found, at version v2.4.48 already
  • autoconf (book 2.69)
    •  updates found, at version v2.69 already
  • automake (book 1.16.1)
    • updates found, at version v1.16.1 already
  • bash (book 5.0)
    •  updates found, at version bash-5.0 already
  • bc (book 2.4.0)
    • no updates found, at version bc-1.07.1+LFS
    • Need to change origin
      • previous origin used was https://github.com/fivepiece/gnu-bc
      • LFS points to https://github.com/gavinhoward/bc
        • git fetch --tags https://github.com/gavinhoward/bc
        • git checkout 2.4.0
        • git remote set-url origin https://github.com/gavinhoward/bc
  • binutils (book 2.33.1)
    • updates found, updated from binutils-2_32 to binutils-2_33_1
  • bison (book 3.5)
    • updates found, updated from v3.3.2 to v3.4.92 (no tag/release for 3.5...)
  • bzip2 (book 1.0.8)
    • no git updates found, only version 1.0.6
    • Need to change origin
      • Original origin used was git://git.code.sf.net/p/bzip2/bzip2
      • Additional research turns up git://sourceware.org/git/bzip2.git
        • git remote add originallfs git://git.code.sf.net/p/bzip2/bzip2
        • git remote set-url origin git://sourceware.org/git/bzip2.git
        • git fetch --tags
        • git checkout bzip2-1.0.8
  • check (book 0.13.0)
    • updates found, updated from 0.12.0 to 0.13.0
  • coreutils (book 8.31)
    • updates found, at version v8.31 already
  • dbus (book 1.12.16)
    • updates found, updated from dbus-1.12.12 to dbus-1.12.16
  • dejagnu (book 1.6.2)
    • no updates found, at version 1.6.2 already
  • diffutils (book 3.7)
    • updates found, already at version v3.7
  • e2fsprogs (book 1.45.5)
    • no updates found, already at version v1.45.5
  • elfutils (book 0.178)
    • updates found, updated from elfutils-0.176 to elfutils-0.178
  • expat (book 2.2.9)
    • updates found, updated from R_2_2_6 to R_2_2_9
  • expect (book 5.45.4)
    • Not an original git repo; re-performed the expect steps found in 'LFS -- All Packages from Section 3.2' again (after moving the older versions to an 'old' directory.)
    • Tags still only had 'exepect_5_45' as current latest.
  • file (book 5.38)
    • updates found, updated from FILE5_37 to FILE5_38
  • findutils (book 4.7.0)
    • updates found, updated from v4.6.0 to v4.7.0
  • flex (book 2.6.4)
    • updates found, already at v2.6.4
  • gawk (book 5.0.1)
    • updates found, updated from gawk-5.0.0 to gawk-5.0.1
  • gcc (book 9.2.0)
    • updates found, updated from gcc-9_1_0-release to releases/gcc-9.2.0
  • gdbm (book 1.18.1)
    • updates found, already at v1.18.1
  • gettext (book 0.20.1)
    • updates found, already at v0.20.1
  • glibc (book 2.30)
    • updates found, updated from release/2.29/master to glibc-2.30
  • gmp (book 6.1.2)
    • no updates found, only have tag gmp-6.1.0
  • gperf (book 3.1)
    • no updates, already at v3.1
  • grep (book 3.4)
    • updates found, updated from v3.3 to v3.4
  • groff (book 1.22.4)
    • updates found, already at version 1.22.4
  • grub (book 2.04)
    • updates found, already at version 2.04
  • gzip (book 1.10)
    • updates found, already at v1.10
  • iana-etc (book 2.30)
    • No updates, no src repo, local 2.30 extracted tarball, already at 2.30 on head.
  • inetutils (book 1.9.4)
    •  updates found, already at inetutils-1_9_4
  • intltool (book 0.51.0)
    • No updates found, bazaar appears to be at 0.51.0 already.
  • iproute2 (book 5.4.0)
    • Updates found, updated from v5.1.0 to v5.4.0
  • kbd (book 2.2.0)
    • Updates found, updated from 2.0.4 to v2.2.0
  • kmod (book 26)
    • Updates found, already at v26
  • less (book 551)
    • Updates found, updated from v530 to v551
  • libcap (book 2.30)
    • Updates found, updated from libcap-2.27 to libcap-2.30
  • libffi (book 3.3)
    •  updates found, updated from v3.2.1 to v3.3
  • libpipeline (book 1.5.2)
    •  updates found, updated from 1.5.1 to 1.5.2
  • libtool (book 2.4.6)
    •  No updates, already at v2.4.6
  • linux (book 5.4.8)
    •  Updates found, updated from v5.1.3 to v5.4.10 (latest 5.4)
  • m4 (book 1.4.18)
    • No updates, already at 1.4.18
  • make (book 4.2.1) 
    • Updates found, already at 4.2.1
  • man-db (book 2.9.0)
    • Updates found, updated from 2.8.5 to 2.9.0
  • man-pages (book 5.04)
    •  Updates found, updated from man-pages-5.01 to man-pages-5.04
  • meson (book 0.53.0)
    •  Updates found, updated from 0.50.1 to 0.53.0
  • mpc (book 1.1.0)
    •  Updates found, already at 1.1.0
  • mpfr (book 4.0.2)
    • No remote git repo, already at 4.0.2, ignoring updates for now.
  • ninja (book 1.9.0)
    • Updates found, already at v1.9.0
  • ncurses (book 6.1)
    • Updates found, already at v6.1
  • openssl (book 1.1.1d)
    • Updates found, already at OpenSSL_1_1_1d
  • patch (book 2.7.6)
    • No updates, already at v2.7.6
  • perl (book 5.30.1)
    • Updates found, at v5.31.7
  • pkg-confg (book 0.29.2)
    • No updates, already at pkg-config-0.29.2
  • procps (book 3.3.15)
    •  No updates, at v3.3.16
  • psmisc (book 23.2)
    • No updates, already at v23.3
  • python (book 3.8.1)
    • Updates found, already at v3.8.1
  • python doc (book 3.8.1)
    • Will build from source above, at v3.8.1
  • readline (book 8.0)
    • No updates, already at readline-8.0
  • sed (book 4.7)
    • Updates found, already at v4.7
  • shadow (book 4.8)
    • Updates found, already at 4.8
  • systemd (book 244)
    •  Updates found, already at v244
  • systemd man pages (book 244)
    • Will build from source above, at v244
  • tar (book 1.32)
    • No updates, already at release_1_32
  • tcl (book 8.6.10)
    • Updates found, updated from core-8-6-9 to core-8-6-10
  • texinfo (book 6.7)
    • No updates, already at texinfo-6.7
  • time zone data (book 2019c)
    •  Updates found, already at 2019c
  • util-linux (book 2.34)
    •  No updates, already at v2.34
  • vim (book 8.2.0024)
    •  Updates found, updated from v8.2.0109 to v8.2.0111
  • XML::Parser (book 2.46)
    •  No updates, already at 2.46
  • xz utils (book 5.2.4)
    •  No updates, already at v5.2.4
  • zlib (book 1.2.11)
    •  No updates, already at v1.2.11
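The per-package pattern above boils down to the same few git commands each time: fetch any new tags, then check out the book's tagged release.  A sketch of that pattern, demonstrated against a throwaway local repository; the `update_to_tag` helper is my own name, and real usage would point it at each package's checkout:

```shell
# Per-package repository update pattern: fetch tags, then check out the
# book's release tag.  update_to_tag is a hypothetical helper name.
update_to_tag() {
    git -C "$1" fetch --tags --quiet 2>/dev/null || true  # demo repo has no remote
    git -C "$1" checkout --quiet "$2"
    echo "$1 now at $(git -C "$1" describe --tags)"
}

# Throwaway local repository standing in for a package checkout.
repo_dir=$(mktemp -d)
git -C "$repo_dir" init --quiet
git -C "$repo_dir" -c user.email=demo@demo -c user.name=demo \
    commit --quiet --allow-empty -m "release commit"
git -C "$repo_dir" tag binutils-2_33_1

update_to_tag "$repo_dir" binutils-2_33_1
```

Wrapping this in a loop over the package directories keeps the updates uniform, with one ZFS snapshot taken after the whole pass.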

LFS Errata Updates

Security Errata

  1. OpenSSL: CVE-2019-1549, CVE-2019-1563, CVE-2019-1547 (Medium to Low). Upgrade to OpenSSL-1.1.1d using the instructions in OpenSSL-1.1.1d.

    Already have version 1.1.1d downloaded; when Chapter 6 is reached, the correct version will be installed.

    NO ADDITIONAL INSTRUCTIONS OR PATCHES NEEDED.
  2. e2fsprogs: CVE-2019-5094 (buffer overruns in e2fsck). Update to e2fsprogs-1.45.4 or later using the instructions in e2fsprogs-1.45.4.

    Already have version 1.45.5 downloaded; when Chapter 6 is reached, the correct version will be installed.

    NO ADDITIONAL INSTRUCTIONS OR PATCHES NEEDED.
  3. systemd: CVE-2019-6454 (access control bypass). Apply systemd-241-security_patch-1.patch to systemd and rebuild.

    I manually examined the patch file; the line it removes, "sd_bus_set_trusted(bus, true);" right before the "sd_bus_negotiate_creds" call, no longer exists in systemd v244.

    Looking through the git log for changes to src/shared/bus-util.c turns up the following commit, already applied in my systemd v244 version:

    commit 35e528018f315798d3bffcb592b32a0d8f5162bd
    Author: Zbigniew Jędrzejewski-Szmek <zbyszek@in.waw.pl>
    Date:   Tue Aug 27 19:00:34 2019 +0200

        shared/but-util: drop trusted annotation from bus_open_system_watch_bind_with_description()
       
        https://bugzilla.redhat.com/show_bug.cgi?id=1746057
       
        This only affects systemd-resolved. bus_open_system_watch_bind_with_description()
        is also used in timesyncd, but it has no methods, only read-only properties, and
        in networkd, but it annotates all methods with SD_BUS_VTABLE_UNPRIVILEGED and does
        polkit checks.
    The version of systemd that will be installed, has this security patch already applied.

    NO ADDITIONAL INSTRUCTIONS OR PATCHES NEEDED.

Misc Errata

  1. There are no current errata items for LFS 9.0-systemd.

Saturday, January 11, 2020

PSA -- A word about backups.

Backups are important.

RAID is not a backup!!!

ZFS provides redundancy features that are much better than traditional RAID; however, like RAID, it is not a backup solution in itself.  ZFS does provide features that make it easy to back up a ZFS dataset.  I have been periodically backing up the LFS system I am building to an external hard drive, and thought I should share how those backups are being done.

 

Current LFS backups.

 

Current zpools.

I have two zpools on my LFS build host system:
 livecd gentoo # zpool list  
 NAME           SIZE ALLOC  FREE EXPANDSZ  FRAG  CAP DEDUP HEALTH ALTROOT  
 gentooScratch 3.72G 3.34G  391M        -   65%  89% 1.00x ONLINE -  
 rootPool       119G 21.2G 97.8G        -    9%  17% 1.00x ONLINE -  
   

The gentooScratch is a small file-backed pool on the Gentoo boot thumb drive that contains build-host persistence changes needed across reboots.  This includes the home directory, plus the scripts and sources needed to perform the LFS bootstrap steps.

The rootPool is a pool backed by a single partition on one of the installed M.2 NVMe drives.  It contains all the LFS base source downloads, and the LFS target partition $LFS.

 

Preparing for backup.

ZFS provides features that make backups easy, starting with snapshots.  A snapshot records the exact state of the file system at the time the snapshot was taken.

 livecd gentoo # zfs snapshot -r gentooScratch@`date +backup_%Y%m%d_%H%M`  
 livecd gentoo # zfs snapshot -r rootPool@`date +backup_%Y%m%d_%H%M`  

The recursive option on the snapshot command (-r) for the entire pool will create snapshots for every file system in the pool.

I have also mounted an external USB hard drive, formatted as ext4, under /mnt/backups and created a new date-based directory '2019-01-10' under the backup directory.

 

Performing the backup.

 livecd gentoo # zfs send -R gentooScratch/persistentHomeGentoo@backup_20200111_0116 > /mnt/backups/2019-01-10/persistentHomeGentoo.zfs  
 livecd gentoo # zfs send -R gentooScratch/persistentPortageDistfiles@backup_20200111_0116 > /mnt/backups/2019-01-10/persistentPortageDistfiles.zfs  
 livecd gentoo # zfs send -R gentooScratch/persistentPortageTmp@backup_20200111_0116 > /mnt/backups/2019-01-10/persistentPortageTmp.zfs  
 livecd gentoo # zfs send -R gentooScratch/scripts@backup_20200111_0116 > /mnt/backups/2019-01-10/scripts.zfs  
 livecd gentoo # zfs send -R gentooScratch/sources@backup_20200111_0116 > /mnt/backups/2019-01-10/sources.zfs  
 livecd gentoo # zfs send -R rootPool/root_fs@backup_20200111_0116 > /mnt/backups/2019-01-10/rootPool-root_fs_recursive.zfs  
 livecd gentoo # cd /mnt/backups/2019-01-10/  
 livecd 2019-01-10 # ls -lh *  
 -rw-r--r-- 1 root root 3.3G Jan 11 01:24 persistentHomeGentoo.zfs  
 -rw-r--r-- 1 root root 31M Jan 11 01:25 persistentPortageDistfiles.zfs  
 -rw-r--r-- 1 root root 86K Jan 11 01:26 persistentPortageTmp.zfs  
 -rw-r--r-- 1 root root 21G Jan 11 01:32 rootPool-root_fs_recursive.zfs  
 -rw-r--r-- 1 root root 659K Jan 11 01:27 scripts.zfs  
 -rw-r--r-- 1 root root 74M Jan 11 01:28 sources.zfs  
 livecd 2019-01-10 #   

The 'zfs send' command creates a stream of data that can be used to reconstruct the file system up to the snapshot that was sent.  The '-R' (replicate) option includes all descendant datasets, along with their snapshots and properties.

As an example, the 'zfs send -R gentooScratch/sources@backup_20200111_0116 > /mnt/backups/2019-01-10/sources.zfs' command performs a full backup from the creation of the sources file system, including all descendant file systems, up to the snapshot I made in the preparation step above, which got labeled as 'backup_20200111_0116'.  This backup should contain the following file systems and mount points: 

 ZFS File System                             Mount Point
 gentooScratch/sources                       /gentooScratch/sources  
 gentooScratch/sources/gentoo                /gentooScratch/sources/gentoo  
 gentooScratch/sources/gentoo/cpuid2cpuflags /gentooScratch/sources/gentoo/cpuid2cpuflags  
 gentooScratch/sources/lfs_book              /gentooScratch/sources/lfs_book  

And can be seen in the `ls -lh` command performed in the destination backup directory:
-rw-r--r-- 1 root root 74M Jan 11 01:28 sources.zfs

I then compress the streams with xz so that my backups take less space on my backup drive.
 livecd 2019-01-10 # xz -v9eT 0 *.zfs  
 persistentHomeGentoo.zfs (1/6)  
  100 %   2308.9 MiB / 3305.7 MiB = 0.698  21 MiB/s    2:34         
 persistentPortageDistfiles.zfs (2/6)  
  100 %     28.0 MiB / 30.0 MiB = 0.933  2.7 MiB/s    0:10         
 persistentPortageTmp.zfs (3/6)  
  100 %      7632 B / 85.0 KiB = 0.088                    
 rootPool-root_fs_recursive.zfs (4/6)  
  100 %     14.6 GiB / 20.5 GiB = 0.709  24 MiB/s   14:48         
 scripts.zfs (5/6)  
  100 %    59.6 KiB / 658.3 KiB = 0.090                    
 sources.zfs (6/6)  
  100 %    5623.1 KiB / 73.9 MiB = 0.074  2.7 MiB/s    0:27  
 livecd 2019-01-10 # 
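Since a corrupt .xz file would make the whole backup useless, the compressed streams are worth integrity-testing before the drive goes on the shelf.  `xz -t` checks an archive without writing anything to disk; a self-contained demo in a scratch directory (real usage would be `xz -tv /mnt/backups/2019-01-10/*.zfs.xz`):

```shell
# Integrity-test an xz archive without decompressing it to disk.
# Demoed in a scratch dir; real usage targets the backup directory.
tmp=$(mktemp -d)
cd "$tmp"
echo "pretend zfs send stream" > sample.zfs
xz -9 sample.zfs            # replaces sample.zfs with sample.zfs.xz
xz -t sample.zfs.xz && echo "archive OK"
```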


A backup is not a backup if it can't be restored!!!

Testing that the backup can be restored.

Preparing a restore location.

The first thing needed to test that a backup can be restored is a place to restore it to.  Normally a backup would be restored to its original location.  However, in this case:
  1. I am only testing the backup.
  2. The original system the backup came from is still functioning with the original data.
For now, to demonstrate the restore and test, I am using a currently unused HDD that will eventually become part of my LFS server's data pool.
 livecd gentoo # zpool create -o ashift=12 -o comment="Backups Testing Area" -o altroot="/backups" tempTestDataPool /dev/sdg  

 livecd gentoo # zpool list  
 NAME               SIZE  ALLOC    FREE EXPANDSZ  FRAG  CAP DEDUP HEALTH ALTROOT  
 gentooScratch     3.72G  3.35G    377M        -   65%  90% 1.00x ONLINE -  
 rootPool           119G  21.3G   97.7G        -    9%  17% 1.00x ONLINE -  
 tempTestDataPool  2.72T   288K   2.72T        -    0%   0% 1.00x ONLINE /backups
 
 livecd gentoo # zpool status  
    pool: gentooScratch  
   state: ONLINE  
    scan: none requested  
  config:  
     NAME                                        STATE   READ WRITE CKSUM  
     gentooScratch                               ONLINE     0     0     0  
       /mnt/cdrom/scratch/LFSBootstrap/disk1.img ONLINE     0     0     0  
  errors: No known data errors  
   
    pool: rootPool  
   state: ONLINE  
    scan: none requested  
  config:  
     NAME                                        STATE   READ WRITE CKSUM  
     rootPool                                    ONLINE     0     0     0  
      nvme0n1p6                                  ONLINE     0     0     0  
  errors: No known data errors  
   
    pool: tempTestDataPool  
   state: ONLINE  
    scan: none requested  
  config:  
     NAME                                        STATE   READ WRITE CKSUM  
     tempTestDataPool                            ONLINE     0     0     0  
      sdg                                        ONLINE     0     0     0  
  errors: No known data errors  
 livecd gentoo #   
   

The 'altroot' option is important, since my restore test location is on the same box, and I don't want the restored filesystem to attempt to mount over the original source location.  With the altroot value, all mount points of filesystems in this zpool will be mounted relative to that altroot location, instead of the original location.

Now create file system locations that the backups can be restored to:

 livecd gentoo # zfs create -o mountpoint=/backupPool tempTestDataPool/backups  
 livecd gentoo # zfs create tempTestDataPool/backups/scratch  
 livecd gentoo # zfs create tempTestDataPool/backups/scratch/home   
 livecd gentoo # zfs create tempTestDataPool/backups/scratch/dist  
 livecd gentoo # zfs create tempTestDataPool/backups/scratch/tmp   
 livecd gentoo # zfs create tempTestDataPool/backups/scratch/scripts  
 livecd gentoo # zfs create tempTestDataPool/backups/scratch/src    
 livecd gentoo # zfs create tempTestDataPool/backups/root  
 livecd gentoo # zfs create tempTestDataPool/backups/root/all_root_fs_backup
 livecd gentoo # zfs mount -a  
 livecd gentoo # ls /backupPool/  
[Note: no file systems appear under the specified mount point of /backupPool]
 livecd gentoo # ls /backups/ 
 backupPool tempTestDataPool  
[Note: the actual file systems get mounted under the '/backups/' altroot]
 livecd gentoo #   

Performing a restore.

Now I will use the 'zfs receive' command to restore the data.

 livecd gentoo # ls /mnt/backups/2019-01-10/  
 persistentHomeGentoo.zfs.xz    persistentPortageTmp.zfs.xz    scripts.zfs.xz  
 persistentPortageDistfiles.zfs.xz rootPool-root_fs_recursive.zfs.xz sources.zfs.xz  
 livecd gentoo # xzcat /mnt/backups/2019-01-10/persistentHomeGentoo.zfs.xz | zfs receive -F tempTestDataPool/backups/scratch/home  
 livecd gentoo # xzcat /mnt/backups/2019-01-10/persistentPortageTmp.zfs.xz | zfs receive -F tempTestDataPool/backups/scratch/tmp  
 livecd gentoo # xzcat /mnt/backups/2019-01-10/scripts.zfs.xz | zfs receive -F tempTestDataPool/backups/scratch/scripts  
 livecd gentoo # xzcat /mnt/backups/2019-01-10/persistentPortageDistfiles.zfs.xz | zfs receive -F tempTestDataPool/backups/scratch/dist  
 livecd gentoo # xzcat /mnt/backups/2019-01-10/rootPool-root_fs_recursive.zfs.xz | zfs receive -F tempTestDataPool/backups/root/all_root_fs_backup  
 livecd gentoo # xzcat /mnt/backups/2019-01-10/sources.zfs.xz | zfs receive -F tempTestDataPool/backups/scratch/src    
 livecd gentoo # ls /backups
 backupPool  home  tempTestDataPool  tmp  usr   
 livecd gentoo # ls /backups/backupPool/root/all_root_fs_backup/
 esp  home  opt  sources  tmp  usr  
 livecd gentoo # 

For each of the six file systems I backed up above, I use xzcat to decompress the backup file and pipe the data stream into the 'zfs receive' command.  The '-F' option forces the restore, overwriting any existing data with the version from the backup.  I also specify the zpool and ZFS file system location I want each backup restored to.  A quick ls shows the restored locations, and the receive commands completed without any errors, so I can be decently confident that the backups are good.

Validating the restore.

Even though I am confident that the backup was good, I will go through the procedure of validating the files to ensure that my backup method is working.

The first backup to verify is the persistentHomeGentoo backup.  Performing my zfs list, looking for home I see the following:

 livecd gentoo # zfs list | grep -i home  
 gentooScratch/persistentHomeGentoo                                 3.17G  257M  714M /home/gentoo  
 rootPool/root_fs/home                                              3.95G 94.0G   96K /rootPool/root_fs/home  
 rootPool/root_fs/home/gentooExtra                                  3.95G 94.0G 3.95G /home/gentoo/extraSpace  
 tempTestDataPool/backups/root/all_root_fs_backup/home              3.95G 2.61T   96K /backups/backupPool/root/all_root_fs_backup/home  
 tempTestDataPool/backups/root/all_root_fs_backup/home/gentooExtra  3.95G 2.61T 3.95G /backups/home/gentoo/extraSpace  
 tempTestDataPool/backups/scratch/home                              3.24G 2.61T  737M /backups/home/gentoo  
 livecd gentoo # 

Performing a diff between '/home/gentoo' and '/backups/home/gentoo' will also validate 'rootPool/root_fs/home/gentooExtra' and 'tempTestDataPool/backups/root/all_root_fs_backup/home/gentooExtra'.  One problem with the home directory comparison is any changes made since the backup, so what I really want is to compare against the snapshot the backup was made from.  Running the following diff command will show any differences between the backup and the snapshot:

 livecd gentoo # diff -rq /home/gentoo/.zfs/snapshot/backup_20200111_0116/ /backups/home/gentoo  
 diff: /home/gentoo/.zfs/snapshot/backup_20200111_0116/.kde4/cache-livecd: No such file or directory  
 diff: /backups/home/gentoo/.kde4/cache-livecd: No such file or directory  
 diff: /home/gentoo/.zfs/snapshot/backup_20200111_0116/.kde4/socket-livecd: No such file or directory  
 diff: /backups/home/gentoo/.kde4/socket-livecd: No such file or directory  
 diff: /home/gentoo/.zfs/snapshot/backup_20200111_0116/.kde4/tmp-livecd: No such file or directory  
 diff: /backups/home/gentoo/.kde4/tmp-livecd: No such file or directory  
 diff: /home/gentoo/.zfs/snapshot/backup_20200111_0116/.mozilla/firefox/7040su8h.default-release/lock: No such file or directory  
 diff: /backups/home/gentoo/.mozilla/firefox/7040su8h.default-release/lock: No such file or directory  
 diff: /home/gentoo/.zfs/snapshot/backup_20200111_0116/.mozilla/firefox/hmud4qrk.default-1558998297270/lock: No such file or directory  
 diff: /backups/home/gentoo/.mozilla/firefox/hmud4qrk.default-1558998297270/lock: No such file or directory  
 livecd gentoo #   

There appear to be some differences; however, looking closer, these are 'diff' errors, not actual differences.  For each file, it complains about 'No such file or directory' in both locations.  Examining the missing files, it turns out they are symbolic links to temporary files elsewhere in the system that have since been cleaned up.  I would consider this a validated backup.

The next backup to validate is the persistentPortageTmp dataset, using the command:
 livecd gentoo # diff -qr /tmp/portage/.zfs/snapshot/backup_20200111_0116/ /backups/tmp/portage/  
 livecd gentoo #      

This shows no difference, so another validated backup.

The next backup to validate is the scripts backup.  I have not made any changes to the scripts folder since the backup, so there is no need to compare against a specific snapshot:
 livecd gentoo # diff -qr /gentooScratch/scripts/ /backups/backupPool/scratch/scripts/  
 livecd gentoo #   
This shows no difference, so another validated backup.

The next backup to validate is the persistentPortageDistfiles dataset.  Once again I have made no changes, so I can compare the directories directly instead of using a snapshot:
 livecd gentoo # diff -qr /usr/portage/distfiles/ /backups/usr/portage/distfiles/  
 livecd gentoo #   
This shows no difference, so another validated backup.

The next backup to validate is the sources backup.  I have not made any changes to the sources folder since the backup, so there is no need to compare against a specific snapshot:
 livecd gentoo # diff -qr /gentooScratch/sources/ /backups/backupPool/scratch/src/  
 livecd gentoo #
This shows no difference, so another validated backup.
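Since each of these checks follows the same pattern, they could be wrapped in a small helper that prints a pass/fail line per dataset.  This is a hypothetical convenience of my own, not something from the LFS book; the demo below runs against a throwaway temp directory standing in for the real live/backup pairs used above.

```shell
# Hypothetical helper wrapping the repeated "diff -rq live backup" checks
check() {
  if diff -rq "$1" "$2" >/dev/null 2>&1; then
    echo "OK:   $1"
  else
    echo "DIFF: $1"
  fi
}

# Tmpdir demo in place of the real dataset pairs
w=$(mktemp -d)
mkdir -p "$w/live" "$w/backup"
echo data > "$w/live/file"
cp "$w/live/file" "$w/backup/file"
r1=$(check "$w/live" "$w/backup"); echo "$r1"   # identical trees
echo extra > "$w/live/new"
r2=$(check "$w/live" "$w/backup"); echo "$r2"   # backup is now stale
rm -rf "$w"
```

For the real checks, each `check live backup` call would use the same directory pairs passed to diff in the steps above.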

The final backup to validate is the full recursive $LFS root location, using:

 livecd gentoo # diff -qr /rootPool/root_fs/ /backups/backupPool/root/all_root_fs_backup/  
 diff: /rootPool/root_fs/sources/bison/bison/build-aux/move-if-change: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/bison/bison/build-aux/move-if-change: No such file or directory  
 diff: /rootPool/root_fs/sources/bison/bison/data/m4sugar/foreach.m4: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/bison/bison/data/m4sugar/foreach.m4: No such file or directory  
 diff: /rootPool/root_fs/sources/bison/bison/data/m4sugar/m4sugar.m4: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/bison/bison/data/m4sugar/m4sugar.m4: No such file or directory  
 diff: /rootPool/root_fs/sources/bison/bison/m4/m4.m4: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/bison/bison/m4/m4.m4: No such file or directory  
 diff: /rootPool/root_fs/sources/kmod/kmod/testsuite/rootfs-pristine/test-loaded/sys/module/btusb/drivers/usb:btusb: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/kmod/kmod/testsuite/rootfs-pristine/test-loaded/sys/module/btusb/drivers/usb:btusb: No such file or directory  
 diff: /rootPool/root_fs/sources/m4/m4/COPYING: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/m4/m4/COPYING: No such file or directory  
 diff: /rootPool/root_fs/sources/m4/m4/INSTALL: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/m4/m4/INSTALL: No such file or directory  
 diff: /rootPool/root_fs/sources/m4/m4/build-aux/compile: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/m4/m4/build-aux/compile: No such file or directory  
 diff: /rootPool/root_fs/sources/m4/m4/build-aux/config.guess: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/m4/m4/build-aux/config.guess: No such file or directory  
 diff: /rootPool/root_fs/sources/m4/m4/build-aux/config.sub: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/m4/m4/build-aux/config.sub: No such file or directory  
 diff: /rootPool/root_fs/sources/m4/m4/build-aux/depcomp: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/m4/m4/build-aux/depcomp: No such file or directory  
 diff: /rootPool/root_fs/sources/m4/m4/build-aux/install-sh: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/m4/m4/build-aux/install-sh: No such file or directory  
 diff: /rootPool/root_fs/sources/m4/m4/build-aux/mdate-sh: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/m4/m4/build-aux/mdate-sh: No such file or directory  
 diff: /rootPool/root_fs/sources/m4/m4/build-aux/texinfo.tex: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/m4/m4/build-aux/texinfo.tex: No such file or directory  
 diff: /rootPool/root_fs/sources/texinfo/texinfo/js/build-aux/texinfo.tex: No such file or directory  
 diff: /backups/backupPool/root/all_root_fs_backup/sources/texinfo/texinfo/js/build-aux/texinfo.tex: No such file or directory  
 livecd gentoo #

Once again, similar to the gentoo host's home directory, we have some diff errors but no actual differences.  I took a look at some of the 'missing' files, and once again they are symbolic links, some of which are absolute paths that expect the chroot environment rooted at '$LFS'.  Since every 'missing' file is identical between the two datasets, this backup is also valid.

Future backups.

Once the LFS system is fully up and running, I plan on setting up automated backups and (near) real-time replication to another server.  At that time I will go over extended zfs send/recv syntax, including incremental sends, for additional backup strategies.  For now I just wanted to point out that I am not relying on ZFS redundancy or RAID as a backup solution, and to show what my current solution is so I do not lose any of the work I have put into my Linux From Scratch server build.
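As a preview of that future setup, the basic shape of an incremental send/recv pipeline looks like the sketch below.  This is not run here; the replica dataset name is a placeholder of mine, and the real setup will be covered later.

```shell
# Full send of a baseline snapshot to seed the replica (dataset name
# 'backupPool/root/replica' is a hypothetical placeholder):
zfs snapshot rootPool/root_fs@backup_base
zfs send rootPool/root_fs@backup_base | zfs recv backupPool/root/replica

# Later, send only the changes between two snapshots with -i:
zfs snapshot rootPool/root_fs@backup_next
zfs send -i rootPool/root_fs@backup_base rootPool/root_fs@backup_next \
  | zfs recv backupPool/root/replica
```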