Sunday, June 2, 2019

Additional Persistence for LFS bootstrap

In my previous post I determined that my Gentoo live system needs the tidy utility just to build the LFS systemd edition of the book from source. Since my live system is not persistent, I cannot currently make system changes on the Live USB permanent across reboots. However, I can add some additional persistence and create a few utility scripts to get my live system back to a known state for future LFS work.

The last command I used prior to shutdown was:
sudo /sbin/zpool export gentooScratch
Now that I have rebooted, I can remount all the ZFS filesystems and list them by issuing the following commands:
sudo /sbin/zpool import -a -c /mnt/cdrom/scratch/zfs_bootstrap.cache  
/sbin/zfs list
Producing the following output:
gentoo@livecd ~ $ sudo /sbin/zpool import -a -c /mnt/cdrom/scratch/zfs_bootstrap.cache
gentoo@livecd ~ $ /sbin/zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
gentooScratch                   60.0M  3.54G    19K  /gentooScratch
gentooScratch/sources           59.9M  3.54G    19K  /gentooScratch/sources
gentooScratch/sources/lfs_book  59.8M  3.54G  59.6M  /gentooScratch/sources/lfs_book
gentoo@livecd ~ $
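
As an aside, this import works because the pool's cachefile lives on the USB stick itself. Should it ever need to be regenerated, re-setting the pool property while the pool is imported should suffice (a sketch based on how the pool was set up in the previous post):

sudo /sbin/zpool set cachefile=/mnt/cdrom/scratch/zfs_bootstrap.cache gentooScratch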


I previously forgot to mention that I typically take snapshots at each step so I can roll back if something goes wrong.  The current state of the system is as follows:
gentoo@livecd ~ $ /sbin/zfs list -t snapshot
NAME                                                   USED  AVAIL  REFER  MOUNTPOINT
gentooScratch/sources/lfs_book@initial                   9K      -    19K  -
gentooScratch/sources/lfs_book@SVN_CO_rev_11610        201K      -  59.6M  -
gentooScratch/sources/lfs_book@make_failed_need_tidy      0      -  59.6M  -
gentoo@livecd ~ $
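
For reference, rolling back to one of these snapshots would look something like this (a hypothetical example; note that the -r flag destroys any snapshots newer than the target):

sudo /sbin/zfs rollback -r gentooScratch/sources/lfs_book@SVN_CO_rev_11610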


So my current persistent storage is working, but I now need additional storage for Portage package downloads, and I would also like my desktop environment settings to persist. To that end, I need to create a few additional filesystems and back up the gentoo user's home directory so I can restore its contents onto the new dataset that overlays (masks) the original home directory once the zpool comes up. Note that these commands (particularly those altering the gentoo user's home directory) need to be run as root from a plain text console, so that the graphical login is not holding files open under /home/gentoo.

rm -rf /usr/portage/distfiles  
zfs create -o mountpoint=/usr/portage/distfiles gentooScratch/persistentPortageDistfiles
rm -rf /tmp/portage
zfs create -o mountpoint=/tmp/portage gentooScratch/persistentPortageTmp
tar -cf /gentooScratch/home_gentoo.tar /home/gentoo
rm -rf /home/gentoo
zfs create -o mountpoint=/home/gentoo gentooScratch/persistentHomeGentoo
tar -C / -xf /gentooScratch/home_gentoo.tar
df -h
zfs snapshot gentooScratch/persistentPortageDistfiles@init
zfs snapshot gentooScratch/persistentPortageTmp@init
zfs snapshot gentooScratch/persistentHomeGentoo@init

Producing the following output on the console (CTRL-ALT-F1):
livecd ~ # rm -rf /usr/portage/distfiles
livecd ~ # zfs create -o mountpoint=/usr/portage/distfiles gentooScratch/persistentPortageDistfiles
livecd ~ # rm -rf /tmp/portage
livecd ~ # zfs create -o mountpoint=/tmp/portage gentooScratch/persistentPortageTmp
livecd ~ # tar -cf /gentooScratch/home_gentoo.tar /home/gentoo

tar: Removing leading `/' from member name
livecd ~ # rm -rf /home/gentoo
livecd ~ # zfs create -o mountpoint=/home/gentoo gentooScratch/persistentHomeGentoo
livecd ~ # tar -C / -xf /gentooScratch/home_gentoo.tar  

livecd ~ # df -h
Filesystem                                Size  Used Avail Use% Mounted on
udev                                       10M  4.0K   10M   1% /dev
/dev/sda1                                  29G  5.9G   23G  21% /mnt/cdrom
tmpfs                                     3.9G   29M  3.9G   1% /.unions/memory
aufs                                      3.9G   29M  3.9G   1% /
/dev/loop0                                2.0G  2.0G     0 100% /mnt/livecd
none                                      3.9G   20K  3.9G   1% /mnt/aufs-rw-branch
tmpfs                                     786M  1.6M  784M   1% /run
shm                                       3.9G     0  3.9G   0% /dev/shm
cgroup_root                                10M     0   10M   0% /sys/fs/cgroup
vartmp                                    3.9G   36K  3.9G   1% /var/tmp
tmp                                       3.9G  4.0K  3.9G   1% /tmp
gentooScratch                             3.3G   79M  3.3G   3% /gentooScratch
gentooScratch/sources                     3.3G     0  3.3G   0% /gentooScratch/sources
gentooScratch/sources/lfs_book            3.3G   60M  3.3G   2% /gentooScratch/sources/lfs_book
gentooScratch/persistentPortageDistfiles  3.3G     0  3.3G   0% /usr/portage/distfiles
gentooScratch/persistentPortageTmp        3.3G     0  3.3G   0% /tmp/portage
gentooScratch/persistentHomeGentoo        3.5G  244M  3.3G   7% /home/gentoo
none                                      3.9G  8.0K  3.9G   1% /run/user/1000

livecd ~ # zfs snapshot gentooScratch/persistentPortageDistfiles@init
livecd ~ # zfs snapshot gentooScratch/persistentPortageTmp@init
livecd ~ # zfs snapshot gentooScratch/persistentHomeGentoo@init

The next step is a persistent script to prepare and mount the ZFS filesystems after a restart. This script also needs to be run as root from the console, before logging in as the gentoo user, so that the directories baked into the live squash filesystem can be removed before the persistent datasets are mounted over them.

I created /mnt/cdrom/scratch/mountfs.sh with the following contents:
#!/bin/bash -x
# This script removes the existing directories on the Gentoo Live USB,
# then imports the persistent ZFS storage, mounting datasets over the removed directories.
# Finally it creates a recursive snapshot marking the beginning of the live session.

rm -rf /usr/portage/distfiles
rm -rf /tmp/portage
rm -rf /home/gentoo

zpool import -a -c /mnt/cdrom/scratch/zfs_bootstrap.cache
zfs snapshot -r gentooScratch@`date +Persistence_Remount_%Y%h%d_%H%M%S.%N`
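
The script only needs to be made executable once (it lives on the persistent USB partition); after that it is run from the root console following each reboot:

chmod +x /mnt/cdrom/scratch/mountfs.sh
/mnt/cdrom/scratch/mountfs.sh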

*tangent* I found a code formatter for scripts like the one above, and I am now debating whether to reformat previous entries to match. *end tangent*

After logging out, exporting the zpool, rebooting, running mountfs.sh from the root console, and finally logging back in as the gentoo user, the system state is:

gentoo@livecd ~ $ /sbin/zfs list -t snapshot
NAME                                                                                      USED  AVAIL  REFER  MOUNTPOINT
gentooScratch@Persistence_Remount_2019May28_003919.036868361                                 0      -  79.1M  -
gentooScratch/persistentHomeGentoo@init                                                  12.2M      -  84.4M  -
gentooScratch/persistentHomeGentoo@Persistence_Remount_2019May28_003919.036868361        21.5M      -   255M  -
gentooScratch/persistentPortageDistfiles@init                                                0      -    19K  -
gentooScratch/persistentPortageDistfiles@Persistence_Remount_2019May28_003919.036868361      0      -    19K  -
gentooScratch/persistentPortageTmp@init                                                      0      -    19K  -
gentooScratch/persistentPortageTmp@Persistence_Remount_2019May28_003919.036868361            0      -    19K  -
gentooScratch/sources@Persistence_Remount_2019May28_003919.036868361                         0      -    19K  -
gentooScratch/sources/lfs_book@initial                                                      9K      -    19K  -
gentooScratch/sources/lfs_book@SVN_CO_rev_11610                                           201K      -  59.6M  -
gentooScratch/sources/lfs_book@make_failed_need_tidy                                         0      -  59.6M  -
gentooScratch/sources/lfs_book@Persistence_Remount_2019May28_003919.036868361                0      -  59.6M  -
gentoo@livecd ~ $ df -h
Filesystem                                Size  Used Avail Use% Mounted on
udev                                       10M  4.0K   10M   1% /dev
/dev/sda1                                  29G  5.9G   23G  21% /mnt/cdrom
tmpfs                                     3.9G   29M  3.9G   1% /.unions/memory
aufs                                      3.9G   29M  3.9G   1% /
/dev/loop0                                2.0G  2.0G     0 100% /mnt/livecd
none                                      3.9G   20K  3.9G   1% /mnt/aufs-rw-branch
tmpfs                                     786M  1.6M  784M   1% /run
shm                                       3.9G     0  3.9G   0% /dev/shm
cgroup_root                                10M     0   10M   0% /sys/fs/cgroup
vartmp                                    3.9G   36K  3.9G   1% /var/tmp
tmp                                       3.9G  4.0K  3.9G   1% /tmp
gentooScratch                             3.3G   80M  3.2G   3% /gentooScratch
gentooScratch/sources                     3.2G     0  3.2G   0% /gentooScratch/sources
gentooScratch/sources/lfs_book            3.3G   60M  3.2G   2% /gentooScratch/sources/lfs_book
gentooScratch/persistentHomeGentoo        3.5G  248M  3.2G   8% /home/gentoo
gentooScratch/persistentPortageTmp        3.2G     0  3.2G   0% /tmp/portage
gentooScratch/persistentPortageDistfiles  3.2G     0  3.2G   0% /usr/portage/distfiles
none                                      3.9G  8.0K  3.9G   1% /run/user/1000
gentoo@livecd ~ $ 


Now that the Portage system will be mostly persistent (for downloads and builds), we can build tidy for our Gentoo live system and snapshot the caches with the following commands:

sudo emerge app-text/tidy-html5
sudo zfs snapshot gentooScratch/persistentPortageTmp@tidy_html_cached 
sudo zfs snapshot gentooScratch/persistentPortageDistfiles@tidy_html_cached
The next step is to provide a persistent location for utility scripts that track changes and reconfigure the environment and settings needed for the LFS bootstrapping project.

sudo zfs create gentooScratch/scripts
sudo zfs create gentooScratch/scripts/data
sudo chown -R gentoo:users  /gentooScratch/scripts
sudo zfs snapshot -r gentooScratch/scripts@init
cp /etc/portage/make.conf /gentooScratch/scripts/data/

Now I can edit the /gentooScratch/scripts/data/make.conf file to customize the Gentoo build options for any tools we need on the way to initially bootstrapping the LFS system. The first thing to update is the CPU_FLAGS_X86 variable, to optimize the tools we have to build while setting up our LFS system. A good way to generate these flags is the app-portage/cpuinfo2cpuflags package, so the next step is:

sudo emerge app-portage/cpuinfo2cpuflags

Unfortunately the emerge fails, unable to fetch the required package.  After a bit of digging I came across a similar piece of software, cpuid2cpuflags, available on GitHub at https://github.com/mgorny/cpuid2cpuflags.  So now we need to download and compile that program.

sudo zfs create gentooScratch/sources/gentoo
sudo zfs create gentooScratch/sources/gentoo/cpuid2cpuflags
sudo chown -R gentoo:users /gentooScratch/sources/gentoo
sudo zfs snapshot -r gentooScratch/sources/gentoo@init
cd /gentooScratch/sources/gentoo/cpuid2cpuflags
git clone https://github.com/mgorny/cpuid2cpuflags
sudo zfs snapshot gentooScratch/sources/gentoo/cpuid2cpuflags@init_git_clone
cd cpuid2cpuflags/

Looking around the source tree, there is no Makefile, and only some of the input files needed by automake are present.  The README offers no build instructions either, so it looks like I will have to figure out how to compile and build from this source manually.  After a bit of experimentation I came up with the following command line:

cd src; g++ -DPACKAGE_STRING='"v5"' -DHAVE_CPUID_H=1 -DHAVE_STDINT_H=1 -o ../cpuid2cpuflags main.c x86.c; cd ..
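
For what it's worth, had the autotools inputs been complete, the conventional route would have been to regenerate the build system instead — a sketch, assuming autoconf and automake are available on the live image:

autoreconf -fi
./configure
make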
 
Running the program I get the following output, which looks about right for a modern 64-bit CPU:

CPU_FLAGS_X86: aes avx avx2 f16c fma3 mmx mmxext pclmul popcnt sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3

I can now take these values and incorporate them into my persistence system so that Gentoo's emerge picks them up. First I will edit the /gentooScratch/scripts/data/make.conf file and add these flags: line 16 goes from 'CPU_FLAGS_X86="mmx sse sse2"' to 'CPU_FLAGS_X86="aes avx avx2 f16c fma3 mmx mmxext pclmul popcnt sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3"'.  I will also change MAKEOPTS on line 25 from "-j2" to "-j48", which is 1.5 times the total thread count of this AMD Threadripper system.  I will also create a 00cpuflags file for Portage to use, with the following contents:
*/* CPU_FLAGS_X86: aes avx avx2 f16c fma3 mmx mmxext pclmul popcnt sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3
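
Putting those edits together, the relevant lines of the modified make.conf should read roughly as follows (an excerpt; the rest of the file stays as shipped on the live image):

CPU_FLAGS_X86="aes avx avx2 f16c fma3 mmx mmxext pclmul popcnt sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3"
MAKEOPTS="-j48"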

The final changes begin with creating a second executable restore script, /gentooScratch/scripts/restoreGentooEnv.sh, with the following contents:
#!/bin/bash -x                                                    

# Restore make/portage settings for this box.         
cp /gentooScratch/scripts/data/make.conf /etc/portage/make.conf
cp /gentooScratch/scripts/data/00cpuflags /etc/portage/package.use/

# Restore needed packages
emerge app-text/tidy-html5

Now I can snapshot the scripts directory and change the /mnt/cdrom/scratch/mountfs.sh script to run the restore script from persistent storage after it mounts the datasets; the wiring is sketched below.
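
A sketch of that wiring (the snapshot name here is simply my choice):

chmod +x /gentooScratch/scripts/restoreGentooEnv.sh
echo '/gentooScratch/scripts/restoreGentooEnv.sh' | sudo tee -a /mnt/cdrom/scratch/mountfs.sh
sudo zfs snapshot -r gentooScratch/scripts@restore_script_added

After testing everything with another reboot, we should finally be able to build our LFS instruction book from scratch: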

cd /gentooScratch/sources/lfs_book
mkdir book_output
cd BOOK
svn update
make REV=systemd BASEDIR=/gentooScratch/sources/lfs_book/book_output
sudo zfs snapshot gentooScratch/sources/lfs_book@book_11611_built
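
To check the result, the book can be opened straight from the output directory (assuming the top-level index.html that the LFS makefile generates):

firefox /gentooScratch/sources/lfs_book/book_output/index.html
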
I now check loading the book, and it renders correctly:

[Screenshot: the freshly built LFS systemd book in the browser.]
So now I can FINALLY start in on the LFS install, using the book I just built.  My next blog post will hopefully make progress on actually getting LFS installed, now that my Live USB has enough persistence to get me back to this point after any reboot.