Sunday, June 2, 2019

LFS sections 2.5 through 3.1

 Section 2 Preparing the Host System

We have already completed sections 2.1-2.4 in the previous blog post.

 Section 2.5 'Creating a File System on the Partition'

Section 2.5 drones on about the 'ext' family of file systems, which I am not using. In addition, I need to create a FAT32 file system on the EFI System Partition (ESP) and create some swap. Since I will be using ZFS, I will create a zpool and the subsequent file systems.

My deviations from the book include running the following commands:
gentoo@livecd ~ $ sudo mkfs.fat -F 32 /dev/nvme0n1p1
mkfs.fat 4.0 (2016-05-06)
gentoo@livecd ~ $ sudo mkswap /dev/nvme0n1p2
Setting up swapspace version 1, size = 32 GiB (34359734272 bytes)
no label, UUID=f17abe47-d409-489e-920b-4b8341f3f1a8
gentoo@livecd ~ $ sudo swapon /dev/nvme0n1p2
gentoo@livecd ~ $ free
              total        used        free      shared  buff/cache   available
Mem:       65871904     1453280    61927076       88796     2491548    63707984
Swap:      33554428           0    33554428
 gentoo@livecd ~ $ sudo zpool create -o ashift=12 -o cachefile=/mnt/cdrom/scratch/lfs_root.cache -o comment="LFS Root" rootPool /dev/nvme0n1p6
 gentoo@livecd ~ $ sudo zfs create rootPool/root_fs
 gentoo@livecd ~ $ sudo zfs create rootPool/root_fs/home
 gentoo@livecd ~ $ sudo zfs create rootPool/root_fs/usr
 gentoo@livecd ~ $ sudo zfs create rootPool/root_fs/opt
 gentoo@livecd ~ $ sudo zfs create rootPool/root_fs/usr/src
 gentoo@livecd ~ $ sudo zfs create rootPool/root_fs/tmp
 gentoo@livecd ~ $ sudo mkdir /rootPool/root_fs/esp
 gentoo@livecd ~ $ sudo mount /dev/nvme0n1p1 /rootPool/root_fs/esp
 gentoo@livecd ~ $ sudo zfs snapshot -r rootPool@init
 gentoo@livecd ~ $ sudo zpool list
NAME            SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
gentooScratch  3.72G   565M  3.17G         -    16%    14%  1.00x  ONLINE  -
rootPool        119G   206K   119G         -     0%     0%  1.00x  ONLINE  -
 gentoo@livecd ~ $ sudo zfs list
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
gentooScratch                                 564M  3.05G  79.1M  /gentooScratch
gentooScratch/persistentHomeGentoo            418M  3.05G   335M  /home/gentoo
gentooScratch/persistentPortageDistfiles      714K  3.05G   660K  /usr/portage/distfiles
gentooScratch/persistentPortageTmp             28K  3.05G    19K  /tmp/portage
gentooScratch/scripts                         101K  3.05G  23.5K  /gentooScratch/scripts
gentooScratch/scripts/data                   40.5K  3.05G  22.5K  /gentooScratch/scripts/data
gentooScratch/sources                        65.3M  3.05G    19K  /gentooScratch/sources
gentooScratch/sources/gentoo                  205K  3.05G    19K  /gentooScratch/sources/gentoo
gentooScratch/sources/gentoo/cpuid2cpuflags   159K  3.05G   123K  /gentooScratch/sources/gentoo/cpuid2cpuflags
gentooScratch/sources/lfs_book               65.0M  3.05G  62.0M  /gentooScratch/sources/lfs_book
rootPool                                      196K   115G    19K  /rootPool
rootPool/root_fs                              115K   115G    20K  /rootPool/root_fs
rootPool/root_fs/home                          19K   115G    19K  /rootPool/root_fs/home
rootPool/root_fs/opt                           19K   115G    19K  /rootPool/root_fs/opt
rootPool/root_fs/tmp                           19K   115G    19K  /rootPool/root_fs/tmp
rootPool/root_fs/usr                           38K   115G    19K  /rootPool/root_fs/usr
rootPool/root_fs/usr/src                       19K   115G    19K  /rootPool/root_fs/usr/src

Eight steps down... lots to go.

Section 2.6 Setting the $LFS Variable

We need to set up the $LFS variable. Since we want it to be persistent and set every time we reboot, we will update our persistence scripts.

The contents of /mnt/cdrom/scratch/mountfs.sh will be updated to also import our new LFS pool:
#!/bin/bash -x
# Script removes the existing directories on the Gentoo LiveUSB.
# Then the script imports the persistent zfs storage, including
# mounting/replacing the removed directories. Finally the script
# creates a new snapshot marking the beginning of the live session.

rm -rf /usr/portage/distfiles
rm -rf /tmp/portage
rm -rf /home/gentoo

zpool import -a -c /mnt/cdrom/scratch/zfs_bootstrap.cache
zfs snapshot -r gentooScratch@`date +Persistence_Remount_%Y%h%d_%H%M%S.%N`

# Now the script will import our LFS root pool for the target system,
# create a new snapshot marking the beginning of this LFS modification
# session, and then restore any gentoo/lfs values needed.

zpool import -a -c /mnt/cdrom/scratch/lfs_root.cache
mount /dev/nvme0n1p1 /rootPool/root_fs/esp
zfs snapshot -r rootPool@`date +Persistence_Remount_%Y%h%d_%H%M%S.%N`

# Restore the portage settings, and the LFS bootstrap packages/settings.
/gentooScratch/scripts/restoreGentooEnv.sh


The updated /gentooScratch/scripts/restoreGentooEnv.sh becomes:

#!/bin/bash -x

# Restore make/portage settings for this box.
cp /gentooScratch/scripts/data/make.conf /etc/portage/make.conf
cp /gentooScratch/scripts/data/00cpuflags /etc/portage/package.use/

# Restore needed Gentoo packages
emerge app-text/tidy-html5

# Set up our LFS environment; the profile.d script sets $LFS for future login shells
export LFS=/rootPool/root_fs/
cp /gentooScratch/scripts/data/lfs_var.sh /etc/profile.d


And the new /gentooScratch/scripts/data/lfs_var.sh file is:

export LFS=/rootPool/root_fs/


Of course we create a new snapshot to mark this improvement.
sudo zfs snapshot -r gentooScratch/scripts@lfs_var
Now we will reboot to ensure that $LFS is properly set.  Upon logging back in:


gentoo@livecd ~ $ echo $LFS
/rootPool/root_fs/
gentoo@livecd ~ $ df -h
Filesystem                                   Size  Used Avail Use% Mounted on
udev                                          10M  4.0K   10M   1% /dev
/dev/sda1                                     29G  5.9G   23G  21% /mnt/cdrom
tmpfs                                         32G   33M   32G   1% /.unions/memory
aufs                                          32G   33M   32G   1% /
/dev/loop0                                   2.0G  2.0G     0 100% /mnt/livecd
none                                          32G   20K   32G   1% /mnt/aufs-rw-branch
tmpfs                                        6.3G  1.7M  6.3G   1% /run
shm                                           32G     0   32G   0% /dev/shm
cgroup_root                                   10M     0   10M   0% /sys/fs/cgroup
vartmp                                        32G   36K   32G   1% /var/tmp
tmp                                           32G  4.0K   32G   1% /tmp
gentooScratch                                3.2G   79M  3.1G   3% /gentooScratch
gentooScratch/scripts                        3.1G     0  3.1G   0% /gentooScratch/scripts
gentooScratch/scripts/data                   3.1G     0  3.1G   0% /gentooScratch/scripts/data
gentooScratch/sources                        3.1G     0  3.1G   0% /gentooScratch/sources
gentooScratch/sources/gentoo                 3.1G     0  3.1G   0% /gentooScratch/sources/gentoo
gentooScratch/sources/gentoo/cpuid2cpuflags  3.1G  128K  3.1G   1% /gentooScratch/sources/gentoo/cpuid2cpuflags
gentooScratch/sources/lfs_book               3.1G   62M  3.1G   2% /gentooScratch/sources/lfs_book
gentooScratch/persistentHomeGentoo           3.4G  338M  3.1G  10% /home/gentoo
gentooScratch/persistentPortageTmp           3.1G     0  3.1G   0% /tmp/portage
gentooScratch/persistentPortageDistfiles     3.1G  640K  3.1G   1% /usr/portage/distfiles
rootPool                                     116G     0  116G   0% /rootPool
rootPool/root_fs                             116G     0  116G   0% /rootPool/root_fs
rootPool/root_fs/home                        116G     0  116G   0% /rootPool/root_fs/home
rootPool/root_fs/opt                         116G     0  116G   0% /rootPool/root_fs/opt
rootPool/root_fs/tmp                         116G     0  116G   0% /rootPool/root_fs/tmp
rootPool/root_fs/usr                         116G     0  116G   0% /rootPool/root_fs/usr
rootPool/root_fs/usr/src                     116G     0  116G   0% /rootPool/root_fs/usr/src
/dev/nvme0n1p1                               1.2G  4.0K  1.2G   1% /rootPool/root_fs/esp
none                                          32G  8.0K   32G   1% /run/user/1000



Nine steps down... lots to go.

Section 2.7 Mounting the New Partition

We will basically be skipping this step. ZFS automounts the pool's filesystems, as seen in the previous output; furthermore, those mounts and $LFS persist across reboots because our mountfs.sh script re-imports the pool and re-mounts the ESP.
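
Purely as a sanity check of my own (not something the book calls for), a couple of quick commands confirm that the datasets are mounted where the later chapters will expect them; the dataset names are the ones created above:

zfs get -r mounted,mountpoint rootPool
df -h $LFS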

Ten steps down... lots to go.

Section 3 Packages and Patches

Section 3.1 Introduction

Rather than create a new directory for the sources, we just create a descendant ZFS file system and then set the permissions as specified:

sudo zfs create rootPool/root_fs/sources
sudo zfs create rootPool/root_fs/sources/lfs_tarballs
sudo chmod -v a+wt $LFS/sources
sudo chmod -v a+wt $LFS/sources/lfs_tarballs
sudo zfs snapshot -r rootPool/root_fs/sources@init

We will start with the 'easy way': using the wget-list file provided by the book to download all the packages and patches.

wget --input-file=/gentooScratch/sources/lfs_book/book_output/wget-list --continue --directory-prefix=$LFS/sources/lfs_tarballs/ &> $LFS/sources/tarball_download.log
pushd $LFS/sources/lfs_tarballs
md5sum -c /gentooScratch/sources/lfs_book/book_output/md5sums
popd  
sudo zfs snapshot rootPool/root_fs/sources/lfs_tarballs@initial_download

This lists 7 files as not being downloaded, all of which appear to be related to SSL connections to the SourceForge server.  My initial guess is an expired certificate, so I will attempt to re-download them over a non-SSL connection, relying on the MD5 sums to validate the sources. [I know MD5 is no longer considered secure; it is possible, though not probable, that a MITM attack could replace a source tarball with a file that appears valid and has a matching md5sum, but the chance is tiny... and later on, where possible, I plan on cloning source repositories...]
  
The list of packages that were not downloaded was:
  • e2fsprogs-1.45.1.tar.gz
  • expat-2.2.6.tar.bz2
  • expect5.45.4.tar.gz
  • procps-ng-3.3.15.tar.xz
  • psmisc-23.2.tar.xz
  • tcl8.6.9-src.tar.gz
  • xz-5.2.4.tar.xz
By manually replacing https with http in the URLs, or by going to the SourceForge project pages, I was able to download the packages. Running the MD5 sums again, every file checks out as OK, so yet another snapshot.
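
For reference, the retry looked roughly like this; it is a reconstruction rather than a capture of my exact shell history, and both the grep pattern and the snapshot name are just illustrative choices:

pushd $LFS/sources/lfs_tarballs
# Retry only the failed SourceForge-hosted files, over plain http this time.
for url in $(grep -E 'e2fsprogs|expat|expect|procps-ng|psmisc|tcl8|xz-5' \
        /gentooScratch/sources/lfs_book/book_output/wget-list); do
    wget --continue "${url/https:/http:}"
done
# Re-verify everything against the book's checksum list, then snapshot.
md5sum -c /gentooScratch/sources/lfs_book/book_output/md5sums
popd
sudo zfs snapshot rootPool/root_fs/sources/lfs_tarballs@http_retry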

Quite a bit of time has now been spent getting the packages, so for now we will call that a session.
 
