# Introduction
With the growth in the number of devices on which EVE runs, we have seen more "flavours" of EVE: not just per architecture (amd64 and arm64), but also board-specific, e.g. imx. These variants are referred to as "platforms". Different variants of EVE are built by calling `make eve PLATFORM=<platform>`.
With some of these variants, we need not only specific kernels, but also specific files to be available on the system. These cannot be downloaded at runtime because:

- The data source with the packages might not be available to end devices at boot time;
- Even if the data source is available, it might be bandwidth-constrained;
- The device might need the files very soon after startup.

Thus, the only option is to build these files into the platform-specific variant of EVE.
This document describes a standard method for making these device-specific files available in an EVE build, as well as for executing necessary startup scripts, both at the device level and at the level of EVE-specific services (pillar).
## File Locations
All device-specific files should be placed on the root filesystem in `/opt/vendor/<vendorname>/`.
Note that, by definition, no two platform-specific variants of EVE are the same or built for the same platform. Thus, it is highly likely that in all cases there will be either zero (no device-specific flavour) or one (device-specific flavour) subdirectory under `/opt/vendor/`. For example, an Nvidia Jetson EVE will have `/opt/vendor/nvidia`, while an i.MX EVE will have `/opt/vendor/imx`. In theory, we could simply place everything under `/opt/vendor` and avoid another layer of device-name-specific subdirectory. We choose not to do this for two reasons:
- We cannot guarantee that we will never have a device with files from two distinct vendors; the `<vendor_name>` structure provides future flexibility.
- Analysis and debugging are much easier when the directory states explicitly which vendor the device-specific files are for.
The directory `/opt/vendor` will be available to any system container that needs it as a mount in `rootfs.yml`. In addition, it will be mounted as `/opt/vendor:/opt/vendor` into pillar as standard.
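As an illustration, a hypothetical Nvidia device might carry a layout like the following (the individual file and subdirectory names are invented for illustration):

```text
/opt/vendor/
└── nvidia/
    ├── init.d/             # executable startup scripts, run by pillar at startup
    │   └── 10-setup.sh     # hypothetical
    └── firmware/           # any static vendor files the device needs
```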
## Populating Vendor Directory
As `/opt/vendor` is in the base OS, which is read-only, it must be populated at build time. We use the `init` section of `rootfs.yml` to populate it.
Any device that requires a Board Support Package (BSP) should create a container via a directory in `lf-edge/eve/pkg/bsp-<vendor>`, e.g. `bsp-imx` (which already exists) or `bsp-nvidia`. The final stage in the Dockerfile must be `FROM scratch` and must save files solely to `/opt/vendor/<vendorname>`, so that the files will be properly placed in the rootfs.
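As a sketch, a hypothetical `bsp-nvidia` Dockerfile might look like the following; the base image and the build steps are assumptions for illustration, but the shape of the final stage is what the rule above requires:

```dockerfile
# Build stage: gather or compile the vendor artifacts.
# The base image and commands here are illustrative only.
FROM lfedge/eve-alpine:latest AS build
RUN mkdir -p /out/opt/vendor/nvidia
# ... fetch or build BSP files into /out/opt/vendor/nvidia ...

# Final stage: must be FROM scratch, with files solely under /opt/vendor/<vendorname>
FROM scratch
COPY --from=build /out/opt/vendor/nvidia /opt/vendor/nvidia
```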
## Long-Running Services
If the device requires device-specific long-running services, for example a fan or other device controller, these are considered system-level services and should be added to the `services` section of `rootfs.yml`.
The source for such services should create a container via a directory in `lf-edge/eve/pkg/services-<vendor>`, e.g. `services-nvidia`. As this runs in `services`, and therefore in its own container, it can be structured however it wants internally. It should avoid duplicating any files already in `/opt/vendor/<vendor>`, instead mounting them in, if at all possible.
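In `rootfs.yml`, such a service entry might look like the following sketch; the service name is hypothetical, and the tag placeholder assumes the `<PKGNAME>_TAG` convention used elsewhere in the build:

```yaml
services:
  - name: nvidia-fan-control        # hypothetical vendor service
    image: SERVICES_NVIDIA_TAG      # placeholder, filled in by parse-pkgs.sh
    binds:
      # mount the vendor files rather than duplicating them
      - /opt/vendor/nvidia:/opt/vendor/nvidia
```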
## Startup Services

If the device requires startup services, the approach depends upon the nature of the startup service:

- system-wide
- pillar-specific
### System-Wide
System-wide startup services, e.g. initializing a device, should be performed in an `onboot` container in `rootfs.yml`.

The source for such startup services should create a container via a directory in `lf-edge/eve/pkg/onboot-<vendor>`, e.g. `onboot-nvidia`. As this runs in `onboot`, and therefore in its own container, it can be structured however it wants internally. It should avoid duplicating any files already in `/opt/vendor/<vendor>`, instead mounting them in, if at all possible.
### Pillar-Specific
Pillar-specific startup services, e.g. modifications to the user containerd `config.toml` or to communications, should be performed by pillar itself.

On startup, pillar will execute any files found in `/opt/vendor/*/init.d/`. Obviously, if no `/opt/vendor/<vendorname>` directories exist, or those that do exist have no `init.d/` subdirectory, or those subdirectories contain no executable files, then nothing will be executed.

The `init.d/` startup programs should be created as part of `bsp-<vendor>`. Since those files are in `/opt/vendor/<vendorname>`, and are mounted into pillar, they will be available to pillar on startup.
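The glob-and-execute behaviour can be sketched in shell as follows. This is a minimal illustration of the semantics, not pillar's actual implementation; the function name is invented:

```shell
#!/bin/sh
# run_vendor_init: execute every executable file in $1/*/init.d/.
# Pillar performs the equivalent over /opt/vendor at startup.
run_vendor_init() {
    root="$1"
    for dir in "$root"/*/init.d; do
        [ -d "$dir" ] || continue        # no vendor dir, or no init.d/: nothing to do
        for script in "$dir"/*; do
            [ -x "$script" ] || continue # non-executable files are ignored
            "$script"
        done
    done
}
```

Pillar would then invoke the equivalent of `run_vendor_init /opt/vendor`.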
## Updating rootfs.yml
The above requires both permanent and platform-dependent dynamic changes to `rootfs.yml`.
### Permanent
The only permanent change is to always have `/opt/vendor:/opt/vendor` mounted into pillar. This is done by modifying `pkg/pillar/build.yml`:
```yaml
org: lfedge
image: eve-pillar
config:
  binds:
    - /lib/modules:/lib/modules
    - /dev:/dev
    - /etc/resolv.conf:/etc/resolv.conf
    - /run:/run
    - /config:/config
    - /:/hostfs
    - /persist:/persist:rshared,rbind
    - /usr/bin/containerd:/usr/bin/containerd
    - /opt/vendor:/opt/vendor # <---- NEW
  net: host
  capabilities:
    - all
  pid: host
  rootfsPropagation: shared
  devices:
    - path: all
      type: a
```
### Dynamic
#### Extending and Standardizing yml generation
`rootfs.yml` is composed from a template `rootfs.yml.in`, which is modified by `.yq` files and then filled in by `parse-pkgs.sh` with the names of the dynamic images.

Currently, there is no standard way of running the `.yq` files, although several exist for some variants; the only ones that get executed are those for different hypervisors. The rootfs build process, described in BUILD.md under `#generating-yml`, builds the final `rootfs.yml` as follows:
- The Makefile includes `kernel-version.mk`. This sets the variable `KERNEL_TAG` inside the make process to a specific docker image tag, based on the `ZARCH` and, if set, `PLATFORM`.
- The Makefile sees a dependency on `images/rootfs-$(HV).yml`.
- The Makefile runs `tools/compose-image-yml.sh images/rootfs.yml.in images/rootfs-$(HV).yml.in "$(ROOTFS_VERSION)-$(HV)-$(ZARCH)" $(HV)`, i.e. the utility `compose-image-yml.sh`, passing it:
  - the base template `images/rootfs.yml.in`, i.e. the input file
  - the template for the specific HV file `images/rootfs-$(HV).yml.in`, i.e. the output file
  - the version string, which is the `ROOTFS_VERSION`, hypervisor, and architecture
  - the hypervisor
- `compose-image-yml.sh` does the following:
  - Look for a modifier file `images/rootfs-$(HV).yml.in.yq`; this is identical to the HV-specific template (2nd argument), but with `.yq` appended to the filename.
  - If it finds a modifier file, apply it to the base template, and save the result to the HV-specific template.
  - Search through the output file for the string `EVE_HV` and, if found, replace it with the hypervisor.
  - If the version argument, which was generated from the git commit, contains the phrase `dirty`, i.e. uncommitted, then change the `PILLAR_TAG` in the output file to `PILLAR_DEV_TAG`, which will be used in a later stage.
- The Makefile runs `./tools/parse-pkgs.sh images/rootfs-$(HV).yml.in > images/rootfs-$(HV).yml`, i.e. the utility `parse-pkgs.sh`, passing it as input the HV-specific template generated in the previous step, `rootfs-$(HV).yml.in`, and saving the output to the final `rootfs-$(HV).yml` file. In addition, the variable `KERNEL_TAG` is passed as an environment variable. `parse-pkgs.sh` does the following:
  - Get the package tag for each directory in `pkg/` via `linuxkit pkg show-tag ${dir}`, and save it to a variable which looks like `<PKGNAME>_TAG`, e.g. `PILLAR_TAG` or `WWAN_TAG`.
  - Go through the input file (the HV-specific template) and replace the tags with the appropriate values. This includes the value of `KERNEL_TAG` as passed by the Makefile when calling `parse-pkgs.sh`.
Notably, `compose-image-yml.sh` looks for a file named `images/rootfs-$(HV).yml.in.yq` and applies it to the base template `images/rootfs.yml.in` to generate `rootfs-$(HV).yml.in`. This, in turn, is used as input to `parse-pkgs.sh`.
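The tag-replacement step can be illustrated with a minimal sed-based sketch. This approximates what `parse-pkgs.sh` does, rather than reproducing its actual code; the tag values are invented, since the real script derives them via `linuxkit pkg show-tag`:

```shell
#!/bin/sh
# resolve_tags: replace <PKGNAME>_TAG placeholders in a template on stdin
# with concrete image references, in the spirit of parse-pkgs.sh.
# PILLAR_TAG's value is hard-coded here purely for illustration;
# KERNEL_TAG comes from the environment, as it does in the real build.
resolve_tags() {
    sed -e "s|PILLAR_TAG|lfedge/eve-pillar:deadbeef|g" \
        -e "s|KERNEL_TAG|${KERNEL_TAG}|g"
}
```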
This process has the following issues:

- It is limited to hypervisors. Theoretically, calling it with something else as if it were a hypervisor, e.g. `make rootfs HV=something`, would work, but it would then get stuck at other stages, where `HV` really should be a hypervisor.
- It is limited to just one modifier. For example, if we want multiple different variants on multiple hypervisors, there is no way to do that.
We propose modifying `compose-image-yml.sh` as follows:
- Use flagged arguments, i.e. the current mode would be: `tools/compose-image-yml.sh -b images/rootfs.yml.in -m images/rootfs-$(HV).yml.in -v "$(ROOTFS_VERSION)-$(HV)-$(ZARCH)" -h $(HV)`
- Replace the usage of a single modifier with multiple, e.g. `tools/compose-image-yml.sh -b images/rootfs.yml.in -v "$(ROOTFS_VERSION)-$(HV)-$(ZARCH)" -h $(HV) images/rootfs-$(HV).yml.in.yq images/modifier1.yq images/modifier2.yq`
- Update the Makefile to call `compose-image-yml.sh` with multiple modifiers. `compose-image-yml.sh` will largely be "dumb", applying as many yq modifiers as are passed to it, if they can be found. The Makefile will pass modifiers for `HV` and `PLATFORM`.
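The proposed flagged-argument handling could be sketched like this. The option letters follow the proposal above, while the function name is invented and the actual yq application is elided, since it depends on `yq` being available:

```shell
#!/bin/sh
# Parse the proposed flags for compose-image-yml.sh:
#   -b base template, -m modifier (current single-modifier mode),
#   -v version string, -h hypervisor.
# Any remaining positional arguments are treated as further .yq modifiers,
# to be applied in order.
parse_compose_args() {
    base="" version="" hv="" modifiers=""
    OPTIND=1
    while getopts "b:m:v:h:" opt; do
        case "$opt" in
            b) base="$OPTARG" ;;
            m) modifiers="$modifiers $OPTARG" ;;
            v) version="$OPTARG" ;;
            h) hv="$OPTARG" ;;
            *) return 1 ;;
        esac
    done
    shift $((OPTIND - 1))
    modifiers="$modifiers $*"
    # A real implementation would now apply each modifier that exists;
    # here we just report what was parsed.
    echo "base=$base version=$version hv=$hv modifiers:$modifiers"
}
```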
#### Modifier for Specific Device Files
For specific device files, if `rootfs.yml` needs to be modified, add `rootfs-$(PLATFORM).yml.in.yq`.
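As a hypothetical example, a `rootfs-nvidia.yml.in.yq` modifier that appends a vendor onboot container might contain an expression along these lines; the entry name and tag placeholder are illustrative, and the exact expression syntax depends on the yq version used by the build:

```yq
# append a vendor onboot container to the base template
.onboot += [{"name": "onboot-nvidia", "image": "ONBOOT_NVIDIA_TAG"}]
```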