armbian-next: lib changes - MEGASQUASH - squashed changes from c9cf3fc241cfb4c872f4aef7bbc41d5854db7ea3 to 6809de3d6063cb041205a8318e19da6a4dee68c9 ref extensions_08_10_2022_pre_v30

- also compile.sh
- shellfmt lib
- split off shell and python tools under lib
- revert removal of stuff a-n no longer uses (ref. compilation): general packaging, mkdeb etc
- editorconfig split off
- extension changes split off
- sources and sources/families split off
- some unrelated stuff removed or split off

armbian-next: manual merge (30) of lib changes between 882f995e21 and 31ac6383e1

armbian-next: manual merge (30) of family/board changes between 882f995e21 and 31ac6383e1

armbian-next: manual merge (29) of family/board changes between 3435c46367 and 882f995e21 (A LOT!)

armbian-next: manual merge (29) of lib changes between 3435c46367 and 882f995e21 (A LOT!)

armbian-next: manual merge (28) of lib changes between revisions af6ceee6c5 and 38df56fbf3

armbian-next: manual merge (28) of sources/families changes between revisions af6ceee6c5 and 38df56fbf3

armbian-next: manual merge (27) of `lib` changes between revisions 9c52562176 and af6ceee6c5

armbian-next: manual merge (27) of `sources/families` changes between revisions 9c52562176 and af6ceee6c5

armbian-next: move `ROOTFSCACHE_VERSION` resolution from GitHub from `main-config` down to `create-cache`

- this way config does not depend on remote...

armbian-next: move `ARMBIAN_MIRROR` selection (network) from `main-config` to `prepare-host`

- this way CONFIG_DEFS_ONLY can run without touching the network

armbian-next: manual merge (26) of MD5-checking via debsums (3955) re-imagined

- @TODO make sure

armbian-next: manual merge (26) of sources/families changes between revisions 20ee8c5450 and 9c52562176

armbian-next: manual merge (26) of lib changes between revisions 20ee8c5450 and 9c52562176

- @TODO NOT including the md5/debsums check, that needs further rewrite

armbian-next: manual merge (25) of lib changes between revisions fe972621c6 and 20ee8c5450

- @TODO hmm Igor is now going out to the network for rootfs cache version during configuration phase!!! BAD BAD BAD

armbian-next: manual merge (25) of family changes between revisions fe972621c6 and 20ee8c5450

armbian-next: manual merge (24) of families changes between revisions 9ca9120420 and 560531a635

armbian-next: manual merge (24) of lib changes between revisions 9ca9120420 and 560531a635

armbian-next: manual merge (23) of all changes between revisions 17b4fb913c and 9ca9120420

armbian-next: manual merge (22) of all changes between revisions 0eb8fe7497 and 1dddf78cd0

- @TODO EXCEPT the insanity about locales/eval/VERYSILENT in #3850, requires deep review

armbian-next: manual merge (21) of all changes between revisions e7d7dab1bb and 0eb8fe7497

armbian-next: fix: patching CREATE_PATCHES=yes

- needed to create output dir

armbian-next: add `python2-dev` dep for old uboots

- cleanup some comments

armbian-next: manual merge (20) of all changes between revisions 6b72ae3c86 and 247c4c45fd

armbian-next: fix: pass `TERM` to kernel's make, so `make menuconfig` can work

armbian-next: fix: git: read commit UNIX timestamp/local date correctly

- `checked_out_revision_ts` was correct; git outputs `%ct` as a UNIX timestamp, UTC-based
- `checked_out_revision_mtime` was incorrect: git output it without converting to local time
- manually convert using `date -d @xx` so it carries the correct local time, whatever that is.
- add debugging to `get_file_modification_time()` too
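The manual conversion described above can be sketched like this (the timestamp value and variable names are illustrative; `%ct` is the format git emits):

```shell
# `git log -1 --format=%ct` emits a UTC-based UNIX timestamp, for example:
commit_ts=1660000000
# GNU `date -d @<ts>` converts that epoch value into the build host's local
# time, whatever its timezone is -- this is the manual conversion the fix does.
commit_local_date="$(date -d "@${commit_ts}" '+%Y-%m-%d %H:%M:%S %z')"
echo "ts=${commit_ts} local=${commit_local_date}"
```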

armbian-next: abstract `$QEMU_BINARY` to `qemu-static.sh`: `deploy_qemu_binary_to_chroot()`/`undeploy_qemu_binary_from_chroot()`

- add hackish logic to avoid removing a binary that is still needed when the image actually contains the `qemu-user-static` package

armbian-next: fix `uuidgen` basic dep check; use fake bash `$RANDOM` if uuidgen not available

- not good: we need uuidgen to begin logging, but it may not be installed yet. workaround.
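A minimal sketch of such a fallback (the `fake_uuid` name and format string are made up here, not the exact armbian-next code):

```shell
# If uuidgen is not installed yet (host deps come later), build a uuid-shaped
# string from bash's $RANDOM instead, so logging can start regardless.
fake_uuid() {
	# $RANDOM is 0..32767, so each value fits in 4 hex digits; 8-4-4-4-12 layout.
	printf '%04x%04x-%04x-%04x-%04x-%04x%04x%04x' \
		"$RANDOM" "$RANDOM" "$RANDOM" "$RANDOM" "$RANDOM" "$RANDOM" "$RANDOM" "$RANDOM"
}

if command -v uuidgen > /dev/null 2>&1; then
	uuid="$(uuidgen)"
else
	uuid="$(fake_uuid)"
fi
echo "${uuid}"
```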

armbian-next: retry 3 times download-only also for `PACKAGE_LIST_BOARD`

- acng is really not helping

armbian-next: allow customizing UBUNTU_MIRROR (ports mirror) with `CUSTOM_UBUNTU_MIRROR_ARM64=host/path`

armbian-next: WiP: kernel make via `env -i` for clean env; show produced /boot tree

armbian-next: manual merge (19) of all changes between revisions b23498b949 and e621d25adc

- the ssh firstrun revert stuff mostly

armbian-next: *breaking change* remove `LIB_TAG` and `.ignore_changes` completely

- one day should be replaced with an "update checker" extension, or even "update-enforcer"
- for now this just causes chaos

armbian-next: `python2` is required for some u-boot builds

- ideally we would use `python-is-python2`, so /usr/bin/python exists and points to Python 2.x, but Jammy no longer ships it
- that said, Python 2.x has been deprecated for a while now and this needs more work, thus @TODO

armbian-next: bump Python info gatherer to RELEASE=jammy too

armbian-next: add `KERNEL_MAJOR_MINOR` info to `media` kernel (@balbes150)

- 5.18 is not yet released, so this might be a problem here

armbian-next: allow to skip submodules during `fetch_from_repo`; introduce hook `fetch_custom_uboot`

- via GIT_SKIP_SUBMODULES=yes, which disables all submodules everywhere
- via UBOOT_GIT_SKIP_SUBMODULES=yes, which disables fetching of submodules during uboot fetch (hidden rkbins anyone?)
- extension hook `fetch_custom_uboot` so we can fetch our own stuff if needed

armbian-next: `initrd` caching fixes (always enable hook; if cache hit, convert to uImage too)

armbian-next: introduce `initramfs`/`initrd` caching

- using hashes of (hopefully) all involved files
- cache hits are rewarded with sprinkly hearts.
  - why? this proves we got a reproducible kernel modules build!
  - also, you just saved yourself 2-10 minutes of pain
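The hash-the-inputs idea above can be sketched roughly like this (the function name and hashing scheme are illustrative, not the actual armbian-next implementation):

```shell
# Compute a cache key from all files that feed the initramfs: hash every
# file's contents plus its relative path, in a stable order, into one digest.
# Identical inputs -> identical key -> cache hit (and sprinkly hearts).
initrd_cache_key() {
	local dir="$1"
	(cd "$dir" && find . -type f -print0 | sort -z |
		xargs -0 sha256sum | sha256sum | cut -d' ' -f1)
}
```

A cache hit on this key is exactly the "reproducible modules build" proof the commit celebrates: if any involved file changed, the key changes.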

armbian-next: manual merge (18) of changes between revisions 08cf31de73 and c8855aa08d

- heh; most bash code changes are for things already done in -next, or no longer used
- some version bumps, etc

armbian-next: cleanup entrypoint and shuffle `prepare_host_basic()` into logging section

armbian-next: *breaking change* add global extlinux killswitch `ALLOW_EXTLINUX`

- unless you set `ALLOW_EXTLINUX=yes`, `SRC_EXTLINUX` will be disabled globally.
- add a bunch of logging regarding extlinux, armbianEnv and bootscripts for clarity during build
- this is due to nand-sata-install problems with extlinux
- some boards _only work_ with extlinux; we'll have to handle it later

armbian-next: extensions: `image-output-{qcow2|ovf}`: virtual output formats

- which use `qemu-utils` for `qemu-img` conversion of the .img

armbian-next: extension: `kernel-localmodconfig`: faster/slimmer kernel builds with `make localmodconfig`

armbian-next: extension: `cleanup-space-final-image`: zerofree, slim down firmware, show used space

armbian-next: introduce `do_with_ccache_statistics` and use it for kernel compile

- some TODOs
- better logging for .config copying

armbian-next: *breaking change* really disable apt sources for non-desktop builds

armbian-next: fix: don't manage apt-cacher-ng if told NOT to, not the other way around

armbian-next: `JUST_UBOOT=yes` + hooks `build_custom_uboot()`/`post_write_uboot_platform()`

- post_write_uboot_platform()
  - only runs during build, for now (not on device)
- build_custom_uboot()
  - allow fully custom, extension driven, building of u-boot
  - also partial preparation of uboot source combined with default Armbian build
- HACK: u-boot: downgrade some errors to warnings via KCFLAGS
- fix copy of atf bins to uboot, don't do it if atf's not there

armbian-next: fix: no use testing the host for resolvconf if we're manipulating the SDCARD

armbian-next: sunxi_common: avoid shortcircuit error on family_tweaks_bsp when family_tweaks_bsp_s is not defined

armbian-next: fix: add `zstd` and `parallel` to hostdeps

armbian-next: manual merge (17) of all changes between revisions 64410fb74b and 08cf31de73

- changes about `git safe dir` ignored, I've done the same in a different way
- hash calculation changes ignored, fasthash is completely different

armbian-next: add `crossbuild-essential-armel` so `arm-linux-gnueabi-gcc` is available with system toolchains

- needed for some ATF builds, at least.

armbian-next: rockchip64_common: lotsa logging and debugging

- supposedly no practical changes

armbian-next: grub: better logging

armbian-next: fix for chaos caused by git's fix of CVE-2022-24765 otherwise "fatal: unsafe repository"

- might not be the best solution, but it's the only one I found

partitioning: fix: don't try fixing a bootscript that's not there

- this fixes a bug when `rootpart=2` is used without partition 1 being `/boot`

armbian-next: cleanups: umount tmpfs-based $SDCARD during cleanup too

armbian-next: indented heredoc, no functional changes

armbian-next: fix shortcircuit as last statement in case of extlinux

- yes, I wasted 3 hours on this tiny bit, so *you* don't have to!
- better logging for rootfs `mkfs` et al
- introduce `PRESERVE_SDCARD_MOUNT=yes` to preserve SDCARD, MOUNT, and LOOP for debugging
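For reference, the class of bug fixed here (the one worth 3 hours): a short-circuit `&&` as the last statement of a function makes a false condition become the function's exit status, which aborts the whole build under `set -e` or an ERR trap. A minimal illustration, not the actual partitioning code:

```shell
# BUGGY: when the condition is false, the `&&` list's non-zero status becomes
# the function's return value -- under `set -e` the build dies for no reason.
buggy() {
	[[ "${SRC_EXTLINUX:-no}" == "yes" ]] && echo "extlinux enabled"
}

# FIXED: use a plain `if`, and return 0 explicitly so a false condition
# can't leak out as the exit status.
fixed() {
	if [[ "${SRC_EXTLINUX:-no}" == "yes" ]]; then
		echo "extlinux enabled"
	fi
	return 0
}
```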

armbian-next: kernel-headers: less verbose, trimmed down tools a bit (perf and testing)

khadas-vim3l: add asound.state for Khadas VIM3L

armbian-next: introduce hook `extension_finish_config()` - late hook for ext configuration

- `extension_finish_config()` is the last thing done in config phase
    - use it for determining stuff based on kernel version details, package names, etc
- also tune down some logging which was too verbose
- CI logs with no ANSI escape codes

armbian-next: shuffle around code and logic of `add_desktop_package_sources()`

- @TODO: still needs proper asset logging for sources.list(.d)
- @TODO: this tunes down adding of sources/packages for CLI builds; check with Igor

armbian-next: 4.x can't build objtool in kernel-headers; allow for handling that later

- 4.x has a lot more obtuse dependencies
- introduce KERNEL_HAS_WORKING_HEADERS calculated based on KERNEL_MAJOR_MINOR

armbian-next: downgrade `error=misleading-indentation` to warning

- some 4.x kernel patches are really messy
- newer GCCs now make that an error

armbian-next: *allow cross compilation*, even the so-called "reverse cross-compile" (amd64 on arm64)

armbian-next: add `zfs` extension, which installs headers and builds ZFS via DKMS in chroot

- similar to how `nvidia` extension does it

armbian-next: x86: enable `nvidia` extension for all releases (only desktop)

armbian-next: `headers-debian-byteshift.patch` is dead; long-live cross-compiled source-only kernel-headers

- kernel-headers package now only includes _sources_
- postinst does the compilation and preparation for DKMS compatibility
- `tools` dir is included now, which includes the byteshift utilities
- handle special scripts/module.lds case after 5.10
- tested on 6 combinations of `x86` / `arm64` / `armhf` (3x targets, 2x hosts)
- @TODO: we might be able to reduce the size of tools a bit (perf/tests/etc)
- @TODO: still missing ARCH vs ARCHITECTURE vs SRC_ARCH clarity elsewhere

armbian-next: allow `use_clean_environment=yes` for `chroot_sdcard_apt_get()` and descendants

- this causes the command to be run under `env -i`, for a clean environment
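A quick illustration of what `env -i` buys us: variables exported by the caller do not reach the child at all.

```shell
# The caller's exported variables do not survive `env -i` -- the child only
# sees what we explicitly pass on the command line (PATH here, so that
# `bash` can still be found).
export POLLUTING_VAR="dirty"
env -i PATH=/usr/bin:/bin bash -c 'echo "POLLUTING_VAR is: ${POLLUTING_VAR:-unset}"'
# prints: POLLUTING_VAR is: unset
```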

armbian-next: manual merge (16) of all changes between revisions be9b5156a4 and 2a8e1ecac1

- many `traps` ignored: we don't use them anymore

armbian-next: fix logging for apt sources/gpg keys

armbian-next: don't leak `if_error_xxx` vars across runner helper invocations; always clean them (even if no error)

- also: fix wireguard-tools install, had a double parameter there

bcm2711: rpi4b: add `pi-bluetooth` which provides working Bluetooth

armbian-next: fixes for (non-)logging when interactively configuring kernel (`KERNEL_CONFIGURE=yes`)

armbian-next: move `lz4` rootfs caches to `zstd`, multithreaded

armbian-next: customize.sh: error handling, do not mount overlay if it doesn't exist

armbian-next: extra info for runners; `if_error_detail_message` and `if_error_find_files_sdcard` globals

- those are unset after running any command
- if an error occurs, the message and/or found files will be included in the log, for clarity
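The lifecycle of these globals can be sketched like so (the variable names match the commit; the runner logic itself is a simplified stand-in, not the actual armbian-next runner):

```shell
# Caller sets if_error_detail_message before invoking the runner; the runner
# logs it only on failure, and ALWAYS unsets it afterwards -- success or not --
# so it can never leak into the next invocation.
run_logged() {
	local rc=0
	"$@" || rc=$?
	if [[ $rc -ne 0 && -n "${if_error_detail_message:-}" ]]; then
		echo "ERROR detail: ${if_error_detail_message}" >&2
	fi
	unset if_error_detail_message if_error_find_files_sdcard # clean even on success
	return $rc
}
```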

armbian-next: manual merge (15) of all changes between revisions 0f7200c793 and 101eaec907

armbian-next: better logging for `rsync` calls everywhere

- make rsync verbose

armbian-next: downloads: skip download if no `ARMBIAN_MIRROR` nor `DOWNLOAD_MIRROR`; less logs

armbian-next: update rockchip.conf from master and use runners

armbian-next: update mvebu64.conf from master and use functions

armbian-next: git: fix `fetch_from_repo` with actual submodules usage

armbian-next: `armbian-next`ify the `nvidia` extension after rebase from master

- driver version is configurable via `NVIDIA_DRIVER_VERSION`
- use runner function to log/error-handle/use apt cache/etc

rpi4b: there's no legacy branch anymore, remove it from KERNEL_TARGET

armbian-next: `download_and_verify` non-error handled; logging is messy [WiP] [HACK]

armbian-next: logging: let ANSI colors pass to logfile; CALLER_PID instead of BASHPID in subshell

armbian-next: enable HTTPS CONNECT in Armbian-managed apt-cacher-ng configuration

- PPAs require it

armbian-next: don't loop forever if we can't obtain ARMBIAN_MIRROR from redirector

- also, don't even try to do it if `SKIP_ARMBIAN_REPO=yes`

armbian-next: manual merge (14) of all changes between revisions 13469fd8a9 and 09e416e31c

- also editorconfig and compile.sh (root) changes

armbian-next: *much* improved logging to HTML; log archiving; consistency

- keep only current logfile
- log to LOGFILE also if SHOW_LOG=yes
- log cmd runtime and success/error directly in runner

armbian-next: *breaking change* use `MemAvailable` (not including swap) and up requirements for tmpfs

- of course add debugging logs
- rename vars
- we should really only use tmpfs if there's a lot of completely free RAM to spare
- otherwise the OOM killer comes knocking
- or we start swapping to disk, which is counter-productive

armbian-next: *breaking change* `DEB_COMPRESS=none` by default if not running in CI/GHA

armbian-next: *breaking change* `CLEAN_LEVEL=make` is no more; new `make-kernel`, `make-atf`, `make-uboot`

- allows individual control of what to clean
- this effectively disables `make clean` by default
- rebuilds work and timestamping works for patching, so there's no reason to clean every time by default.

armbian-next: refactor `prepare_host`, give `apt-cacher-ng` some much needed attention

- library dir for host-related stuff, pull it out of "general" finally

armbian-next: hostdeps: all toolchains via `crossbuild-essential-arm64`/`armhf`/`amd64`

- trying to sort out hostdeps for Jammy [WiP]

armbian-next: remove `eatmydata` usage, leftover from failed tries to make git faster

armbian-next: fix git origin check, recreate working copy if origin does not match

- fix cold bundle https download progress reporting

armbian-next: finally consolidating logs into output/logs; colorized HTML logs

armbian-next: introduce `do_with_retries()` and use it for apt remote operations during image build
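A hedged sketch of such a retry helper (the real `do_with_retries` signature, logging, and sleep behavior may well differ):

```shell
# Run a command up to N times, sleeping between attempts; succeed on the
# first attempt that works, fail only if every attempt fails.
do_with_retries_sketch() {
	local retries="$1" sleep_seconds="$2"
	shift 2
	local attempt
	for ((attempt = 1; attempt <= retries; attempt++)); do
		"$@" && return 0
		echo "attempt ${attempt}/${retries} failed: $*" >&2
		[[ $attempt -lt $retries ]] && sleep "${sleep_seconds}"
	done
	return 1
}
```

Typical use during image build would be wrapping `apt-get` download steps, so a flaky mirror (or acng hiccup) doesn't kill the whole run.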

armbian-next: another round of logging tuning/fixes; log assets; git logging

- introduce `do_with_log_asset()` and `LOG_ASSET=xxx`
- separate "git" logging level
- add `trap_handler_cleanup_destimg()` to cleanup DESTIMG

armbian-next: kernel: use parallel compressors; reproducible kernel builds

- also remove leftover mkdebian/builddeb parameters in make invocation
- add pbzip2 to hostdeps

armbian-next: tuning logging for timestamp/fasthash related stuff which is very verbose

- idea is to not completely overwhelm `SHOW_DEBUG=yes` case
- make patching quieter and use file instead of stdin
- set checked_out_revision_ts during git checkout (timestamp version of _mtime)
- timestamp | fasthash logging level (via `SHOW_FASTHASH=yes`)

armbian-next: completely remove mkdebian/builddeb/general-packaging kernel packaging stuff

armbian-next: manual merge (12) of all changes between revisions 34d4be6b7b and 5fe0f36aa8

armbian-next: introduce `PRESERVE_WORKDIR=yes` for heavy debugging

armbian-next: packaging linux-headers again

- do NOT use any output from `make headers_install` - that's for libc headers
- grabs "headers" (and binary tools) directly from the kernel build tree, not install target
- does not produce headers if cross compiling, for now
- produces kernel-headers package for the architecture on which it was built
- doing a single make invocation with build and install for packaging
  - using 'make all' in place of vmlinuz/bzImage/image/zImage

armbian-next: apt download-only retried 3 times before installing main packages

armbian-next: fix `VER=` read from kernel-image package, also add `linux` default

armbian-next: some logging for atf compilation

armbian-next: rewrite hostdeps as array, add armhf toolchains

armbian-next: distro-agnostic: cleanups

armbian-next: armbianEnv hooks/debugs (bsp / image)

armbian-next: rpi: completely remove dtb hacks, allowing flash-kernel to work again

armbian-next: refactor new kernel packaging; add linux-dtb package back in finally, upgrades untested

armbian-next: refactor new kernel packaging; extract hook helper, fix kernel symlink

armbian-next: refactor new kernel packaging; add linux-dtb package back in finally, all hooks untested

flash-kernel: fix short-circuits as last statement in functions

armbian-next: do not force `SHOW_LOG=yes` if `CI=true`; let's _trust_ logging and error handling works

armbian-next: back out of setting mtime to the revision time during git checkout.

- of course, this causes huge recompiles when the wanted revision moves forward

armbian-next: sync 'config' dir from master revision ed589b248a

- this is _also_ getting out of hand... gotta merge soon

armbian-next: sync 'packages' dir from master revision ed589b248a

armbian-next: manual merge (11) of all lib/*.sh changes between revisions 3305d45b81 and ed589b248a

armbian-next: more refactorings, general logging; fixes; split image stuff

- logging now flows correctly to LOGDIR; still needs packaging

armbian-next: complete removal of usages of `LOG_SUBPATH`; 100% error handled

- loose ends, use new LOGDIR
- remove the last shortcircuit in extensions execution, now it's 100% error handled
- many logging fixes
- still missing: final log consolidation/cleanup

logging: blue groups if `SHOW_DEBUG=yes` or `SHOW_GROUPS=yes` (console equivalent of CI's avocado)

armbian-next: shut down some too-verbose logging: logo building and update-initramfs

armbian-next: git/patching, kernel: use date from git as mtime minimum for patched files

- use revision's date from git log as mtime for all fetch_from_repo
- fix patched files' mtime to be at least the checkout date, otherwise some patches never trigger a rebuild

armbian-next: first attempt at kernel packaging rework; just linux-image pkg, no dtbs yet

- correctly predict KERNELRELEASE, put image-dtbs in the right spot for flash-kernel
- remove dpkg-gencontrol, do it all directly

armbian-next: rework kernel source packaging, now exporting from git, to .tar.zst

- compress with zstdmt (multi-thread zstd), remove pv indicator, it's much faster anyway
- export from git (soon will have all patches et al too)
- better logging, show pkg name
- much, much faster due to zstdmt and debs with no compression

armbian-next: a bit atrocious, nameref loaded, `get_list_of_all_buildable_boards()`

- in the process, add support for userpatches/config structure mirroring core, for easy upstreaming

armbian-next: make `SKIP_EXTERNAL_TOOLCHAINS=yes` the default; let's see what breaks [WiP]

armbian-next: keeping stdout clean, use display_alert() for cleanup logging

armbian-next: library cleanups; remove `build-all.sh` for good; bring `patching.sh` back

armbian-next: `interactive_desktop_main_configuration()` and stderr'ed + error handl'ed dialog

- use redirection to fd 3 for dialog, now cleanly on stderr
- `show_menu()` -> `dialog_menu()` et al
- interactive configuration now works again!

armbian-next: logging: `SHOW_PIDS=yes`

armbian-next: refactor and error-handle most of desktop configuration, incl menus/dialog

- `dialog_if_terminal_set_vars()` in place of `dialog_if_terminal()`

[WiP] ddk stuff, allow if not in `$KERNEL_TARGET`

armbian-next: split `compile_kernel()` function into smaller functions (+logging)

- `do_with_logging()` galore, much better error reporting for kernel
- `do_with_hooks()` is for the future, just a placeholder for now

armbian-next: `do_with_hooks()` placeholder for future ideas

armbian-next: logging: small refactor and `do_with_logging` admit it does not do error control

armbian-next: fix: traps: `trap_manager_error_handled` is integer (`-i`) not array (`-a`)

armbian-next: sunxi-tools: fix logging for sunxi-tools compilation

armbian-next: runners now run bash with `-o pipefail` in addition to `-e`

- attention, only affects stuff run through the functions in runners.sh
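A quick demonstration of why `pipefail` matters on top of `-e`: without it, a pipeline's exit status is that of the last command only, so a failing producer is silently ignored.

```shell
# Without pipefail: `false | true` exits 0, because only `true` (the last
# stage) counts. With pipefail: any failing stage fails the whole pipeline.
rc_without=0
bash -c 'false | true' || rc_without=$?
rc_with=0
bash -c 'set -o pipefail; false | true' || rc_with=$?
echo "without pipefail: ${rc_without}, with pipefail: ${rc_with}"
# prints: without pipefail: 0, with pipefail: 1
```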

armbian-next: kernel: reduce logging clutter (CC,LD,AR)

- hide fasthash_debug under `SHOW_FASTHASH`

armbian-next: `armhf` should make `zImage` -- or should it?

armbian-next: show logs through ccze; avoid ANSI escapes in file; `SHOW_xxx` control

- `SHOW_DEBUG` shows the debug level
- `SHOW_COMMAND` shows all invoked commands through the runners
- `SHOW_TRAPS` to show 'cleanup' and 'trap' level
- `SHOW_TIMING` to show $SECONDS but pretty
- replace hardcoded traps/cleanups logging

armbian-next: add `KERNEL_MAJOR_MINOR=x.z` to every family, manually from the `KERNELBRANCH`

armbian-next: cold/warm bundles import/download/export for fetch_from_repo

- warm remote, if present, can be exported shallow
- if warm remote bundle is present, can be imported shallow too
- fallback to cold bundle if warm not present
- export (from cold, if exists + warm) shallow bundle
- use temp clone and DATE (not rev or tag) for shallowing, WORKS!
- info JSON/CSV, include "config_ok" true/false, kernel/uboot info
  - include logs for failed configs too
  - core reports ARMBIAN_WILL_BUILD_KERNEL and ARMBIAN_WILL_BUILD_UBOOT now with package names

armbian-next: `KERNELDIR` is out, `KERNEL_MAJOR_MINOR` is in for all `meson64`, `rpi4b` and `uefi`

armbian-next: new kernel workdir layout: cache/sources/kernel/<ARCH>-<KERNEL_MAJOR_MINOR>-<LINUXFAMILY>

- `GIT_FIXED_WORKDIR` is used to ignore 2nd param and use a specific dir
- this now REQUIRES `KERNEL_MAJOR_MINOR` to be set.
- prepare some `WARM_REMOTE_NAME` and related, based on it

armbian-next: JUST_KERNEL=yes (opposed to KERNEL_ONLY=yes) is really just the kernel build

armbian-next: fetch_from_repos now done when actually compiling atf/uboot/kernel, not before

- lib regen after removing empty files (sources.sh and errors.sh are now gone)

armbian-next: linux: back to Torvalds bundle, no tags; reminder about export

armbian-next: full cached kernel build; refactor all make's in a single place, even for packaging

- 2nd+ runs build in less than a minute
- kernel: compile and package in a single step, more efficient?
- KERNEL_BUILD_DTBS yes/no to build or not dtbs, replaces KERNEL_EXTRA_TARGETS
- dtbs_install, modules_install and headers_install now called by Armbian, not packaging
- kernel now uses split, but identical, `make` invocations for building and installing modules/headers/dtbs
- make mkdebian and builddeb as idempotent as possible
- keep a lot more cache, especially the 'debian' folder
- filtering logging of install stuff
- might be a few leftovers, revisit gains with packaging later
  - keeping the arm64 makefile Image.gz vs Image hack
  - fix order of packaging patch byteshift, but still there
  - cleaning of scripts tools on cross compile removed (!)

armbian-next: minor stylistic changes that I end up doing while working on other stuff

- I am `OFFLINE_WORK`'ing, I don't wanna waste 3 seconds, thanks
- OCD-fix of double `local` declarations

[giga-wip] rework patching, introducing fasthash

[wip] git: experiment with stable kernel bundle, and all tags. nice, but for what?

- also: never delete working copy, exit with error instead.

[wip] disable make clean during packaging. I wanna rebuild fast, always [NO PR?]

armbian-next: export CHOSEN_KERNEL_WITH_ARCH for reporting info

- fix info gathering, parse all boards first, and stop if some failed
- fix KERNEL_TARGET regex by including optional "export "
- add export from info to CSV, very basic stuff, but works

[squash] remove ddk bullshit from KERNEL_TARGET

armbian-next: remove file logging of aggregation stuff. config phase can't touch disk anymore.

[WiP] git cold bundle; no: shallow clones/fetched; yes: locally packed repo

armbian-next: reorder functions in file, they have a ~logical call-tree order

armbian-next: move `fingerprint_image()` out of `git.sh` into its own file

logging: fix for double stderr redirect during `fakeroot_dpkg_deb_build`

logging: subdued "command" logging for debugging low level cmd invocations ("frog")

armbian-next: when showing log, emit all host-side invocations as `debug` too.

[WiP] trap handler abstraction, somewhat works!

armbian-next: manual merge (10) of all lib/*.sh changes between revisions a4ae3a2270 and 3305d45b81 - but NOT the git unshallow stuff, that will be rewritten

armbian-next: trapmanager pt1, identifying spots for trap manager intervention

armbian-next: `install_pkg_deb` -> `install_host_side_packages` which is completely rewritten version

- much simplified; compare installed packages vs wanted, and only update if some missing
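The compare-installed-vs-wanted logic can be sketched like this (both lists are plain parameters here so the sketch stays portable; the real helper would feed the installed list from `dpkg-query`, and the exact armbian-next code differs):

```shell
# Print the wanted packages that are NOT in the installed list -- if this
# prints nothing, there's nothing to install and apt can be skipped entirely.
missing_packages() {
	local wanted="$1" installed="$2" # whitespace-separated package names
	comm -23 <(tr ' ' '\n' <<< "${wanted}" | sort -u) \
		<(tr ' ' '\n' <<< "${installed}" | sort -u)
}
```

On a real host the second argument would come from something like `dpkg-query --show --showformat '${Package}\n'` (hypothetical wiring, shown only to indicate intent).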

armbian-next: force u-boot's and the kernel's gcc to output colors, making it easy to spot warnings and errors

docker: pass the `CI` env var through Docker invocation, for GitHub Actions

armbian-next: avoid warning if "file" utility not installed

- should not happen, but better safe than sorry

armbian-next: disable long_running `pv` progress bar for custom case too

- will rework later; for now the pipe causes a subshell and chaos

armbian-next: if `CI=true` then `SHOW_LOG=yes` always

docker: add arm64 toolchain to Dockerfile; warn, but don't break, on modprobe failure

armbian-next: docker: use ubuntu:rolling, fix deps, use `$SRC/cache` as container's cache dir

armbian-next: logging fixes (padding, don't show error more than once, don't remove trap)

armbian-next: fixes for early error handling and logging

- split stacktrace-related functions into their own lib file
- simplify the traps
- some stacktrace formatting for readability

armbian-next: fix: don't `trap` `ERR` twice, it causes bash to go bananas regarding `caller`

armbian-next: `UPSTEM_VER` -> `UBOOT_REPO_VERSION` and related fixes

armbian-next: oops, fix some non-lib changes I missed, up to revision ff4346c468

armbian-next: manual merge (9) of all lib/*.sh changes between revisions 3b7f5b1f34 and ff4346c468

armbian-next: more error handling fixes. avoid shortcircuits.

- store previous error message in `MSG_IF_ERROR` (still to be actually shown somewhere during error)

armbian-next: more error handling fixes. avoid subshells, shortcircuits, and pipes

- add `CFLAGS=-fdiagnostics-color=always` to kernel compile; would need also for u-boot soon

WiP: indexing JSON into OpenSearch, all-hardcoded version

rpi: add DTB symlink in Debian/Ubuntu standard location /lib/firmware/$version/device-tree; remove build-time-only hacks

- this allows us to remove the most horrible hack
- should allow for correctly working DTB upgrades
- should NOT impact other families, although a new symlink will be created, nothing uses it.

rpi: fix: flash-kernel fix to ignore kernel 'flavour' for all raspi's

armbian-next: don't try to remove packages that are not installed to begin with

- much faster
- new chroot_sdcard_with_stdout() runner, without bash or any escaping.

armbian-next: don't try to enable systemd services for units that don't exist

- those might be removed by a bsp extension, so check for existence before trying to enable

armbian-next: don't error/warn on failure to enable bootsplash when systemd units missing

armbian-next: use indented HEREDOCS for all call_extension_method's

armbian-next: manual merge (8) of all lib/*.sh changes between revisions 1d499d9ac2 and 3b7f5b1f34

armbian-next: manual merge (7) of all lib/*.sh changes between revisions d885bfc97d and 1d499d9ac2

armbian-next: manual merge (6) of all lib/*.sh changes between revisions c7f3c239fe and d885bfc97d

armbian-next: avoid writing to disk during configuration; `ANSI_COLOR=none` logging; make CONFIG_DEFS_ONLY=yes runnable without sudo

- when `CONFIG_DEFS_ONLY=yes`, avoid writing the config summary output.log file.
  - refactor that into a function, so it's easy to conditionally skip
  - don't write to disk during aggregate_content() if `CONFIG_DEFS_ONLY=yes`
  - don't write to disk during show_checklist_variables() if `CONFIG_DEFS_ONLY=yes`
  - don't write to disk during write_deboostrap_list_debug_log() if `CONFIG_DEFS_ONLY=yes`
  - don't compress and rotate logs if `CONFIG_DEFS_ONLY=yes`
- don't pretend to be handling errors we can't handle during var capture
- I foresee a world where we can build all .debs without sudo
- and some kind of split of the codebase entrypoint due to that future feature
- some python info.py enhancements, not ready yet

armbian-next: shellfmt and regen library (after rebase from master n.5)

tools/shellfmt.sh: exclude "cache" and ".tmp" from formatting, for obvious reasons

tools/gen-library.sh: sort function files, so it does not keep changing between runs on different machines.

- order should not be important, since files only contain functions, but avoid git churn

armbian-next: manual merge (5) of all lib/*.sh changes between revisions 1b18df3c83 and e7962bb2b5

- most PKG_PREFIX work was already done

armbian-next: `TMPDIR` for all, many logging fixes, error handling: leave-no-garbage-behind without needing traps.

- set `MOUNT_UUID` and `WORKDIR`/`MOUNT`/`SDCARD`/`EXTENSION_MANAGER_TMP_DIR`/`DESTIMG` early in do_main_configuration()
  - but, they're just _set_ there, dirs are not created early, but on demand later
  - still @TODO: actually clean those during error trap. (unhappy path leaves garbage still)
  - but does not leave garbage behind during "successful" runs at least (happy path works)
- actually export `TMPDIR` (== `WORKDIR`) during start of build (not config!), so all `mktemp` are subject to it
  - `runners.sh` has helpers to avoid passing `TMPDIR` to chroot. Use the helpers! don't call `chroot` directly.
  - don't trap/cleanup individual `mktemp` temp dirs during .deb packaging; it is all handled at once now.
  - kernel packaging, for example, automatically picks up `TMPDIR` too, so the host's `/tmp` is mostly left alone.
- fix some "infodumps" that are done into `.log` files directly.
- don't use sudo if `CONFIG_DEFS_ONLY=yes`; we'll only be collecting info, not doing anything.
- simpler logging for `rsync` operations (just dump to stdout, logging will handle it!)
- use padded counter for section logfiles, so we know which order things ran. exported as `CURRENT_LOGGING_COUNTER`
- no reason to use `apt-get` with `-yqq` anymore, since all logging is handled, so now `-y` by default
- desktop: using runners helpers for rootfs-desktop.sh, which should help a lot with acng caching and finding of problems
- extensions: correctly clean up temp stuff; the extension manager has its own tmp/workdir now, which is always cleaned up at the end of the build.
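The `TMPDIR` trick in a nutshell (directory names here are illustrative):

```shell
# Once TMPDIR is exported, every plain `mktemp`/`mktemp -d` in the build
# (including kernel packaging) lands under our workdir instead of the host's
# /tmp -- so one cleanup sweep at the end removes everything at once.
workdir="$(mktemp -d)" # stand-in for the build's WORKDIR
export TMPDIR="${workdir}"
scratch="$(mktemp -d)" # no arguments needed; mktemp honors TMPDIR
echo "scratch dir: ${scratch}"
```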

armbian-next: bye `PKG_PREFIX`, hello `run_host_x86_binary_logged()` wrapper function; better error handling

- we have x86-only utilities that might need to be run on non-x86 build machines
- logic previously duplicated in the PKG_PREFIX variable is refactored into a logged function
- added centralized debug logging
- replace all PKG_PREFIX usage with the new wrapper function, which already handles logging and errors.
  - mostly FIP tooling invocations
  - but also the boot_logo image builder
  - wrapper function delegates to common `run_host_command_logged`
- wrap other FIP invocations with `run_host_command_logged` too, for tidy logging
- avoid using conditionals when invoking functions; that completely disables error handling inside the called function
- use explicit bash opts instead of shortcuts like `set -e`
- a _lot_ of debug logging added
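A rough sketch of the wrapper's shape (the real `run_host_x86_binary_logged()` lives in the build tree and its internals may differ; the qemu binary name is an assumption about how a non-x86 host runs x86 blobs via qemu-user-static):

```shell
# Stand-in for the real logged runner, only so this sketch is self-contained.
run_host_command_logged() {
    echo "RUN: $*" >&2
    "$@"
}

# Hedged sketch: prefix x86-only binaries with a qemu user-mode emulator when
# the build host is not x86_64, then delegate to the common logged runner.
run_host_x86_binary_logged() {
    declare -a prefix=()
    if [[ "$(uname -m)" != "x86_64" ]]; then
        prefix=(qemu-x86_64-static) # assumption: provided by qemu-user-static
    fi
    run_host_command_logged "${prefix[@]}" "$@"
}
```

With such a wrapper, a FIP tooling invocation becomes `run_host_x86_binary_logged ./fip_create ...` instead of `${PKG_PREFIX}./fip_create ...`, and logging/error handling come along for free.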

armbian-next: always use UPPERCASE labels for FAT32 UEFI filesystems (rpi4b, uefi-*)

armbian-next: shellfmt after rebase onto master

armbian-next: manual merge (4) of all lib/*.sh changes between revisions 23afccf56e and e610f00bc7

- plus ooops

atf: fix for `set -e` mode; fix CROSS_COMPILE quoting

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: predict the future location of .img file

- otherwise it's really unhelpful

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

uefi: alias `BRANCH=ddk` to `current`'s `DISTRO_GENERIC_KERNEL=yes`

- no real change, just to match rpi4b's `BRANCH` style
- opens space for Armbian-built `current` soon

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

rpi: `legacy`->`ddk` (distro default kernel), remove overclock

- common vars in bcm2711.conf moved to top
- removed overclock/overvolt that was left over from my old setup
- confirmed: works with rpi3b too, should work with CM4/CM3 and others
- use valid UPPERCASE FAT label for RPICFG (in place of `rpicfg`)

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

armbian-next: shellfmt again after rebase

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

armbian-next: manual merge (3) of all lib/*.sh changes between revisions 1035905760 and e4e4ab0791

- missed a non-lib change from "Several improvements for RPi builds" (#3391)
- I just realized I will have to drop all non-lib changes

rockchip: fixes for `set -e` mode in the rockchip armhf family and bsp tweaks

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

armhf: enable building armhf targets on amd64 using system toolchains

- SKIP_EXTERNAL_TOOLCHAINS=yes on amd64 should use the same system toolchains as an arm64 build
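In practice, "system toolchains" means a distro-packaged cross compiler on PATH; a quick check, assuming Debian/Ubuntu triplet naming (the exact package name, e.g. `gcc-arm-linux-gnueabihf`, is an assumption and varies per distro):

```shell
# Check for the distro cross compiler an armhf-on-amd64 build would use.
CROSS_COMPILE="arm-linux-gnueabihf-"
if command -v "${CROSS_COMPILE}gcc" > /dev/null 2>&1; then
    echo "system armhf toolchain found: $("${CROSS_COMPILE}gcc" -dumpmachine)"
else
    echo "no system armhf toolchain; on Debian/Ubuntu try installing gcc-arm-linux-gnueabihf"
fi
```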

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: better logging about family_tweaks and family_tweaks_bsp

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

kernel: unblock cross compilation, warn about headers package

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: fixes for sunxi/megous stuff with `set -e`

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: fix shellcheck references generation

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: manual merge (2) of all lib/*.sh changes between revisions 117633687e and 3083038855

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: renaming function files a bit more consistently

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: removing leftover empty file after all the moving around

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: really insist on set -e during library loading

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: shellfmt again after rebasing master

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: manual merge of all lib/*.sh changes between revisions f6143eff67 and f3388b9aee

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: generic do_capturing_defs wrapper; Python parser

- enabled by passing CONFIG_DEFS_ONLY=yes; in this case, nothing is actually built
- [WiP] Python3 info reader / matrix expander
  - multithreaded version
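The general idea behind a "capturing defs" wrapper can be sketched like this (hypothetical and simplified; the real `do_capturing_defs` may well capture values too, not just names):

```shell
# Hypothetical sketch: snapshot shell variable names before and after running
# a configuration command, and emit only the names that were newly defined.
do_capturing_defs_sketch() {
    local before after
    before="$(compgen -v | sort)"
    "$@" # run the config code; it defines variables as a side effect
    after="$(compgen -v | sort)"
    comm -13 <(echo "${before}") <(echo "${after}") # names only present after
}

some_config() { BOARD_SKETCH="some-board"; BRANCH_SKETCH="current"; }
do_capturing_defs_sketch some_config
```

Once the defs are dumped as plain text, expanding them into a build matrix is an ordinary parsing job, which is where the Python3 reader comes in.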

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: move some interactive parts of config into their own functions

- mostly from config-prepare;
- there are still a lot of others in main-config

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: use chroot_custom for grub and flash-kernel extension logging

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: use line buffering, fix runner output color for GHA

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: wrap dpkg-deb; set TMPDIR (not in chroot); refactor kernel make

- And a huge amount of @TODO's added
- Add "debug" and "deprecation" `display_alert()` levels
- insist that `install_common` is now `install_distribution_agnostic`
- unrelated: realtek 8822CS is EXTRAWIFI=yes only now, sorry.
- many debug statements for desktop

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: don't bail out on patching error

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: bunch of fixes; no-stdin; traps; better stacks

- mostly no-stdin dialog handling (desktop et al)
- let ERR trap run together with unmount trap (EXIT etc)
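The trap interplay can be shown with a minimal standalone demo (not the build system's actual handlers):

```shell
# Minimal demo: an ERR trap that reports failures, and a separate EXIT trap
# that always runs cleanup (unmounts etc), whether or not an error fired.
set -o errtrace # make the ERR trap fire inside functions, too
set +o errexit  # for this demo only: keep going after the failure
trap 'echo "ERR handler: a command failed (exit $?)" >&2' ERR
trap 'echo "EXIT handler: unmounts/cleanup would run here" >&2' EXIT

might_fail() { false; }
might_fail # without errexit the script continues, but the ERR trap reports it
echo "script reached the end; the EXIT trap fires after this line"
```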

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

logging: trap ERR very early, pass-in caller info

Signed-off-by: Ricardo Pardini <ricardo@pardini.net>

armbian-next: huge refactor, shellfmt, codesplit, logging/error handling

- *this commit changes most/all the lines of bash code in armbian/build*
- *author is set to IgorPec for historical reasons, rpardini is to blame for the actual changes*
- logging: refactorings, pt.4: autogen lib, shellfmt tool, extract cli
  - shellfmt: auto-downloader and runner of shellfmt
    - darwin/linux
    - amd64/arm64
    - find ~correct files to format
    - run formatting
    - check formatting soon
  - refactor compile's CLI stuff out into function/cli
  - generate the library, and use the generated library with the tool
- logging: refactoring pt3: HUGE split of everything into everything else
  - plus rebasing fixes
- logging: refactorings, pt. 2: error handling
  - fix: no errors during umount_chroot()
  - no progress for CI=true builds
  - WiP disable kernel hashing. too crazy.
  - a few builds now working with "set -e"
  - wtf. or something
  - kernel logging and long_running stuff - a mess - needs moving around in files
  - rewrite uboot compile loop without using subshells. remove ALL traps. refactor host command
  - better logging for u-boot
  - more fixes, u-boot
  - more fixes for logging et al
  - git stuff
  - many (many) fixes
  - new color scheme
  - a monster. make sure unmounted at the end. remove set -e's, to-be-readded.
  - remove set -e "for safety reasons"
  - more alerts. we gotta know what's failing
  - some more logging stuff and fixes for error checking
  - more logging and error handling stuff
  - fixes; some set -e's
  - more logging stuff
- logging: refactoring codebase, pt.1: functions-only
  - Refactor the hell out of everything into functions
  - rename build-all-ng to build-multi; other fixes, extensions init
  - slight nudge
  - some were already good, like this one.
  - syntax fixes
  - some need a little nudge
  - another clean one
  - some just need a better name (and splitting later)
  - syntax fixes
  - some were already good, like this desktop one
  - some were already good, like this other one
  - some were already good, like this one.
  - debootstrap is gone.
  - extract functions from compile.sh
  - add logging to main_default_build
  - more stuff
  - cleanups and refactors of main.sh
- logging: first steps
- logging: pt. 0: shellfmt everything

- add riscv64 to the SRC_ARCH/ARCH/ARCHITECTURE mess; add warning
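For context, those names track different consumers of the "same" architecture; a hypothetical illustration of the kind of mapping involved (the build system's real variable handling differs):

```shell
# Illustrative only: one Debian arch maps to a different kernel source arch.
# riscv64 is Debian's name; the kernel tree keeps it under arch/riscv.
declare -A KERNEL_SRC_ARCH=([armhf]="arm" [arm64]="arm64" [amd64]="x86" [riscv64]="riscv")
for deb_arch in armhf arm64 amd64 riscv64; do
    echo "Debian arch ${deb_arch} -> kernel source arch ${KERNEL_SRC_ARCH[${deb_arch}]}"
done
```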
Commit b25b3cf499 (parent ad21c12c2b), authored by Ricardo Pardini, 2022-10-09 12:37:11 +02:00.
60 changed files with 4990 additions and 2588 deletions.


@@ -13,7 +13,15 @@
 # use configuration files like config-default.conf to set the build configuration
 # check Armbian documentation https://docs.armbian.com/ for more info
+#set -o pipefail # trace ERR through pipes - will be enabled "soon"
+#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
+set -e
+set -o errtrace # trace ERR through - enabled
+set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
+# Important, go read http://mywiki.wooledge.org/BashFAQ/105 NOW!
 SRC="$(dirname "$(realpath "${BASH_SOURCE[0]}")")"
+cd "${SRC}" || exit
 # check for whitespace in ${SRC} and exit for safety reasons
 grep -q "[[:space:]]" <<< "${SRC}" && {
@@ -21,24 +29,27 @@ grep -q "[[:space:]]" <<< "${SRC}" && {
 	exit 1
 }
-cd "${SRC}" || exit
-if [[ -f "${SRC}"/lib/import-functions.sh ]]; then
-	# Declare this folder as safe
-	if ! grep -q "directory = \*" "$HOME/.gitconfig" 2> /dev/null; then
-		git config --global --add safe.directory "*"
-	fi
-	# shellcheck source=lib/import-functions.sh
-	source "${SRC}"/lib/import-functions.sh
-else
+# Sanity check.
+if [[ ! -f "${SRC}"/lib/single.sh ]]; then
 	echo "Error: missing build directory structure"
 	echo "Please clone the full repository https://github.com/armbian/build/"
 	exit 255
 fi
+# shellcheck source=lib/single.sh
+source "${SRC}"/lib/single.sh
+# initialize logging variables.
+logging_init
+# initialize the traps
+traps_init
+# make sure git considers our build system dir as a safe dir (only if actually building)
+[[ "${CONFIG_DEFS_ONLY}" != "yes" ]] && git_ensure_safe_directory "${SRC}"
+# Execute the main CLI entrypoint.
 cli_entrypoint "$@"
+# Log the last statement of this script for debugging purposes.
+display_alert "Armbian build script exiting" "very last thing" "cleanup"


@@ -6,7 +6,7 @@ declare -A defined_hook_point_functions # keeps a map of hook point fu
 declare -A hook_point_function_trace_sources # keeps a map of hook point functions that were actually called and their source
 declare -A hook_point_function_trace_lines # keeps a map of hook point functions that were actually called and their source
 declare fragment_manager_cleanup_file # this is a file used to cleanup the manager's produced functions, for build_all_ng
-# configuration.
+# configuration. Command line might override this.
 export DEBUG_EXTENSION_CALLS=no # set to yes to log every hook function called to the main build log
 export LOG_ENABLE_EXTENSION=yes # colorful logs with stacktrace when enable_extension is called.
@@ -21,7 +21,7 @@ export LOG_ENABLE_EXTENSION=yes # colorful logs with stacktrace when enable_exte
 # notice: this is not involved in how the hook functions came to be. read below for that.
 call_extension_method() {
 	# First, consume the stdin and write metadata about the call.
-	write_hook_point_metadata "$@" || true
+	write_hook_point_metadata "$@"
 	# @TODO: hack to handle stdin again, possibly with '< /dev/tty'
@@ -37,8 +37,9 @@ call_extension_method() {
 	# Then call the hooks, if they are defined.
 	for hook_name in "$@"; do
 		echo "-- Extension Method being called: ${hook_name}" >> "${EXTENSION_MANAGER_LOG_FILE}"
-		# shellcheck disable=SC2086
-		[[ $(type -t ${hook_name}) == function ]] && { ${hook_name}; }
+		if [[ $(type -t ${hook_name} || true) == function ]]; then
+			${hook_name}
+		fi
 	done
 }
@@ -61,13 +62,16 @@ initialize_extension_manager() {
 	# This marks the manager as initialized, no more extensions are allowed to load after this.
 	export initialize_extension_manager_counter=$((initialize_extension_manager_counter + 1))
-	# Have a unique temporary dir, even if being built concurrently by build_all_ng.
-	export EXTENSION_MANAGER_TMP_DIR="${SRC}/.tmp/.extensions/${LOG_SUBPATH}"
+	# Extensions has its own work/tmp directory, defined by do_main_configuration, with build UUID. We just create it here, unless told not to.
+	display_alert "Initializing EXTENSION_MANAGER_TMP_DIR" "${EXTENSION_MANAGER_TMP_DIR}" "debug"
 	mkdir -p "${EXTENSION_MANAGER_TMP_DIR}"
 	# Log destination.
-	export EXTENSION_MANAGER_LOG_FILE="${EXTENSION_MANAGER_TMP_DIR}/extensions.log"
-	echo -n "" > "${EXTENSION_MANAGER_TMP_DIR}/hook_point_calls.txt"
+	export EXTENSION_MANAGER_LOG_FILE="${LOGDIR}/999.extensions.log"
+	[[ "${WRITE_EXTENSIONS_METADATA:-yes}" == "no" ]] && echo -n "" > "${EXTENSION_MANAGER_TMP_DIR}/hook_point_calls.txt"
+	# Add trap handler to cleanup and not leave garbage behind when exiting.
+	add_cleanup_handler cleanup_handler_extensions
 	# globally initialize the extensions log.
 	echo "-- lib/extensions.sh included. logs will be below, followed by the debug generated by the initialize_extension_manager() function." > "${EXTENSION_MANAGER_LOG_FILE}"
@@ -87,7 +91,7 @@ initialize_extension_manager() {
 	declare -i hook_points_counter=0 hook_functions_counter=0 hook_point_functions_counter=0
 	# initialize the cleanups file.
-	fragment_manager_cleanup_file="${SRC}"/.tmp/extension_function_cleanup.sh
+	fragment_manager_cleanup_file="${EXTENSION_MANAGER_TMP_DIR}/extension_function_cleanup.sh"
 	echo "# cleanups: " > "${fragment_manager_cleanup_file}"
 	local FUNCTION_SORT_OPTIONS="--general-numeric-sort --ignore-case" # --random-sort could be used to introduce chaos
@@ -101,7 +105,7 @@ initialize_extension_manager() {
 	# for now, just warn, but we could devise a way to actually integrate it in the call list.
 	# or: advise the user to rename their user_config() function to something like user_config__make_it_awesome()
 	local existing_hook_point_function
-	existing_hook_point_function="$(compgen -A function | grep "^${hook_point}\$")"
+	existing_hook_point_function="$(compgen -A function | grep "^${hook_point}\$" || true)"
 	if [[ "${existing_hook_point_function}" == "${hook_point}" ]]; then
 		echo "--- hook_point_functions (final sorted realnames): ${hook_point_functions}" >> "${EXTENSION_MANAGER_LOG_FILE}"
 		display_alert "Extension conflict" "function ${hook_point} already defined! ignoring functions: $(compgen -A function | grep "^${hook_point}${hook_extension_delimiter}")" "wrn"
@@ -205,7 +209,6 @@ initialize_extension_manager() {
 	# output the call, passing arguments, and also logging the output to the extensions log.
 	# attention: don't pipe here (eg, capture output), otherwise hook function cant modify the environment (which is mostly the point)
-	# @TODO: better error handling. we have a good opportunity to 'set -e' here, and 'set +e' after, so that extension authors are encouraged to write error-free handling code
 	cat <<- FUNCTION_DEFINITION_CALLSITE >> "${temp_source_file_for_hook_point}"
 		hook_point_function_trace_sources["${hook_point}${hook_extension_delimiter}${hook_point_function}"]="\${BASH_SOURCE[*]}"
 		hook_point_function_trace_lines["${hook_point}${hook_extension_delimiter}${hook_point_function}"]="\${BASH_LINENO[*]}"
@@ -246,20 +249,38 @@ initialize_extension_manager() {
 	# Dont show any output until we have more than 1 hook function (we implement one already, below)
 	[[ ${hook_functions_counter} -gt 0 ]] &&
 		display_alert "Extension manager" "processed ${hook_points_counter} Extension Methods calls and ${hook_functions_counter} Extension Method implementations" "info" | tee -a "${EXTENSION_MANAGER_LOG_FILE}"
+	return 0 # exit with success, short-circuit above.
 }
 cleanup_extension_manager() {
 	if [[ -f "${fragment_manager_cleanup_file}" ]]; then
 		display_alert "Cleaning up" "extension manager" "info"
-		# this will unset all the functions.
 		# shellcheck disable=SC1090 # dynamic source, thanks, shellcheck
-		source "${fragment_manager_cleanup_file}"
+		source "${fragment_manager_cleanup_file}" # this will unset all the functions.
+		rm -f "${fragment_manager_cleanup_file}" # also remove the file.
+		unset fragment_manager_cleanup_file # and unset the var.
 	fi
 	# reset/unset the variables used
 	initialize_extension_manager_counter=0
 	unset extension_function_info defined_hook_point_functions hook_point_function_trace_sources hook_point_function_trace_lines fragment_manager_cleanup_file
 }
+function cleanup_handler_extensions() {
+	display_alert "yeah the extensions trap handler..." "cleanup_handler_extensions" "cleanup"
+	cleanup_extension_manager
+	# Stop logging.
+	unset EXTENSION_MANAGER_LOG_FILE
+	# cleanup our tmpdir.
+	if [[ -d "${EXTENSION_MANAGER_TMP_DIR}" ]]; then
+		rm -rf "${EXTENSION_MANAGER_TMP_DIR}"
+	fi
+	unset EXTENSION_MANAGER_TMP_DIR
+}
 # why not eat our own dog food?
 # process everything that happened during extension related activities
 # and write it to the log. also, move the log from the .tmp dir to its
@@ -269,32 +290,28 @@ run_after_build__999_finish_extension_manager() {
 	export defined_hook_point_functions hook_point_function_trace_sources
 	# eat our own dog food, pt2.
-	call_extension_method "extension_metadata_ready" << 'EXTENSION_METADATA_READY'
+	call_extension_method "extension_metadata_ready" <<- 'EXTENSION_METADATA_READY'
 		*meta-Meta time!*
 		Implement this hook to work with/on the meta-data made available by the extension manager.
 		Interesting stuff to process:
 		- `"${EXTENSION_MANAGER_TMP_DIR}/hook_point_calls.txt"` contains a list of all hook points called, in order.
 		- For each hook_point in the list, more files will have metadata about that hook point.
 		- `${EXTENSION_MANAGER_TMP_DIR}/hook_point.orig.md` contains the hook documentation at the call site (inline docs), hopefully in Markdown format.
 		- `${EXTENSION_MANAGER_TMP_DIR}/hook_point.compat` contains the compatibility names for the hooks.
 		- `${EXTENSION_MANAGER_TMP_DIR}/hook_point.exports` contains _exported_ environment variables.
 		- `${EXTENSION_MANAGER_TMP_DIR}/hook_point.vars` contains _all_ environment variables.
 		- `${defined_hook_point_functions}` is a map of _all_ the defined hook point functions and their extension information.
 		- `${hook_point_function_trace_sources}` is a map of all the hook point functions _that were really called during the build_ and their BASH_SOURCE information.
 		- `${hook_point_function_trace_lines}` is the same, but BASH_LINENO info.
 		After this hook is done, the `${EXTENSION_MANAGER_TMP_DIR}` will be removed.
 	EXTENSION_METADATA_READY
-	# Move temporary log file over to final destination, and start writing to it instead (although 999 is pretty late in the game)
-	mv "${EXTENSION_MANAGER_LOG_FILE}" "${DEST}/${LOG_SUBPATH:-debug}/extensions.log"
-	export EXTENSION_MANAGER_LOG_FILE="${DEST}/${LOG_SUBPATH:-debug}/extensions.log"
-	# Cleanup. Leave no trace...
-	[[ -d "${EXTENSION_MANAGER_TMP_DIR}" ]] && rm -rf "${EXTENSION_MANAGER_TMP_DIR}"
 }
 # This is called by call_extension_method(). To say the truth, this should be in an extension. But then it gets too meta for anyone's head.
 write_hook_point_metadata() {
+	# Dont do anything if told not to.
+	[[ "${WRITE_EXTENSIONS_METADATA:-yes}" == "no" ]] && return 0
 	local main_hook_point_name="$1"
 	[[ ! -d "${EXTENSION_MANAGER_TMP_DIR}" ]] && mkdir -p "${EXTENSION_MANAGER_TMP_DIR}"
@@ -308,38 +325,6 @@ write_hook_point_metadata() {
 	echo "${main_hook_point_name}" >> "${EXTENSION_MANAGER_TMP_DIR}/hook_point_calls.txt"
 }
-# Helper function, to get clean "stack traces" that do not include the hook/extension infrastructure code.
-get_extension_hook_stracktrace() {
-	local sources_str="$1" # Give this ${BASH_SOURCE[*]} - expanded
-	local lines_str="$2" # And this # Give this ${BASH_LINENO[*]} - expanded
-	local sources lines index final_stack=""
-	IFS=' ' read -r -a sources <<< "${sources_str}"
-	IFS=' ' read -r -a lines <<< "${lines_str}"
-	for index in "${!sources[@]}"; do
-		local source="${sources[index]}" line="${lines[((index - 1))]}"
-		# skip extension infrastructure sources, these only pollute the trace and add no insight to users
-		[[ ${source} == */.tmp/extension_function_definition.sh ]] && continue
-		[[ ${source} == *lib/extensions.sh ]] && continue
-		[[ ${source} == */compile.sh ]] && continue
-		# relativize the source, otherwise too long to display
-		source="${source#"${SRC}/"}"
-		# remove 'lib/'. hope this is not too confusing.
-		source="${source#"lib/"}"
-		# add to the list
-		arrow="$([[ "$final_stack" != "" ]] && echo "-> ")"
-		final_stack="${source}:${line} ${arrow} ${final_stack} "
-	done
-	# output the result, no newline
-	# shellcheck disable=SC2086 # I wanna suppress double spacing, thanks
-	echo -n $final_stack
-}
-show_caller_full() {
-	local frame=0
-	while caller $frame; do
-		((frame++))
-	done
-}
 # can be called by board, family, config or user to make sure an extension is included.
 # single argument is the extension name.
 # will look for it in /userpatches/extensions first.
@@ -352,15 +337,19 @@ enable_extension() {
 	local extension_dir extension_file extension_file_in_dir extension_floating_file
 	local stacktrace
-	# capture the stack leading to this, possibly with a hint in front.
-	stacktrace="${ENABLE_EXTENSION_TRACE_HINT}$(get_extension_hook_stracktrace "${BASH_SOURCE[*]}" "${BASH_LINENO[*]}")"
 	# if LOG_ENABLE_EXTENSION, output useful stack, so user can figure out which extensions are being added where
-	[[ "${LOG_ENABLE_EXTENSION}" == "yes" ]] &&
-		display_alert "Extension being added" "${extension_name} :: added by ${stacktrace}" ""
+	if [[ "${LOG_ENABLE_EXTENSION}" == "yes" ]]; then
+		if [[ "${SHOW_DEBUG}" == "yes" ]]; then
+			stacktrace="${ENABLE_EXTENSION_TRACE_HINT}$(get_extension_hook_stracktrace "${BASH_SOURCE[*]}" "${BASH_LINENO[*]}")"
+			display_alert "Enabling extension" "${extension_name} :: added by ${stacktrace}" "debug"
+		else
+			display_alert "Enabling extension" "${extension_name}" ""
+		fi
+	fi
 	# first a check, has the extension manager already initialized? then it is too late to enable_extension(). bail.
 	if [[ ${initialize_extension_manager_counter} -gt 0 ]]; then
+		stacktrace="${ENABLE_EXTENSION_TRACE_HINT}$(get_extension_hook_stracktrace "${BASH_SOURCE[*]}" "${BASH_LINENO[*]}")"
 		display_alert "Extension problem" "already initialized -- too late to add '${extension_name}' (trace: ${stacktrace})" "err"
 		exit 2
 	fi
@@ -401,29 +390,12 @@ enable_extension() {
 	# store a list of existing functions at this point, before sourcing the extension.
 	before_function_list="$(compgen -A function)"
-	# error handling during a 'source' call is quite insane in bash after 4.3.
-	# to be able to catch errors in sourced scripts the only way is to trap
-	declare -i extension_source_generated_error=0
-	trap 'extension_source_generated_error=1;' ERR
-	# source the file. extensions are not supposed to do anything except export variables and define functions, so nothing should happen here.
-	# there is no way to enforce it though, short of static analysis.
-	# we could punish the extension authors who violate it by removing some essential variables temporarily from the environment during this source, and restore them later.
 	# shellcheck disable=SC1090
 	source "${extension_file}"
-	# remove the trap we set.
-	trap - ERR
 	# decrement the recurse counter, so calls to this method are allowed again.
 	enable_extension_recurse_counter=$((enable_extension_recurse_counter - 1))
-	# test if it fell into the trap, and abort immediately with an error.
-	if [[ $extension_source_generated_error != 0 ]]; then
-		display_alert "Extension failed to load" "${extension_file}" "err"
-		exit 4
-	fi
 	# get a new list of functions after sourcing the extension
 	after_function_list="$(compgen -A function)"
@@ -433,6 +405,7 @@ enable_extension() {
 	# iterate over defined functions, store them in global associative array extension_function_info
 	for newly_defined_function in ${new_function_list}; do
+		#echo "func: ${newly_defined_function} has DIR: ${extension_dir}"
 		extension_function_info["${newly_defined_function}"]="EXTENSION=\"${extension_name}\" EXTENSION_DIR=\"${extension_dir}\" EXTENSION_FILE=\"${extension_file}\" EXTENSION_ADDED_BY=\"${stacktrace}\""
 	done
@@ -446,3 +419,8 @@ enable_extension() {
 	done
 }
+# Fancy placeholder for future ideas. allow any core function to be hooked. maybe with "voters" infrastructure?
+function do_with_hooks() {
+	"$@"
+}


@@ -2,9 +2,9 @@
 create_board_package() {
 	display_alert "Creating board support package for CLI" "$CHOSEN_ROOTFS" "info"
-	bsptempdir=$(mktemp -d)
+	bsptempdir=$(mktemp -d) # subject to TMPDIR/WORKDIR, so is protected by single/common error trap manager to clean up.
 	chmod 700 ${bsptempdir}
-	trap "ret=\$?; rm -rf \"${bsptempdir}\" ; exit \$ret" 0 1 2 3 15
 	local destination=${bsptempdir}/${BSP_CLI_PACKAGE_FULLNAME}
 	mkdir -p "${destination}"/DEBIAN
 	cd $destination
@@ -19,25 +19,33 @@ create_board_package() {
 	local bootscript_dst=${BOOTSCRIPT##*:}
 	mkdir -p "${destination}"/usr/share/armbian/
-	if [ -f "${USERPATCHES_PATH}/bootscripts/${bootscript_src}" ]; then
-		cp "${USERPATCHES_PATH}/bootscripts/${bootscript_src}" "${destination}/usr/share/armbian/${bootscript_dst}"
+	display_alert "BOOTSCRIPT" "${BOOTSCRIPT}" "debug"
+	display_alert "bootscript_src" "${bootscript_src}" "debug"
+	display_alert "bootscript_dst" "${bootscript_dst}" "debug"
+	# if not using extlinux, copy armbianEnv from template; prefer userpatches source
+	if [[ $SRC_EXTLINUX != yes ]]; then
+		if [ -f "${USERPATCHES_PATH}/bootscripts/${bootscript_src}" ]; then
+			run_host_command_logged cp -pv "${USERPATCHES_PATH}/bootscripts/${bootscript_src}" "${destination}/usr/share/armbian/${bootscript_dst}"
+		else
+			run_host_command_logged cp -pv "${SRC}/config/bootscripts/${bootscript_src}" "${destination}/usr/share/armbian/${bootscript_dst}"
+		fi
+		if [[ -n $BOOTENV_FILE && -f $SRC/config/bootenv/$BOOTENV_FILE ]]; then
+			run_host_command_logged cp -pv "${SRC}/config/bootenv/${BOOTENV_FILE}" "${destination}"/usr/share/armbian/armbianEnv.txt
+		fi
 	else
-		cp "${SRC}/config/bootscripts/${bootscript_src}" "${destination}/usr/share/armbian/${bootscript_dst}"
+		display_alert "Using extlinux, regular bootscripts ignored" "SRC_EXTLINUX=${SRC_EXTLINUX}" "warn"
 	fi
-	if [[ -n $BOOTENV_FILE && -f $SRC/config/bootenv/$BOOTENV_FILE ]]; then
-		cp "${SRC}/config/bootenv/${BOOTENV_FILE}" "${destination}"/usr/share/armbian/armbianEnv.txt
-	fi
 	# add configuration for setting uboot environment from userspace with: fw_setenv fw_printenv
 	if [[ -n $UBOOT_FW_ENV ]]; then
 		UBOOT_FW_ENV=($(tr ',' ' ' <<< "$UBOOT_FW_ENV"))
 		mkdir -p "${destination}"/etc
 		echo "# Device to access offset env size" > "${destination}"/etc/fw_env.config
 		echo "/dev/mmcblk0 ${UBOOT_FW_ENV[0]} ${UBOOT_FW_ENV[1]}" >> "${destination}"/etc/fw_env.config
 	fi
 	# Replaces: base-files is needed to replace /etc/update-motd.d/ files on Xenial
 	# Replaces: unattended-upgrades may be needed to replace /etc/apt/apt.conf.d/50unattended-upgrades
 	# (distributions provide good defaults, so this is not needed currently)
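The `UBOOT_FW_ENV` handling in the hunk above turns a comma-separated `offset,size` pair into a bash array and emits `/etc/fw_env.config` for `fw_setenv`/`fw_printenv`. The parsing can be exercised standalone; the offset/size values below are made-up examples, not any board's real configuration:

```bash
#!/usr/bin/env bash
# UBOOT_FW_ENV is "offset,size" as configured per board; example values only.
UBOOT_FW_ENV="0x88000,0x2000"

# split the comma-separated pair into a bash array, same trick as the BSP code
UBOOT_FW_ENV=($(tr ',' ' ' <<< "$UBOOT_FW_ENV"))

destination=$(mktemp -d)
mkdir -p "${destination}/etc"
echo "# Device to access offset env size" > "${destination}/etc/fw_env.config"
echo "/dev/mmcblk0 ${UBOOT_FW_ENV[0]} ${UBOOT_FW_ENV[1]}" >> "${destination}/etc/fw_env.config"

FW_ENV_LINE=$(tail -n1 "${destination}/etc/fw_env.config")
rm -rf "${destination}"
```

Note the unquoted array assignment relies on word splitting, which is fine here because offsets and sizes never contain globs or spaces.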
@@ -174,7 +182,7 @@ create_board_package() {
 	cat <<- EOF >> "${destination}"/DEBIAN/postinst
 	if [ true ]; then
 		# this package recreate boot scripts
 	EOF
 	else
 	cat <<- EOF >> "${destination}"/DEBIAN/postinst
@@ -233,7 +241,7 @@ create_board_package() {
 	mv /usr/lib/chromium-browser/master_preferences.dpkg-dist /usr/lib/chromium-browser/master_preferences
 	fi
 	# Read release value
 	if [ -f /etc/lsb-release ]; then
 		RELEASE=\$(cat /etc/lsb-release | grep CODENAME | cut -d"=" -f2 | sed 's/.*/\u&/')
 		sed -i "s/^PRETTY_NAME=.*/PRETTY_NAME=\"${VENDOR} $REVISION "\${RELEASE}"\"/" /etc/os-release
@@ -257,7 +265,7 @@ create_board_package() {
 	EOF
 	# copy common files from a premade directory structure
-	rsync -a ${SRC}/packages/bsp/common/* ${destination}
+	run_host_command_logged rsync -av ${SRC}/packages/bsp/common/* ${destination}
 	# trigger uInitrd creation after installation, to apply
 	# /etc/initramfs/post-update.d/99-uboot
@@ -292,12 +300,16 @@ create_board_package() {
 	sed -i 's/#no-auto-down/no-auto-down/g' "${destination}"/etc/network/interfaces.default
 	# execute $LINUXFAMILY-specific tweaks
-	[[ $(type -t family_tweaks_bsp) == function ]] && family_tweaks_bsp
+	if [[ $(type -t family_tweaks_bsp) == function ]]; then
+		display_alert "Running family_tweaks_bsp" "${LINUXFAMILY} - ${BOARDFAMILY}" "debug"
+		family_tweaks_bsp
+		display_alert "Done with family_tweaks_bsp" "${LINUXFAMILY} - ${BOARDFAMILY}" "debug"
+	fi
-	call_extension_method "post_family_tweaks_bsp" << 'POST_FAMILY_TWEAKS_BSP'
+	call_extension_method "post_family_tweaks_bsp" <<- 'POST_FAMILY_TWEAKS_BSP'
 	*family_tweaks_bsp overrides what is in the config, so give it a chance to override the family tweaks*
 	This should be implemented by the config to tweak the BSP, after the board or family has had the chance to.
 	POST_FAMILY_TWEAKS_BSP
 	# add some summary to the image
 	fingerprint_image "${destination}/etc/armbian.txt"
@@ -307,10 +319,9 @@ POST_FAMILY_TWEAKS_BSP
 	find "${destination}" ! -type l -print0 2> /dev/null | xargs -0r chmod 'go=rX,u+rw,a-s'
 	# create board DEB file
-	fakeroot dpkg-deb -b -Z${DEB_COMPRESS} "${destination}" "${destination}.deb" >> "${DEST}"/${LOG_SUBPATH}/output.log 2>&1
+	fakeroot_dpkg_deb_build "${destination}" "${destination}.deb"
 	mkdir -p "${DEB_STORAGE}/"
-	rsync --remove-source-files -rq "${destination}.deb" "${DEB_STORAGE}/"
-	# cleanup
-	rm -rf ${bsptempdir}
+	run_host_command_logged rsync --remove-source-files -r "${destination}.deb" "${DEB_STORAGE}/"
+	display_alert "Done building BSP CLI package" "${destination}" "debug"
 }
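Throughout this commit, explicit `trap … rm -rf` cleanups are dropped in favor of plain `mktemp -d` plus the recurring comment that the result is "subject to TMPDIR/WORKDIR". That works because `mktemp` honors the `TMPDIR` environment variable, so once the per-build WORKDIR is exported as TMPDIR, one sweep of WORKDIR reaps every temp dir. A minimal demonstration of that mechanism:

```bash
#!/usr/bin/env bash
# Demonstrate that mktemp -d honors TMPDIR, which is what lets a single
# cleanup of the per-build WORKDIR reap all nested temp dirs at once.
workdir=$(mktemp -d) # stand-in for the per-build WORKDIR
tmpdir=$(TMPDIR="$workdir" mktemp -d)

# the nested temp dir must have been created inside $workdir
case "$tmpdir" in
	"$workdir"/*) INSIDE=yes ;;
	*) INSIDE=no ;;
esac

rm -rf "$workdir" # one sweep removes the nested temp dir too
```

This is why the removed per-function traps became redundant: the "single/common error trap manager" only has to remove one directory tree.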


@@ -1,9 +1,7 @@
 #!/usr/bin/env bash
 create_desktop_package() {
-	echo "Showing PACKAGE_LIST_DESKTOP before postprocessing" >> "${DEST}"/${LOG_SUBPATH}/output.log
-	# Use quotes to show leading and trailing spaces
-	echo "\"$PACKAGE_LIST_DESKTOP\"" >> "${DEST}"/${LOG_SUBPATH}/output.log
+	display_alert "bsp-desktop: PACKAGE_LIST_DESKTOP" "'${PACKAGE_LIST_DESKTOP}'" "debug"
 	# Remove leading and trailing spaces with some bash monstrosity
 	# https://stackoverflow.com/questions/369758/how-to-trim-whitespace-from-a-bash-variable#12973694
@@ -14,7 +12,7 @@ create_desktop_package() {
 	# Remove other 'spacing characters' (like tabs)
 	DEBIAN_RECOMMENDS=${DEBIAN_RECOMMENDS//[[:space:]]/}
-	echo "DEBIAN_RECOMMENDS : ${DEBIAN_RECOMMENDS}" >> "${DEST}"/${LOG_SUBPATH}/output.log
+	display_alert "bsp-desktop: DEBIAN_RECOMMENDS" "'${DEBIAN_RECOMMENDS}'" "debug"
 	# Replace whitespace characters by commas
 	PACKAGE_LIST_PREDEPENDS=${PACKAGE_LIST_PREDEPENDS// /,}
@@ -22,12 +20,12 @@ create_desktop_package() {
 	PACKAGE_LIST_PREDEPENDS=${PACKAGE_LIST_PREDEPENDS//[[:space:]]/}
 	local destination tmp_dir
-	tmp_dir=$(mktemp -d)
+	tmp_dir=$(mktemp -d) # subject to TMPDIR/WORKDIR, so is protected by single/common error trap manager to clean up.
 	destination=${tmp_dir}/${BOARD}/${CHOSEN_DESKTOP}_${REVISION}_all
 	rm -rf "${destination}"
 	mkdir -p "${destination}"/DEBIAN
-	echo "${PACKAGE_LIST_PREDEPENDS}" >> "${DEST}"/${LOG_SUBPATH}/output.log
+	display_alert "bsp-desktop: PACKAGE_LIST_PREDEPENDS" "'${PACKAGE_LIST_PREDEPENDS}'" "debug"
 	# set up control file
 	cat <<- EOF > "${destination}"/DEBIAN/control
@@ -56,9 +54,6 @@ create_desktop_package() {
 	chmod 755 "${destination}"/DEBIAN/postinst
-	#display_alert "Showing ${destination}/DEBIAN/postinst"
-	cat "${destination}/DEBIAN/postinst" >> "${DEST}"/${LOG_SUBPATH}/install.log
 	# Armbian create_desktop_package scripts
 	unset aggregated_content
@@ -75,10 +70,7 @@ create_desktop_package() {
 	mkdir -p "${DEB_STORAGE}/${RELEASE}"
 	cd "${destination}"
 	cd ..
-	fakeroot dpkg-deb -b -Z${DEB_COMPRESS} "${destination}" "${DEB_STORAGE}/${RELEASE}/${CHOSEN_DESKTOP}_${REVISION}_all.deb" > /dev/null
-	# cleanup
-	rm -rf "${tmp_dir}"
+	fakeroot_dpkg_deb_build "${destination}" "${DEB_STORAGE}/${RELEASE}/${CHOSEN_DESKTOP}_${REVISION}_all.deb"
 	unset aggregated_content
@@ -91,7 +83,7 @@ create_bsp_desktop_package() {
 	local package_name="${BSP_DESKTOP_PACKAGE_FULLNAME}"
 	local destination tmp_dir
-	tmp_dir=$(mktemp -d)
+	tmp_dir=$(mktemp -d) # subject to TMPDIR/WORKDIR, so is protected by single/common error trap manager to clean up.
 	destination=${tmp_dir}/${BOARD}/${BSP_DESKTOP_PACKAGE_FULLNAME}
 	rm -rf "${destination}"
 	mkdir -p "${destination}"/DEBIAN
@@ -132,15 +124,12 @@ create_bsp_desktop_package() {
 	local aggregated_content=""
 	aggregate_all_desktop "debian/armbian-bsp-desktop/prepare.sh" $'\n'
 	eval "${aggregated_content}"
-	[[ $? -ne 0 ]] && display_alert "prepare.sh exec error" "" "wrn"
+	[[ $? -ne 0 ]] && display_alert "prepare.sh exec error" "" "wrn" # @TODO: this is a fantasy, error would be thrown in line above
 	mkdir -p "${DEB_STORAGE}/${RELEASE}"
 	cd "${destination}"
 	cd ..
-	fakeroot dpkg-deb -b -Z${DEB_COMPRESS} "${destination}" "${DEB_STORAGE}/${RELEASE}/${package_name}.deb" > /dev/null
-	# cleanup
-	rm -rf "${tmp_dir}"
+	fakeroot_dpkg_deb_build "${destination}" "${DEB_STORAGE}/${RELEASE}/${package_name}.deb"
 	unset aggregated_content
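The desktop-package code above massages space-separated package lists into the comma-separated form Debian control files want, using only parameter expansion. The comma-join step in isolation (package names below are arbitrary examples):

```bash
#!/usr/bin/env bash
# Comma-join a space-separated dependency list, as the control-file code does.
PACKAGE_LIST_PREDEPENDS="vlc mpv gimp"

PACKAGE_LIST_PREDEPENDS=${PACKAGE_LIST_PREDEPENDS// /,}          # spaces -> commas
PACKAGE_LIST_PREDEPENDS=${PACKAGE_LIST_PREDEPENDS//[[:space:]]/} # drop any stray tabs/newlines
```

The second expansion is why the list must be trimmed first: a leading space would otherwise become a leading comma, which the `[[:space:]]` pass cannot fix.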


@@ -1,43 +1,26 @@
 #!/usr/bin/env bash
 function cli_entrypoint() {
+	# array, readonly, global, for future reference, "exported" to shut up shellcheck
+	declare -rg -x -a ARMBIAN_ORIGINAL_ARGV=("${@}")
 	if [[ "${ARMBIAN_ENABLE_CALL_TRACING}" == "yes" ]]; then
 		set -T # inherit return/debug traps
-		mkdir -p "${SRC}"/output/debug
-		echo -n "" > "${SRC}"/output/debug/calls.txt
-		trap 'echo "${BASH_LINENO[@]}|${BASH_SOURCE[@]}|${FUNCNAME[@]}" >> ${SRC}/output/debug/calls.txt ;' RETURN
+		mkdir -p "${SRC}"/output/call-traces
+		echo -n "" > "${SRC}"/output/call-traces/calls.txt
+		trap 'echo "${BASH_LINENO[@]}|${BASH_SOURCE[@]}|${FUNCNAME[@]}" >> ${SRC}/output/call-traces/calls.txt ;' RETURN
 	fi
-	check_args "$@"
-	do_update_src
 	if [[ "${EUID}" == "0" ]] || [[ "${1}" == "vagrant" ]]; then
 		:
 	elif [[ "${1}" == docker || "${1}" == dockerpurge || "${1}" == docker-shell ]] && grep -q "$(whoami)" <(getent group docker); then
 		:
+	elif [[ "${CONFIG_DEFS_ONLY}" == "yes" ]]; then # this var is set in the ENVIRONMENT, not as parameter.
+		display_alert "No sudo for" "env CONFIG_DEFS_ONLY=yes" "debug" # not really building in this case, just gathering meta-data.
 	else
 		display_alert "This script requires root privileges, trying to use sudo" "" "wrn"
 		sudo "${SRC}/compile.sh" "$@"
+		exit $?
 	fi
-	if [ "$OFFLINE_WORK" == "yes" ]; then
-		echo -e "\n"
-		display_alert "* " "You are working offline."
-		display_alert "* " "Sources, time and host will not be checked"
-		echo -e "\n"
-		sleep 3s
-	else
-		# check and install the basic utilities here
-		prepare_host_basic
-	fi
-	handle_vagrant "$@"
 	# Purge Armbian Docker images
 	if [[ "${1}" == dockerpurge && -f /etc/debian_version ]]; then
 		display_alert "Purging Armbian Docker containers" "" "wrn"
@@ -50,21 +33,20 @@ function cli_entrypoint() {
 	# Docker shell
 	if [[ "${1}" == docker-shell ]]; then
 		shift
+		#shellcheck disable=SC2034
 		SHELL_ONLY=yes
 		set -- "docker" "$@"
 	fi
-	handle_docker "$@"
-	prepare_userpatches
+	handle_docker_vagrant "$@"
+	prepare_userpatches "$@"
 	if [[ -z "${CONFIG}" && -n "$1" && -f "${SRC}/userpatches/config-$1.conf" ]]; then
 		CONFIG="userpatches/config-$1.conf"
 		shift
 	fi
-	# usind default if custom not found
+	# using default if custom not found
 	if [[ -z "${CONFIG}" && -f "${SRC}/userpatches/config-default.conf" ]]; then
 		CONFIG="userpatches/config-default.conf"
 	fi
@@ -79,6 +61,43 @@ function cli_entrypoint() {
 	CONFIG_PATH=$(dirname "${CONFIG_FILE}")
+	# DEST is the main output dir.
+	declare DEST="${SRC}/output"
+	if [ -d "$CONFIG_PATH/output" ]; then
+		DEST="${CONFIG_PATH}/output"
+	fi
+	display_alert "Output directory DEST:" "${DEST}" "debug"
+	# set unique mounting directory for this build.
+	# basic deps, which include "uuidgen", will be installed _after_ this, so we gotta tolerate it not being there yet.
+	declare -g ARMBIAN_BUILD_UUID
+	if [[ -f /usr/bin/uuidgen ]]; then
+		ARMBIAN_BUILD_UUID="$(uuidgen)"
+	else
+		display_alert "uuidgen not found" "uuidgen not installed yet" "info"
+		ARMBIAN_BUILD_UUID="no-uuidgen-yet-${RANDOM}-$((1 + $RANDOM % 10))$((1 + $RANDOM % 10))$((1 + $RANDOM % 10))$((1 + $RANDOM % 10))"
+	fi
+	display_alert "Build UUID:" "${ARMBIAN_BUILD_UUID}" "debug"
+	# Super-global variables, used everywhere. The directories are NOT _created_ here, since this is a very early stage.
+	export WORKDIR="${SRC}/.tmp/work-${ARMBIAN_BUILD_UUID}" # WORKDIR at this stage. It will become TMPDIR later. It has special significance to `mktemp` and others!
+	export SDCARD="${SRC}/.tmp/rootfs-${ARMBIAN_BUILD_UUID}" # SDCARD (which is NOT an sdcard, but will be, maybe, one day) is where we work the rootfs before final imaging. "rootfs" stage.
+	export MOUNT="${SRC}/.tmp/mount-${ARMBIAN_BUILD_UUID}" # MOUNT ("mounted on the loop") is the mounted root on the final image (via loop). "image" stage.
+	export EXTENSION_MANAGER_TMP_DIR="${SRC}/.tmp/extensions-${ARMBIAN_BUILD_UUID}" # EXTENSION_MANAGER_TMP_DIR used to store extension-composed functions
+	export DESTIMG="${SRC}/.tmp/image-${ARMBIAN_BUILD_UUID}" # DESTIMG is where the backing image (raw, huge, sparse file) is kept (not the final destination)
+	export LOGDIR="${SRC}/.tmp/logs-${ARMBIAN_BUILD_UUID}" # Will be initialized very soon, literally, below.
+	LOG_SECTION=entrypoint start_logging_section # This creates LOGDIR.
+	add_cleanup_handler trap_handler_cleanup_logging # cleanup handler for logs; it rolls them up from LOGDIR into DEST/logs
+	if [ "${OFFLINE_WORK}" == "yes" ]; then
+		display_alert "* " "You are working offline!"
+		display_alert "* " "Sources, time and host will not be checked"
+	else
+		# check and install the basic utilities.
+		LOG_SECTION="prepare_host_basic" do_with_logging prepare_host_basic
+	fi
 	# Source the extensions manager library at this point, before sourcing the config.
 	# This allows early calls to enable_extension(), but initialization proper is done later.
 	# shellcheck source=lib/extensions.sh
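The hunk above has to mint a build UUID before basic deps (including `uuidgen`) are installed, hence the `$RANDOM`-based fallback. That shape of logic is easy to test on its own; the function name below is invented for this sketch, and only the `no-uuidgen-yet-` prefix comes from the actual hunk:

```bash
#!/usr/bin/env bash
# Produce a unique-ish build id: prefer uuidgen, fall back to $RANDOM digits.
function generate_build_uuid() {
	if command -v uuidgen > /dev/null; then
		uuidgen
	else
		# weak fallback, only used before basic host deps are installed
		echo "no-uuidgen-yet-${RANDOM}-$((1 + RANDOM % 10))$((1 + RANDOM % 10))$((1 + RANDOM % 10))$((1 + RANDOM % 10))"
	fi
}

BUILD_UUID=$(generate_build_uuid)
```

Either branch yields a single token safe to embed in the `.tmp/work-…`, `rootfs-…`, `mount-…` path names the hunk derives from it.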
@@ -94,20 +113,40 @@ function cli_entrypoint() {
 	# Script parameters handling
 	while [[ "${1}" == *=* ]]; do
 		parameter=${1%%=*}
 		value=${1##*=}
 		shift
 		display_alert "Command line: setting $parameter to" "${value:-(empty)}" "info"
 		eval "$parameter=\"$value\""
 	done
-	prepare_and_config_main_build_single
-	if [[ -z $1 ]]; then
-		build_main
-	else
-		eval "$@"
-	fi
+	##
+	## Main entrypoint.
+	##
+	# reset completely after sourcing config file
+	#set -o pipefail # trace ERR through pipes - will be enabled "soon"
+	#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
+	set -o errtrace # trace ERR through - enabled
+	set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
+	# configuration etc - it initializes the extension manager.
+	do_capturing_defs prepare_and_config_main_build_single # this sets CAPTURED_VARS
+	if [[ "${CONFIG_DEFS_ONLY}" == "yes" ]]; then
+		echo "${CAPTURED_VARS}" # to stdout!
+	else
+		unset CAPTURED_VARS
+		# Allow for custom user-invoked functions, or do the default build.
+		if [[ -z $1 ]]; then
+			main_default_build_single
+		else
+			# @TODO: rpardini: check this with extensions usage?
+			eval "$@"
+		fi
+	fi
+	# Build done, run the cleanup handlers explicitly.
+	# This zeroes out the list of cleanups, so it's not done again when the main script exits normally and trap = 0 runs.
+	run_cleanup_handlers
 }
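The `while [[ "${1}" == *=* ]]` loop above is the entire CLI `PARAM=value` mechanism, and the `eval` is why those values must be trusted input. A self-contained sketch of the same pattern (the function and the `REMAINING` array are inventions of this example; note the original uses `${1##*=}`, which keeps only the text after the *last* `=`, whereas this sketch uses `${1#*=}` to preserve values containing `=`):

```bash
#!/usr/bin/env bash
# Parse leading PARAM=value arguments, as cli_entrypoint does, leaving the rest.
function parse_params() {
	while [[ "${1}" == *=* ]]; do
		local parameter=${1%%=*} # text before the first '='
		local value=${1#*=}      # text after the first '=' (keeps embedded '=')
		shift
		eval "$parameter=\"$value\"" # trusted input only: this executes as code
	done
	REMAINING=("$@") # whatever is left is the command/config selector
}

parse_params BOARD=uefi-x86 BRANCH=edge build-only
```

Because parsing stops at the first non-`KEY=value` token, `PARAM=value` pairs must come before the command name, which matches how `compile.sh` is documented to be invoked.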


@@ -1,98 +1,17 @@
 #!/usr/bin/env bash
-# Add the variables needed at the beginning of the path
-check_args() {
-	for p in "$@"; do
-		case "${p%=*}" in
-			LIB_TAG)
-				# Take a variable if the branch exists locally
-				if [ "${p#*=}" == "$(git branch |
-					gawk -v b="${p#*=}" '{if ( $NF == b ) {print $NF}}')" ]; then
-					echo -e "[\e[0;35m warn \x1B[0m] Setting $p"
-					eval "$p"
-				else
-					echo -e "[\e[0;35m warn \x1B[0m] Skip $p setting as LIB_TAG=\"\""
-					eval LIB_TAG=""
-				fi
-				;;
-		esac
-	done
-}
-update_src() {
-	cd "${SRC}" || exit
-	if [[ ! -f "${SRC}"/.ignore_changes ]]; then
-		echo -e "[\e[0;32m o.k. \x1B[0m] This script will try to update"
-		CHANGED_FILES=$(git diff --name-only)
-		if [[ -n "${CHANGED_FILES}" ]]; then
-			echo -e "[\e[0;35m warn \x1B[0m] Can't update since you made changes to: \e[0;32m\n${CHANGED_FILES}\x1B[0m"
-			while true; do
-				echo -e "Press \e[0;33m<Ctrl-C>\x1B[0m or \e[0;33mexit\x1B[0m to abort compilation" \
-					", \e[0;33m<Enter>\x1B[0m to ignore and continue, \e[0;33mdiff\x1B[0m to display changes"
-				read -r
-				if [[ "${REPLY}" == "diff" ]]; then
-					git diff
-				elif [[ "${REPLY}" == "exit" ]]; then
-					exit 1
-				elif [[ "${REPLY}" == "" ]]; then
-					break
-				else
-					echo "Unknown command!"
-				fi
-			done
-		elif [[ $(git branch | grep "*" | awk '{print $2}') != "${LIB_TAG}" && -n "${LIB_TAG}" ]]; then
-			git checkout "${LIB_TAG:-master}"
-			git pull
-		fi
-	fi
-}
-function do_update_src() {
-	TMPFILE=$(mktemp)
-	chmod 644 "${TMPFILE}"
-	{
-		echo SRC="$SRC"
-		echo LIB_TAG="$LIB_TAG"
-		declare -f update_src
-		echo "update_src"
-	} > "$TMPFILE"
-	#do not update/checkout git with root privileges to messup files onwership.
-	#due to in docker/VM, we can't su to a normal user, so do not update/checkout git.
-	if [[ $(systemd-detect-virt) == 'none' ]]; then
-		if [[ "${EUID}" == "0" ]]; then
-			su "$(stat --format=%U "${SRC}"/.git)" -c "bash ${TMPFILE}"
-		else
-			bash "${TMPFILE}"
-		fi
-	fi
-	rm "${TMPFILE}"
-}
-function handle_vagrant() {
+# Misc functions from compile.sh
+function handle_docker_vagrant() {
 	# Check for Vagrant
 	if [[ "${1}" == vagrant && -z "$(command -v vagrant)" ]]; then
 		display_alert "Vagrant not installed." "Installing"
 		sudo apt-get update
 		sudo apt-get install -y vagrant virtualbox
 	fi
-}
-function handle_docker() {
 	# Install Docker if not there but wanted. We cover only Debian based distro install. On other distros, manual Docker install is needed
 	if [[ "${1}" == docker && -f /etc/debian_version && -z "$(command -v docker)" ]]; then
 		DOCKER_BINARY="docker-ce"
 		# add exception for Ubuntu Focal until Docker provides dedicated binary
@@ -114,8 +33,8 @@ function handle_docker() {
 		display_alert "Add yourself to docker group to avoid root privileges" "" "wrn"
 		"${SRC}/compile.sh" "$@"
 		exit $?
 	fi
 }
 function prepare_userpatches() {
@@ -164,6 +83,5 @@ function prepare_userpatches() {
 		if [[ ! -f "${SRC}"/userpatches/Vagrantfile ]]; then
 			cp "${SRC}"/config/templates/Vagrantfile "${SRC}"/userpatches/Vagrantfile || exit 1
 		fi
 	fi
 }
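Relatedly, the entrypoint decides whether `sudo` is needed by checking docker group membership with `grep -q "$(whoami)" <(getent group docker)`. The same check, isolated so the group data can be faked (the function name and group line below are fabrications for this sketch):

```bash
#!/usr/bin/env bash
# Check whether a user name appears in a getent-style group line.
# Real code equivalent: grep -q "$(whoami)" <(getent group docker)
function user_in_group_line() {
	local user="$1" group_line="$2"
	grep -q "$user" <<< "$group_line"
}

# fake `getent group docker` output for illustration
GROUP_LINE="docker:x:998:alice,bob"
user_in_group_line "alice" "$GROUP_LINE" && ALICE=yes || ALICE=no
user_in_group_line "carol" "$GROUP_LINE" && CAROL=yes || CAROL=no
```

Worth noting: plain `grep -q` is a substring match, so a user named `ali` would also match `alice`; an anchored pattern (or `id -nG`) would be stricter, which may or may not matter for this check.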


@@ -1,18 +1,23 @@
 #!/usr/bin/env bash
 compile_atf() {
-	if [[ $CLEAN_LEVEL == *make* ]]; then
-		display_alert "Cleaning" "$ATFSOURCEDIR" "info"
-		(
-			cd "${SRC}/cache/sources/${ATFSOURCEDIR}"
-			make distclean > /dev/null 2>&1
-		)
-	fi
-	if [[ $USE_OVERLAYFS == yes ]]; then
-		local atfdir
-		atfdir=$(overlayfs_wrapper "wrap" "$SRC/cache/sources/$ATFSOURCEDIR" "atf_${LINUXFAMILY}_${BRANCH}")
-	else
-		local atfdir="$SRC/cache/sources/$ATFSOURCEDIR"
-	fi
+	if [[ -n "${ATFSOURCE}" && "${ATFSOURCE}" != "none" ]]; then
+		display_alert "Downloading sources" "atf" "git"
+		fetch_from_repo "$ATFSOURCE" "$ATFDIR" "$ATFBRANCH" "yes"
+	fi
+	if [[ $CLEAN_LEVEL == *make-atf* ]]; then
+		display_alert "Cleaning ATF tree - CLEAN_LEVEL contains 'make-atf'" "$ATFSOURCEDIR" "info"
+		(
+			cd "${SRC}/cache/sources/${ATFSOURCEDIR}" || exit_with_error "crazy about ${ATFSOURCEDIR}"
+			run_host_command_logged make distclean
+		)
+	else
+		display_alert "Not cleaning ATF tree, use CLEAN_LEVEL=make-atf if needed" "CLEAN_LEVEL=${CLEAN_LEVEL}" "debug"
+	fi
+	local atfdir="$SRC/cache/sources/$ATFSOURCEDIR"
+	if [[ $USE_OVERLAYFS == yes ]]; then
+		atfdir=$(overlayfs_wrapper "wrap" "$SRC/cache/sources/$ATFSOURCEDIR" "atf_${LINUXFAMILY}_${BRANCH}")
+	fi
 	cd "$atfdir" || exit
@@ -48,23 +53,15 @@ compile_atf() {
 	# create patch for manual source changes
 	[[ $CREATE_PATCHES == yes ]] && userpatch_create "atf"
-	echo -e "\n\t== atf ==\n" >> "${DEST}"/${LOG_SUBPATH}/compilation.log
 	# ENABLE_BACKTRACE="0" has been added to workaround a regression in ATF.
 	# Check: https://github.com/armbian/build/issues/1157
-	eval CCACHE_BASEDIR="$(pwd)" env PATH="${toolchain}:${toolchain2}:${PATH}" \
-		'make ENABLE_BACKTRACE="0" $target_make $CTHREADS \
-		CROSS_COMPILE="$CCACHE $ATF_COMPILER"' \
-		${PROGRESS_LOG_TO_FILE:+' | tee -a $DEST/${LOG_SUBPATH}/compilation.log'} \
-		${OUTPUT_DIALOG:+' | dialog --backtitle "$backtitle" --progressbox "Compiling ATF..." $TTY_Y $TTY_X'} \
-		${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} 2>> "${DEST}"/${LOG_SUBPATH}/compilation.log
-	[[ ${PIPESTATUS[0]} -ne 0 ]] && exit_with_error "ATF compilation failed"
-	[[ $(type -t atf_custom_postprocess) == function ]] && atf_custom_postprocess
-	atftempdir=$(mktemp -d)
+	run_host_command_logged CCACHE_BASEDIR="$(pwd)" PATH="${toolchain}:${toolchain2}:${PATH}" \
+		make ENABLE_BACKTRACE="0" $target_make "${CTHREADS}" "CROSS_COMPILE='$CCACHE $ATF_COMPILER'"
+	[[ $(type -t atf_custom_postprocess) == function ]] && atf_custom_postprocess 2>&1
+	atftempdir=$(mktemp -d) # subject to TMPDIR/WORKDIR, so is protected by single/common error trap manager to clean up.
 	chmod 700 ${atftempdir}
-	trap "ret=\$?; rm -rf \"${atftempdir}\" ; exit \$ret" 0 1 2 3 15
 	# copy files to temp directory
 	for f in $target_files; do
@@ -83,4 +80,6 @@ compile_atf() {
 	# copy license file to pack it to u-boot package later
 	[[ -f license.md ]] && cp license.md "${atftempdir}"/
+	return 0 # avoid error due to short-circuit above
 }
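The first ATF hunk above tightens the clean trigger from `*make*` to `*make-atf*`. The difference is plain bash glob matching on the `CLEAN_LEVEL` string, which is easy to see in isolation (the `CLEAN_LEVEL` value below is an arbitrary example):

```bash
#!/usr/bin/env bash
# Glob-match CLEAN_LEVEL the way compile_atf now does.
CLEAN_LEVEL="make-kernel,debs"

# new, specific pattern: only trips on an explicit make-atf entry
if [[ $CLEAN_LEVEL == *make-atf* ]]; then ATF_CLEAN=yes; else ATF_CLEAN=no; fi

# old, broad pattern: trips on ANY entry containing "make"
if [[ $CLEAN_LEVEL == *make* ]]; then OLD_MATCH=yes; else OLD_MATCH=no; fi
```

With the old pattern, asking to clean the kernel tree also wiped the ATF tree; the new substring makes the clean opt-in per component.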


@@ -4,19 +4,20 @@ compile_firmware() {
 	local firmwaretempdir plugin_dir
-	firmwaretempdir=$(mktemp -d)
+	firmwaretempdir=$(mktemp -d) # subject to TMPDIR/WORKDIR, so is protected by single/common error trap manager to clean up.
 	chmod 700 ${firmwaretempdir}
-	trap "ret=\$?; rm -rf \"${firmwaretempdir}\" ; exit \$ret" 0 1 2 3 15
 	plugin_dir="armbian-firmware${FULL}"
 	mkdir -p "${firmwaretempdir}/${plugin_dir}/lib/firmware"
-	fetch_from_repo "$GITHUB_SOURCE/armbian/firmware" "armbian-firmware-git" "branch:master"
+	fetch_from_repo "https://github.com/armbian/firmware" "armbian-firmware-git" "branch:master"
 	if [[ -n $FULL ]]; then
 		fetch_from_repo "$MAINLINE_FIRMWARE_SOURCE" "linux-firmware-git" "branch:main"
 		# cp : create hardlinks
-		cp -af --reflink=auto "${SRC}"/cache/sources/linux-firmware-git/* "${firmwaretempdir}/${plugin_dir}/lib/firmware/"
+		run_host_command_logged cp -af --reflink=auto "${SRC}"/cache/sources/linux-firmware-git/* "${firmwaretempdir}/${plugin_dir}/lib/firmware/"
 		# cp : create hardlinks for ath11k WCN685x hw2.1 firmware since they are using the same firmware with hw2.0
-		cp -af --reflink=auto "${firmwaretempdir}/${plugin_dir}/lib/firmware/ath11k/WCN6855/hw2.0/" "${firmwaretempdir}/${plugin_dir}/lib/firmware/ath11k/WCN6855/hw2.1/"
+		run_host_command_logged cp -af --reflink=auto "${firmwaretempdir}/${plugin_dir}/lib/firmware/ath11k/WCN6855/hw2.0/" "${firmwaretempdir}/${plugin_dir}/lib/firmware/ath11k/WCN6855/hw2.1/"
 	fi
 	# overlay our firmware
 	# cp : create hardlinks
@@ -43,20 +44,18 @@ compile_firmware() {
 	# pack
 	mv "armbian-firmware${FULL}" "armbian-firmware${FULL}_${REVISION}_all"
 	display_alert "Building firmware package" "armbian-firmware${FULL}_${REVISION}_all" "info"
-	fakeroot dpkg-deb -b -Z${DEB_COMPRESS} "armbian-firmware${FULL}_${REVISION}_all" >> "${DEST}"/${LOG_SUBPATH}/install.log 2>&1
+	fakeroot_dpkg_deb_build "armbian-firmware${FULL}_${REVISION}_all"
 	mv "armbian-firmware${FULL}_${REVISION}_all" "armbian-firmware${FULL}"
-	rsync -rq "armbian-firmware${FULL}_${REVISION}_all.deb" "${DEB_STORAGE}/"
-	# remove temp directory
-	rm -rf "${firmwaretempdir}"
+	run_host_command_logged rsync -rq "armbian-firmware${FULL}_${REVISION}_all.deb" "${DEB_STORAGE}/"
 }
 compile_armbian-zsh() {
 	local tmp_dir armbian_zsh_dir
-	tmp_dir=$(mktemp -d)
+	tmp_dir=$(mktemp -d) # subject to TMPDIR/WORKDIR, so is protected by single/common error trap manager to clean up.
 	chmod 700 ${tmp_dir}
-	trap "ret=\$?; rm -rf \"${tmp_dir}\" ; exit \$ret" 0 1 2 3 15
 	armbian_zsh_dir=armbian-zsh_${REVISION}_all
 	display_alert "Building deb" "armbian-zsh" "info"
@@ -117,23 +116,24 @@ compile_armbian-zsh() {
 	chmod 755 "${tmp_dir}/${armbian_zsh_dir}"/DEBIAN/postinst
-	fakeroot dpkg-deb -b -Z${DEB_COMPRESS} "${tmp_dir}/${armbian_zsh_dir}" >> "${DEST}"/${LOG_SUBPATH}/output.log 2>&1
-	rsync --remove-source-files -rq "${tmp_dir}/${armbian_zsh_dir}.deb" "${DEB_STORAGE}/"
-	rm -rf "${tmp_dir}"
+	fakeroot_dpkg_deb_build "${tmp_dir}/${armbian_zsh_dir}"
+	run_host_command_logged rsync --remove-source-files -r "${tmp_dir}/${armbian_zsh_dir}.deb" "${DEB_STORAGE}/"
 }
 compile_armbian-config() {
 	local tmp_dir armbian_config_dir
-	tmp_dir=$(mktemp -d)
+	tmp_dir=$(mktemp -d) # subject to TMPDIR/WORKDIR, so is protected by single/common error trap manager to clean up.
 	chmod 700 ${tmp_dir}
-	trap "ret=\$?; rm -rf \"${tmp_dir}\" ; exit \$ret" 0 1 2 3 15
 	armbian_config_dir=armbian-config_${REVISION}_all
 	display_alert "Building deb" "armbian-config" "info"
-	fetch_from_repo "$GITHUB_SOURCE/armbian/config" "armbian-config" "branch:master"
-	fetch_from_repo "$GITHUB_SOURCE/dylanaraps/neofetch" "neofetch" "tag:7.1.0"
+	fetch_from_repo "https://github.com/armbian/config" "armbian-config" "branch:master"
+	fetch_from_repo "https://github.com/dylanaraps/neofetch" "neofetch" "tag:7.1.0"
+	# @TODO: move this to where it is actually used; not everyone needs to pull this in
 	fetch_from_repo "$GITHUB_SOURCE/complexorganizations/wireguard-manager" "wireguard-manager" "branch:main"
 	mkdir -p "${tmp_dir}/${armbian_config_dir}"/{DEBIAN,usr/bin/,usr/sbin/,usr/lib/armbian-config/}
@@ -170,31 +170,31 @@ compile_armbian-config() {
 	ln -sf /usr/sbin/armbian-config "${tmp_dir}/${armbian_config_dir}"/usr/bin/armbian-config
 	ln -sf /usr/sbin/softy "${tmp_dir}/${armbian_config_dir}"/usr/bin/softy
-	fakeroot dpkg-deb -b -Z${DEB_COMPRESS} "${tmp_dir}/${armbian_config_dir}" > /dev/null
-	rsync --remove-source-files -rq "${tmp_dir}/${armbian_config_dir}.deb" "${DEB_STORAGE}/"
-	rm -rf "${tmp_dir}"
+	fakeroot_dpkg_deb_build "${tmp_dir}/${armbian_config_dir}"
+	run_host_command_logged rsync --remove-source-files -r "${tmp_dir}/${armbian_config_dir}.deb" "${DEB_STORAGE}/"
 }
 compile_xilinx_bootgen() {
 	# Source code checkout
-	(fetch_from_repo "$GITHUB_SOURCE/Xilinx/bootgen.git" "xilinx-bootgen" "branch:master")
+	fetch_from_repo "https://github.com/Xilinx/bootgen.git" "xilinx-bootgen" "branch:master"
 	pushd "${SRC}"/cache/sources/xilinx-bootgen || exit
 	# Compile and install only if git commit hash changed
 	# need to check if /usr/local/bin/bootgen to detect new Docker containers with old cached sources
-	if [[ ! -f .commit_id || $(improved_git rev-parse @ 2> /dev/null) != $(< .commit_id) || ! -f /usr/local/bin/bootgen ]]; then
+	if [[ ! -f .commit_id || $(git rev-parse @ 2> /dev/null) != $(< .commit_id) || ! -f /usr/local/bin/bootgen ]]; then
 		display_alert "Compiling" "xilinx-bootgen" "info"
 		make -s clean > /dev/null
 		make -s -j$(nproc) bootgen > /dev/null
 		mkdir -p /usr/local/bin/
 		install bootgen /usr/local/bin > /dev/null 2>&1
-		improved_git rev-parse @ 2> /dev/null > .commit_id
+		git rev-parse @ 2> /dev/null > .commit_id
 	fi
 	popd
 }
# @TODO: code from master via Igor; not yet armbian-next'fied! warning!!
compile_plymouth-theme-armbian() { compile_plymouth-theme-armbian() {
local tmp_dir work_dir local tmp_dir work_dir

#!/usr/bin/env bash

# This is a re-imagining of mkdebian and builddeb from the kernel tree.

# We wanna produce Debian/Ubuntu compatible packages so we're able to use their standard tools, like
# `flash-kernel`, `u-boot-menu`, `grub2`, and others, so we gotta stick to their conventions.

# The main difference is that this is NOT invoked from KBUILD's Makefile, but instead
# directly by Armbian, with references to the dirs where KBUILD's
# `make install dtbs_install modules_install headers_install` have already successfully been run.

# This will create a SET of packages. It should always create these:
# - linux-image package: vmlinuz and such, config, modules, and dtbs (if they exist) in /usr/lib/xxx
# - linux-headers package: just the kernel headers, for building out-of-tree modules, dkms, etc.
# - linux-dtbs package: only dtbs, if they exist, in /boot/

# So this will handle:
# - Creating the .deb package skeleton dir (mktemp)
# - Moving/copying around of KBUILD-installed stuff into Debian/Ubuntu/Armbian standard locations, in the correct packages
# - Fixing the symlinks to stuff so they fit a target system
# - Building the .debs
is_enabled() {
grep -q "^$1=y" include/config/auto.conf
}
if_enabled_echo() {
if is_enabled "$1"; then
echo -n "$2"
elif [ $# -ge 3 ]; then
echo -n "$3"
fi
}
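As a standalone illustration of the two helpers above: they simply grep KBUILD's generated `include/config/auto.conf`. Here is a hedged sketch against a fabricated config tree (the tmpdir and config symbols are made up for the demo, not taken from a real kernel build):

```shell
# Demo of is_enabled/if_enabled_echo against a fabricated auto.conf.
demo_tree=$(mktemp -d)
mkdir -p "${demo_tree}/include/config"
printf '%s\n' 'CONFIG_BLK_DEV_INITRD=y' 'CONFIG_GCC_PLUGINS=n' > "${demo_tree}/include/config/auto.conf"
cd "${demo_tree}"
is_enabled() {
	grep -q "^$1=y" include/config/auto.conf
}
if_enabled_echo() {
	if is_enabled "$1"; then echo -n "$2"; elif [ $# -ge 3 ]; then echo -n "$3"; fi
}
echo "INITRD=$(if_enabled_echo CONFIG_BLK_DEV_INITRD Yes No)" # INITRD=Yes
```

This is the same lookup the image package's maintainer scripts use below to decide whether an initramfs is wanted.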
function prepare_kernel_packaging_debs() {
declare kernel_work_dir="${1}"
declare kernel_dest_install_dir="${2}"
declare kernel_version="${3}"
declare -n tmp_kernel_install_dirs="${4}" # nameref to declare -n kernel_install_dirs dictionary
declare debs_target_dir="${kernel_work_dir}/.."
# Some variables and settings used throughout the script
declare kernel_version_family="${kernel_version}-${LINUXFAMILY}"
declare package_version="${REVISION}"
# show incoming tree
#display_alert "Kernel install dir" "incoming from KBUILD make" "debug"
#run_host_command_logged tree -C --du -h "${kernel_dest_install_dir}" "| grep --line-buffered -v -e '\.ko' -e '\.h' "
# display_alert "tmp_kernel_install_dirs INSTALL_PATH:" "${tmp_kernel_install_dirs[INSTALL_PATH]}" "debug"
# display_alert "tmp_kernel_install_dirs INSTALL_MOD_PATH:" "${tmp_kernel_install_dirs[INSTALL_MOD_PATH]}" "debug"
# display_alert "tmp_kernel_install_dirs INSTALL_HDR_PATH:" "${tmp_kernel_install_dirs[INSTALL_HDR_PATH]}" "debug"
# display_alert "tmp_kernel_install_dirs INSTALL_DTBS_PATH:" "${tmp_kernel_install_dirs[INSTALL_DTBS_PATH]}" "debug"
# package the linux-image (image, modules, dtbs (if present))
create_kernel_deb "linux-image-${BRANCH}-${LINUXFAMILY}" "${debs_target_dir}" kernel_package_callback_linux_image
# if dtbs present, package those too separately, for u-boot usage.
if [[ -d "${tmp_kernel_install_dirs[INSTALL_DTBS_PATH]}" ]]; then
create_kernel_deb "linux-dtb-${BRANCH}-${LINUXFAMILY}" "${debs_target_dir}" kernel_package_callback_linux_dtb
fi
# Only recent kernels get linux-headers package; some tuning has to be done for 4.x
if [[ "${KERNEL_HAS_WORKING_HEADERS}" == "yes" ]]; then
create_kernel_deb "linux-headers-${BRANCH}-${LINUXFAMILY}" "${debs_target_dir}" kernel_package_callback_linux_headers
else
display_alert "Skipping linux-headers package" "for ${KERNEL_MAJOR_MINOR} kernel version" "warn"
fi
}
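The function above receives the install-dir dictionary by name via `declare -n` (a bash 4.3+ nameref). A minimal sketch of that pattern, with made-up names and paths:

```shell
# Pass an associative array into a function by name, without copying it.
print_install_path() {
	declare -n dirs_ref="${1}" # dirs_ref now aliases the caller's array
	echo "image installs to: ${dirs_ref[INSTALL_PATH]}"
}
declare -A demo_install_dirs=(
	[INSTALL_PATH]="/tmp/demo-pkg/boot"
	[INSTALL_MOD_PATH]="/tmp/demo-pkg"
)
print_install_path demo_install_dirs # image installs to: /tmp/demo-pkg/boot
```

The nameref lets the callee both read and mutate the caller's dictionary, which is why the callbacks below can all consult `tmp_kernel_install_dirs` directly.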
function create_kernel_deb() {
declare package_name="${1}"
declare deb_output_dir="${2}"
declare callback_function="${3}"
declare package_directory
package_directory=$(mktemp -d "${WORKDIR}/${package_name}.XXXXXXXXX") # explicitly created in WORKDIR, so is protected by that cleanup trap already
#display_alert "package_directory" "${package_directory}" "debug"
declare package_DEBIAN_dir="${package_directory}/DEBIAN" # DEBIAN dir
mkdir -p "${package_DEBIAN_dir}" # maintainer scripts et al
# Generate copyright file
mkdir -p "${package_directory}/usr/share/doc/${package_name}"
cat <<- COPYRIGHT > "${package_directory}/usr/share/doc/${package_name}/copyright"
This is a packaged Armbian patched version of the Linux kernel.
The sources may be found at most Linux archive sites, including:
https://www.kernel.org/pub/linux/kernel
Copyright: 1991 - 2018 Linus Torvalds and others.
The git repository for mainline kernel development is at:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; version 2 dated June, 1991.
On Debian GNU/Linux systems, the complete text of the GNU General Public
License version 2 can be found in \`/usr/share/common-licenses/GPL-2'.
COPYRIGHT
# Run the callback.
# display_alert "Running callback" "callback: ${callback_function}" "debug"
"${callback_function}" "${@}"
run_host_command_logged chown -R root:root "${package_directory}" # Fix ownership and permissions
run_host_command_logged chmod -R go-w "${package_directory}" # Fix ownership and permissions
run_host_command_logged chmod -R a+rX "${package_directory}" # in case we are in a restrictive umask environment like 0077
run_host_command_logged chmod -R ug-s "${package_directory}" # in case we build in a setuid/setgid directory
cd "${package_directory}" || exit_with_error "major failure 774 for ${package_name}"
# create md5sums file
# sh -c "cd '${package_directory}'; find . -type f ! -path './DEBIAN/*' -printf '%P\0' | xargs -r0 md5sum > DEBIAN/md5sums"
declare unpacked_size
unpacked_size="$(du -h -s "${package_directory}" | awk '{print $1}')"
display_alert "Unpacked ${package_name} tree" "${unpacked_size}" "debug"
# Show it
#display_alert "Package dir" "for package ${package_name}" "debug"
#run_host_command_logged tree -C -h -d --du "${package_directory}"
run_host_command_logged dpkg-deb ${DEB_COMPRESS:+-Z$DEB_COMPRESS} --build "${package_directory}" "${deb_output_dir}" # not KDEB compress, we're not under a Makefile
}
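Before `dpkg-deb --build` runs, the function above has assembled a payload tree plus a `DEBIAN/` metadata dir. A minimal, hypothetical skeleton (illustrative package name and fields; deliberately not invoking dpkg-deb):

```shell
# Build just the directory skeleton that dpkg-deb would consume.
pkg_dir=$(mktemp -d)
mkdir -p "${pkg_dir}/DEBIAN" "${pkg_dir}/boot"
echo 'fake payload' > "${pkg_dir}/boot/vmlinuz-demo"
cat > "${pkg_dir}/DEBIAN/control" << 'EOF'
Package: linux-image-demo
Version: 1.0.0
Architecture: arm64
Maintainer: Demo Maintainer <demo@example.com>
Description: illustrative control file for the skeleton only
EOF
grep '^Package:' "${pkg_dir}/DEBIAN/control" # Package: linux-image-demo
```

Everything outside `DEBIAN/` becomes the installed filesystem payload; everything inside it is package metadata and maintainer scripts.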
function kernel_package_hook_helper() {
declare script="${1}"
declare contents="${2}"
cat >> "${package_DEBIAN_dir}/${script}" <<- EOT
#!/bin/bash
echo "Armbian '${package_name}' for '${kernel_version_family}': '${script}' starting."
set -e # Error control
set -x # Debugging
$(cat "${contents}")
set +x # Disable debugging
echo "Armbian '${package_name}' for '${kernel_version_family}': '${script}' finishing."
true
EOT
chmod 775 "${package_DEBIAN_dir}/${script}"
# produce log asset for script
LOG_ASSET="deb-${package_name}-${script}.sh" do_with_log_asset run_host_command_logged cat "${package_DEBIAN_dir}/${script}"
}
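The helper above wraps a script body (passed as a file, typically via process substitution) in a logging prologue/epilogue. A simplified standalone sketch of the same composition, with hypothetical names:

```shell
# Simplified hook wrapper: compose a maintainer script from a body file.
demo_DEBIAN_dir=$(mktemp -d)
wrap_hook() {
	local script="${1}" contents="${2}"
	{
		echo '#!/bin/bash'
		echo "echo \"hook '${script}' starting.\""
		cat "${contents}" # the actual hook body, read from the given file
		echo "echo \"hook '${script}' finishing.\""
	} > "${demo_DEBIAN_dir}/${script}"
	chmod 775 "${demo_DEBIAN_dir}/${script}"
}
wrap_hook "postinst" <(echo 'echo "payload ran"')
bash "${demo_DEBIAN_dir}/postinst" # prints the prologue, "payload ran", then the epilogue
```

Process substitution (`<(...)`) is what lets the callers below pass inline heredocs as the hook body.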
function kernel_package_callback_linux_image() {
display_alert "linux-image deb packaging" "${package_directory}" "debug"
declare installed_image_path="boot/vmlinuz-${kernel_version_family}" # using old mkdebian terminology here.
declare image_name="Image" # for arm64. or, "zImage" for arm, or "vmlinuz" for others. Why? See where u-boot puts them.
display_alert "Showing contents of Kbuild produced /boot" "linux-image" "debug"
run_host_command_logged tree -C --du -h "${tmp_kernel_install_dirs[INSTALL_PATH]}"
run_host_command_logged cp -rp "${tmp_kernel_install_dirs[INSTALL_PATH]}" "${package_directory}/" # /boot stuff
run_host_command_logged cp -rp "${tmp_kernel_install_dirs[INSTALL_MOD_PATH]}/lib" "${package_directory}/" # so "lib" stuff sits at the root
# Clean up symlinks in lib/modules/${kernel_version_family}/build and lib/modules/${kernel_version_family}/source; will be in the headers package
run_host_command_logged rm -v -f "${package_directory}/lib/modules/${kernel_version_family}/build" "${package_directory}/lib/modules/${kernel_version_family}/source"
if [[ -d "${tmp_kernel_install_dirs[INSTALL_DTBS_PATH]}" ]]; then
# /usr/lib/linux-image-${kernel_version_family} is wanted by flash-kernel
# /lib/firmware/${kernel_version_family}/device-tree/ would also be acceptable
display_alert "DTBs present on kernel output" "DTBs ${package_name}: /usr/lib/linux-image-${kernel_version_family}" "debug"
mkdir -p "${package_directory}/usr/lib"
run_host_command_logged cp -rp "${tmp_kernel_install_dirs[INSTALL_DTBS_PATH]}" "${package_directory}/usr/lib/linux-image-${kernel_version_family}"
fi
# Generate a control file
cat <<- CONTROL_FILE > "${package_DEBIAN_dir}/control"
Package: ${package_name}
Version: ${package_version}
Source: linux-${kernel_version}
Architecture: ${ARCH}
Maintainer: ${MAINTAINER} <${MAINTAINERMAIL}>
Section: kernel
Provides: linux-image, linux-image-armbian, armbian-$BRANCH
Description: Linux kernel, armbian version $kernel_version_family $BRANCH
This package contains the Linux kernel, modules and corresponding other
files, kernel_version_family: $kernel_version_family.
CONTROL_FILE
# Install the maintainer scripts
# Note: hook scripts under /etc/kernel are also executed by official Debian
# kernel packages, as well as kernel packages built using make-kpkg.
# make-kpkg sets $INITRD to indicate whether an initramfs is wanted, and
# so do we; recent versions of dracut and initramfs-tools will obey this.
declare debian_kernel_hook_dir="/etc/kernel"
for script in "postinst" "postrm" "preinst" "prerm"; do
mkdir -p "${package_directory}${debian_kernel_hook_dir}/${script}.d" # create kernel hook dir, make sure.
kernel_package_hook_helper "${script}" <(
cat <<- KERNEL_HOOK_DELEGATION
export DEB_MAINT_PARAMS="\$*" # Pass maintainer script parameters to hook scripts
export INITRD=$(if_enabled_echo CONFIG_BLK_DEV_INITRD Yes No) # Tell initramfs builder whether it's wanted
# Run the same hooks Debian/Ubuntu would for their kernel packages.
test -d ${debian_kernel_hook_dir}/${script}.d && run-parts --arg="${kernel_version_family}" --arg="/${installed_image_path}" ${debian_kernel_hook_dir}/${script}.d
KERNEL_HOOK_DELEGATION
# @TODO: only if u-boot, only for postinst. Gotta find a hook scheme for these...
if [[ "${script}" == "postinst" ]]; then
if [[ "yes" == "yes" ]]; then
cat <<- HOOK_FOR_LINK_TO_LAST_INSTALLED_KERNEL
echo "Armbian: update last-installed kernel symlink..."
ln -sf $(basename "${installed_image_path}") /boot/$image_name || mv /${installed_image_path} /boot/${image_name}
touch /boot/.next
HOOK_FOR_LINK_TO_LAST_INSTALLED_KERNEL
fi
fi
)
done
}
function kernel_package_callback_linux_dtb() {
display_alert "linux-dtb packaging" "${package_directory}" "debug"
mkdir -p "${package_directory}/boot/"
run_host_command_logged cp -rp "${tmp_kernel_install_dirs[INSTALL_DTBS_PATH]}" "${package_directory}/boot/dtb-${kernel_version_family}"
# Generate a control file
cat <<- CONTROL_FILE > "${package_DEBIAN_dir}/control"
Version: ${package_version}
Maintainer: ${MAINTAINER} <${MAINTAINERMAIL}>
Section: kernel
Package: ${package_name}
Architecture: ${ARCH}
Provides: linux-dtb, linux-dtb-armbian, armbian-$BRANCH
Description: Armbian Linux DTB, version ${kernel_version_family} $BRANCH
This package contains device tree blobs from the Linux kernel, version ${kernel_version_family}
CONTROL_FILE
kernel_package_hook_helper "preinst" <(
cat <<- EOT
rm -rf /boot/dtb
rm -rf /boot/dtb-${kernel_version_family}
EOT
)
kernel_package_hook_helper "postinst" <(
cat <<- EOT
cd /boot
ln -sfT dtb-${kernel_version_family} dtb || mv dtb-${kernel_version_family} dtb
EOT
)
}
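The postinst above relies on GNU `ln -sfT` so that `/boot/dtb` is re-pointed as a symlink, instead of the new link being created *inside* the existing target directory. Sketch with made-up version directories:

```shell
# Re-point a directory symlink with -T (treat the link name as a plain
# file, never descend into it).
demo=$(mktemp -d)
cd "${demo}"
mkdir dtb-5.15-demo dtb-6.1-demo
ln -sfT dtb-5.15-demo dtb
ln -sfT dtb-6.1-demo dtb # without -T this would create dtb/dtb-6.1-demo instead
readlink dtb # dtb-6.1-demo
```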
function kernel_package_callback_linux_headers() {
display_alert "linux-headers packaging" "${package_directory}" "debug"
# targets.
local headers_target_dir="${package_directory}/usr/src/linux-headers-${kernel_version_family}" # headers/tools etc
local modules_target_dir="${package_directory}/lib/modules/${kernel_version_family}" # symlink to above later
mkdir -p "${headers_target_dir}" "${modules_target_dir}" # create both dirs
run_host_command_logged ln -v -s "/usr/src/linux-headers-${kernel_version_family}" "${modules_target_dir}/build" # Symlink in modules so builds find the headers
run_host_command_logged cp -vp "${kernel_work_dir}"/.config "${headers_target_dir}"/.config # copy .config manually to be where it's expected to be
# gather stuff from the linux source tree: ${kernel_work_dir} (NOT the make install destination)
# those can be source files or object (binary/compiled) stuff
# how to get SRCARCH? only from the makefile itself. ARCH=amd64 then SRCARCH=x86. How do we know? @TODO
local SRC_ARCH="${ARCH}"
[[ "${SRC_ARCH}" == "amd64" ]] && SRC_ARCH="x86"
[[ "${SRC_ARCH}" == "armhf" ]] && SRC_ARCH="arm"
[[ "${SRC_ARCH}" == "riscv64" ]] && SRC_ARCH="riscv"
# @TODO: ok so this is actually a duplicate of ARCH/ARCHITECTURE in config/sources/*.conf. We should use that instead.
# Let's check and warn if it isn't. If warnings don't pop up over time, we can remove this and just use ARCHITECTURE later.
if [[ "${SRC_ARCH}" != "${ARCHITECTURE}" ]]; then
display_alert "WARNING: ARCHITECTURE and SRC_ARCH don't match during kernel build." "ARCHITECTURE=${ARCHITECTURE} SRC_ARCH=${SRC_ARCH}" "wrn"
fi
# Create a list of files to include, path-relative to the kernel tree
local temp_file_list="${WORKDIR}/tmp_file_list_${kernel_version_family}.kernel.headers"
# Find the files we want to include in the package. Those will be later cleaned, etc.
(
cd "${kernel_work_dir}" || exit 2
find . -name Makefile\* -o -name Kconfig\* -o -name \*.pl
find arch/*/include include scripts -type f -o -type l
find security/*/include -type f
[[ -d "arch/${SRC_ARCH}" ]] && {
find "arch/${SRC_ARCH}" -name module.lds -o -name Kbuild.platforms -o -name Platform
# shellcheck disable=SC2046 # I need to expand. Thanks.
find $(find "arch/${SRC_ARCH}" -name include -o -name scripts -type d) -type f
find arch/${SRC_ARCH}/include Module.symvers include scripts -type f
}
# tools/include/tools has the byteshift utilities shared between kernel proper and the build scripts/tools.
# This replaces 'headers-debian-byteshift.patch' which was used for years in Armbian.
find tools -type f # all tools; will trim a bit later
find arch/x86/lib/insn.c # required by objtool stuff...
if is_enabled CONFIG_GCC_PLUGINS; then
find scripts/gcc-plugins -name gcc-common.h # @TODO something else here too?
fi
) > "${temp_file_list}"
# Now include/copy those, using tar as intermediary. Just like builddeb does it.
tar -c -f - -C "${kernel_work_dir}" -T "${temp_file_list}" | tar -xf - -C "${headers_target_dir}"
# ${temp_file_list} is left at WORKDIR for later debugging, will be removed by WORKDIR cleanup trap
# Now, make the script dirs clean.
# This is run in our _target_ dir, not the source tree, so we're free to make clean as we wish without invalidating the next build's cache.
run_host_command_logged cd "${headers_target_dir}" "&&" make -s "ARCH=${SRC_ARCH}" "M=scripts" clean
run_host_command_logged cd "${headers_target_dir}/tools" "&&" make -s "ARCH=${SRC_ARCH}" clean
# Trim down on the tools dir a bit after cleaning.
rm -rf "${headers_target_dir}/tools/perf" "${headers_target_dir}/tools/testing"
# Hack: after cleaning, copy over the scripts/module.lds file from the source tree. It will only exist on 5.10+
# See https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1906131
[[ -f "${kernel_work_dir}/scripts/module.lds" ]] &&
run_host_command_logged cp -v "${kernel_work_dir}/scripts/module.lds" "${headers_target_dir}/scripts/module.lds"
# Check that no binaries are included by now. Expensive... @TODO: remove after we make sure.
(
cd "${headers_target_dir}" || exit 33
find . -type f | grep -v -e "include/config/" -e "\.h$" -e ".c$" -e "Makefile$" -e "Kconfig$" -e "Kbuild$" -e "\.cocci$" | xargs file | grep -v -e "ASCII" -e "script text" -e "empty" -e "Unicode text" -e "symbolic link" -e "CSV text" -e "SAS 7+" || true
)
# Generate a control file
cat <<- CONTROL_FILE > "${package_DEBIAN_dir}/control"
Version: ${package_version}
Maintainer: ${MAINTAINER} <${MAINTAINERMAIL}>
Section: devel
Package: ${package_name}
Architecture: ${ARCH}
Provides: linux-headers, linux-headers-armbian, armbian-$BRANCH
Depends: make, gcc, libc6-dev, bison, flex, libssl-dev, libelf-dev
Description: Linux kernel headers for ${kernel_version_family}
This package provides kernel header files for ${kernel_version_family}
.
This is useful for DKMS and building of external modules.
CONTROL_FILE
# Make sure the target dir is clean/not-existing before installing.
kernel_package_hook_helper "preinst" <(
cat <<- EOT_PREINST
if [[ -d "/usr/src/linux-headers-${kernel_version_family}" ]]; then
echo "Cleaning pre-existing directory /usr/src/linux-headers-${kernel_version_family} ..."
rm -rf "/usr/src/linux-headers-${kernel_version_family}"
fi
EOT_PREINST
)
# Make sure the target dir is removed before removing the package; that way we don't leave eventual compilation artifacts over there.
kernel_package_hook_helper "prerm" <(
cat <<- EOT_PRERM
if [[ -d "/usr/src/linux-headers-${kernel_version_family}" ]]; then
echo "Cleaning directory /usr/src/linux-headers-${kernel_version_family} ..."
rm -rf "/usr/src/linux-headers-${kernel_version_family}"
fi
EOT_PRERM
)
kernel_package_hook_helper "postinst" <(
cat <<- EOT_POSTINST
cd "/usr/src/linux-headers-${kernel_version_family}"
NCPU=\$(grep -c 'processor' /proc/cpuinfo)
echo "Compiling kernel-headers tools (${kernel_version_family}) using \$NCPU CPUs - please wait ..."
yes "" | make ARCH="${SRC_ARCH}" oldconfig
make ARCH="${SRC_ARCH}" -j\$NCPU scripts
make ARCH="${SRC_ARCH}" -j\$NCPU M=scripts/mod/
# make ARCH="${SRC_ARCH}" -j\$NCPU modules_prepare # depends on too much other stuff.
make ARCH="${SRC_ARCH}" -j\$NCPU tools/objtool
echo "Done compiling kernel-headers tools (${kernel_version_family})."
EOT_POSTINST
)
}
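The headers callback above copies its curated file list with tar as an intermediary (`tar -c ... -T list | tar -x`), preserving relative paths while skipping everything not on the list. A self-contained sketch with fabricated trees and paths:

```shell
# Replicate a list of path-relative files from one tree into another via tar.
src_tree=$(mktemp -d)
dst_tree=$(mktemp -d)
list_file=$(mktemp)
mkdir -p "${src_tree}/scripts"
echo 'obj-y += demo.o' > "${src_tree}/scripts/Makefile"
echo 'ignored' > "${src_tree}/scripts/not-wanted.bin"
echo './scripts/Makefile' > "${list_file}" # one relative path per line
tar -c -f - -C "${src_tree}" -T "${list_file}" | tar -xf - -C "${dst_tree}"
cat "${dst_tree}/scripts/Makefile" # obj-y += demo.o
```

Files absent from the list (here `not-wanted.bin`) never reach the destination, which is exactly why the headers package can be trimmed by curating the list.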

#!/usr/bin/env bash

function run_kernel_make() {
set -e
declare -a common_make_params_quoted common_make_envs full_command
common_make_envs=(
"CCACHE_BASEDIR=\"$(pwd)\"" # Base directory for ccache, for cache reuse # @TODO: experiment with this and the source path to maximize hit rate
"PATH=\"${toolchain}:${PATH}\"" # Insert the toolchain first into the PATH.
"DPKG_COLORS=always" # Use colors for dpkg
"XZ_OPT='--threads=0'" # Use parallel XZ compression
"TERM='${TERM}'" # Pass the terminal type, so that 'make menuconfig' can work.
)
common_make_params_quoted=(
# @TODO: introduce O=path/to/binaries, so sources and bins are not in the same dir.
"$CTHREADS" # Parallel compile, "-j X" for X cpus
"ARCH=${ARCHITECTURE}" # Key param. Everything depends on this.
"LOCALVERSION=-${LINUXFAMILY}" # Change the internal kernel version to include the family. Changing this causes recompiles # @TODO change to "localversion" file
"CROSS_COMPILE=${CCACHE} ${KERNEL_COMPILER}" # added as prefix to every compiler invocation by make
"KCFLAGS=-fdiagnostics-color=always -Wno-error=misleading-indentation" # Force GCC colored messages, downgrade misleading indentation to warning
"SOURCE_DATE_EPOCH=${kernel_base_revision_ts}" # https://reproducible-builds.org/docs/source-date-epoch/ and https://www.kernel.org/doc/html/latest/kbuild/reproducible-builds.html
"KBUILD_BUILD_TIMESTAMP=${kernel_base_revision_date}" # https://www.kernel.org/doc/html/latest/kbuild/kbuild.html#kbuild-build-timestamp
"KBUILD_BUILD_USER=armbian-build" # https://www.kernel.org/doc/html/latest/kbuild/kbuild.html#kbuild-build-user-kbuild-build-host
"KBUILD_BUILD_HOST=armbian-bm" # https://www.kernel.org/doc/html/latest/kbuild/kbuild.html#kbuild-build-user-kbuild-build-host
"KGZIP=pigz" "KBZIP2=pbzip2" # Parallel compression, use explicit parallel compressors https://lore.kernel.org/lkml/20200901151002.988547791@linuxfoundation.org/
)
# last statement, so it passes the result to the calling function. "env -i" is used for an empty environment
full_command=("${KERNEL_MAKE_RUNNER:-run_host_command_logged}" "env" "-i" "${common_make_envs[@]}"
make "${common_make_params_quoted[@]@Q}" "$@" "${make_filter}")
"${full_command[@]}" # and exit with its code, since it's the last statement
}
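run_kernel_make assembles its command line with bash's `${array[@]@Q}` expansion, which re-quotes each element so parameters containing spaces (like the KCFLAGS entry above) survive being re-parsed by the shell later. A tiny sketch with illustrative parameters:

```shell
# ${var@Q}: expand each array element quoted for safe shell re-input.
declare -a demo_params=("ARCH=arm64" "KCFLAGS=-fdiagnostics-color=always -Wno-error")
printf '%s\n' "${demo_params[@]@Q}"
# 'ARCH=arm64'
# 'KCFLAGS=-fdiagnostics-color=always -Wno-error'
```

Without `@Q`, the space inside the KCFLAGS element would split it into two make arguments when the command string is re-evaluated.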
function run_kernel_make_dialog() {
KERNEL_MAKE_RUNNER="run_host_command_dialog" run_kernel_make "$@"
}
function run_kernel_make_long_running() {
local seconds_start=${SECONDS} # Bash has a builtin SECONDS that is seconds since start of script
KERNEL_MAKE_RUNNER="run_host_command_logged_long_running" run_kernel_make "$@"
display_alert "Kernel Make '$*' took" "$((SECONDS - seconds_start)) seconds" "debug"
}
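The SOURCE_DATE_EPOCH / KBUILD_BUILD_TIMESTAMP values fed to make above pin the kernel's embedded build date for reproducible builds: any fixed epoch renders to the same timestamp on every build host. An illustrative check with GNU date (the epoch value is arbitrary, chosen only for the demo):

```shell
# A fixed epoch produces an identical rendered timestamp everywhere.
demo_epoch=1577836800 # 2020-01-01T00:00:00Z
LC_ALL=C TZ=UTC date -d "@${demo_epoch}" +'%Y-%m-%d %H:%M:%S'
# 2020-01-01 00:00:00
```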
function compile_kernel() {
local kernel_work_dir="${SRC}/cache/sources/${LINUXSOURCEDIR}"
display_alert "Kernel build starting" "${LINUXSOURCEDIR}" "info"
declare checked_out_revision_mtime="" checked_out_revision_ts="" # set by fetch_from_repo
LOG_SECTION="kernel_prepare_git" do_with_logging do_with_hooks kernel_prepare_git
# Capture date variables set by fetch_from_repo; it's the date of the last kernel revision
declare kernel_base_revision_date
declare kernel_base_revision_mtime="${checked_out_revision_mtime}"
declare kernel_base_revision_ts="${checked_out_revision_ts}"
kernel_base_revision_date="$(LC_ALL=C date -d "@${kernel_base_revision_ts}")"
LOG_SECTION="kernel_maybe_clean" do_with_logging do_with_hooks kernel_maybe_clean
local version hash pre_patch_version
LOG_SECTION="kernel_prepare_patching" do_with_logging do_with_hooks kernel_prepare_patching
LOG_SECTION="kernel_patching" do_with_logging do_with_hooks kernel_patching
[[ $CREATE_PATCHES == yes ]] && userpatch_create "kernel" # create patch for manual source changes
local version
local toolchain
# Check if we're gonna do some interactive configuration; if so, don't run kernel_config under logging manager.
if [[ $KERNEL_CONFIGURE != yes ]]; then
LOG_SECTION="kernel_config" do_with_logging do_with_hooks kernel_config
else
LOG_SECTION="kernel_config_interactive" do_with_hooks kernel_config
fi
LOG_SECTION="kernel_package_source" do_with_logging do_with_hooks kernel_package_source
# @TODO: might be interesting to package kernel-headers at this stage.
# @TODO: would allow us to have a "HEADERS_ONLY=yes" that can prepare arm64 headers on arm64 without building the whole kernel
# @TODO: also it makes sense, logically, to package headers after configuration, since that's all what's needed; it's the same
# @TODO: stage at which `dkms` would run (a configured, tool-built, kernel tree).
# @TODO: might also be interesting to do the same for DTBs.
# @TODO: those get packaged twice (once in linux-dtb and once in linux-image)
# @TODO: but for the u-boot bootloader, only the linux-dtb is what matters.
# @TODO: some users/maintainers do a lot of their work on "DTS/DTB only changes", which do require the kernel tree
# @TODO: but the only testable artifacts are the .dtb themselves. Allow for a `DTB_ONLY=yes` might be useful.
LOG_SECTION="kernel_build_and_package" do_with_logging do_with_hooks kernel_build_and_package
display_alert "Done with" "kernel compile" "debug"
cd "${kernel_work_dir}/.." || exit
rm -f linux-firmware-image-*.deb # remove firmware image packages here - easier than patching ~40 packaging scripts at once
run_host_command_logged rsync --remove-source-files -r ./*.deb "${DEB_STORAGE}/"
return 0
}
function kernel_prepare_git() {
if [[ -n $KERNELSOURCE ]]; then
[[ -d "${kernel_work_dir}" ]] && cd "${kernel_work_dir}" && fasthash_debug "pre git, existing tree"
display_alert "Downloading sources" "kernel" "git"
# Does not work well with rpi for example: GIT_WARM_REMOTE_SHALLOW_AT_TAG="v${KERNEL_MAJOR_MINOR}" \
# GIT_WARM_REMOTE_SHALLOW_AT_TAG sets GIT_WARM_REMOTE_SHALLOW_AT_DATE for you, as long as it is included by GIT_WARM_REMOTE_FETCH_TAGS
# GIT_WARM_REMOTE_SHALLOW_AT_DATE is the only one really used for making shallow
GIT_FIXED_WORKDIR="${LINUXSOURCEDIR}" \
GIT_WARM_REMOTE_NAME="kernel-stable-${KERNEL_MAJOR_MINOR}" \
GIT_WARM_REMOTE_URL="${MAINLINE_KERNEL_SOURCE}" \
GIT_WARM_REMOTE_BRANCH="linux-${KERNEL_MAJOR_MINOR}.y" \
GIT_WARM_REMOTE_FETCH_TAGS="v${KERNEL_MAJOR_MINOR}*" \
GIT_WARM_REMOTE_SHALLOW_AT_TAG="${KERNEL_MAJOR_SHALLOW_TAG}" \
GIT_WARM_REMOTE_BUNDLE="kernel-stable-${KERNEL_MAJOR_MINOR}" \
GIT_COLD_BUNDLE_URL="${MAINLINE_KERNEL_COLD_BUNDLE_URL}" \
fetch_from_repo "$KERNELSOURCE" "unused:set via GIT_FIXED_WORKDIR" "$KERNELBRANCH" "yes"
fi
}
function kernel_maybe_clean() {
if [[ $CLEAN_LEVEL == *make-kernel* ]]; then
display_alert "Cleaning Kernel tree - CLEAN_LEVEL contains 'make-kernel'" "$LINUXSOURCEDIR" "info"
(
cd "${kernel_work_dir}"
run_host_command_logged make ARCH="${ARCHITECTURE}" clean
)
fasthash_debug "post make clean"
else
display_alert "Not cleaning Kernel tree; use CLEAN_LEVEL=make-kernel if needed" "CLEAN_LEVEL=${CLEAN_LEVEL}" "debug"
fi
}
function kernel_prepare_patching() {
if [[ $USE_OVERLAYFS == yes ]]; then # @TODO: when is this set to yes?
display_alert "Using overlayfs_wrapper" "kernel_${LINUXFAMILY}_${BRANCH}" "debug"
kernel_work_dir=$(overlayfs_wrapper "wrap" "$SRC/cache/sources/$LINUXSOURCEDIR" "kernel_${LINUXFAMILY}_${BRANCH}")
fi
cd "${kernel_work_dir}" || exit
# @TODO: why would we delete localversion?
# @TODO: it should be the opposite, writing localversion to disk, _instead_ of passing it via make.
# @TODO: if it turns out to be the case, do a commit with it... (possibly later, after patching?)
rm -f localversion
# read kernel version
version=$(grab_version "$kernel_work_dir")
pre_patch_version="${version}"
display_alert "Pre-patch kernel version" "${pre_patch_version}" "debug"
# read kernel git hash
hash=$(git --git-dir="$kernel_work_dir"/.git rev-parse HEAD)
}
function kernel_patching() {
## Start kernel patching process.
## There's a few objectives here:
## - (always) produce a fasthash: represents "what would be done" (eg: md5 of a patch, crc32 of description).
## - (optionally) execute modification against living tree (eg: apply a patch, copy a file, etc). only if `DO_MODIFY=yes`
## - (always) call mark_change_commit with the description of what was done and fasthash.
declare -i patch_minimum_target_mtime="${kernel_base_revision_mtime}"
display_alert "patch_minimum_target_mtime:" "${patch_minimum_target_mtime}" "debug"
initialize_fasthash "kernel" "${hash}" "${pre_patch_version}" "${kernel_work_dir}"
fasthash_debug "init"
# Apply a series of patches if a series file exists
local series_conf="${SRC}"/patch/kernel/${KERNELPATCHDIR}/series.conf
if test -f "${series_conf}"; then
display_alert "series.conf file visible. Apply"
fasthash_branch "patches-${KERNELPATCHDIR}-series.conf"
apply_patch_series "${kernel_work_dir}" "${series_conf}" # applies a series of patches, read from a file. calls process_patch_file
fi
# applies a humongous amount of patches coming from github repos.
# it's mostly conditional, and very complex.
# @TODO: re-enable after finishing converting it with fasthash magic
# apply_kernel_patches_for_drivers "${kernel_work_dir}" "${version}" # calls process_patch_file and other stuff. there is A LOT of it.
# applies a series of patches, in directory order, from multiple directories (default/"user" patches)
# @TODO: I believe using the $BOARD here is the most confusing thing in the whole of Armbian. It should be disabled.
# @TODO: Armbian-built kernels don't vary per-board, but only per "$ARCH-$LINUXFAMILY-$BRANCH"
# @TODO: allowing for board-specific kernel patches creates insanity. uboot is enough.
fasthash_branch "patches-${KERNELPATCHDIR}-$BRANCH"
advanced_patch "kernel" "$KERNELPATCHDIR" "$BOARD" "" "$BRANCH" "$LINUXFAMILY-$BRANCH" # calls process_patch_file, "target" is empty there
fasthash_debug "finish"
finish_fasthash "kernel" # this reports the final hash and creates a git branch for the build ID. All modifications committed.
}
function kernel_config() {
	# re-read kernel version after patching
	local version
	version=$(grab_version "$kernel_work_dir")
	display_alert "Compiling $BRANCH kernel" "$version" "info"
	# compare with the architecture of the current Debian node;
	# if it matches, we use the system compiler
	if dpkg-architecture -e "${ARCH}"; then
		display_alert "Native compilation" "target ${ARCH} on host $(dpkg --print-architecture)"
	else
		display_alert "Cross compilation" "target ${ARCH} on host $(dpkg --print-architecture)"
		toolchain=$(find_toolchain "$KERNEL_COMPILER" "$KERNEL_USE_GCC")
		[[ -z $toolchain ]] && exit_with_error "Could not find required toolchain" "${KERNEL_COMPILER}gcc $KERNEL_USE_GCC"
	fi
	kernel_compiler_version="$(eval env PATH="${toolchain}:${PATH}" "${KERNEL_COMPILER}gcc" -dumpversion)"
	display_alert "Compiler version" "${KERNEL_COMPILER}gcc ${kernel_compiler_version}" "info"
	# copy the kernel config
	local COPY_CONFIG_BACK_TO=""
	if [[ $KERNEL_KEEP_CONFIG == yes && -f "${DEST}"/config/$LINUXCONFIG.config ]]; then
		display_alert "Using previous kernel config" "${DEST}/config/$LINUXCONFIG.config" "info"
		run_host_command_logged cp -pv "${DEST}/config/${LINUXCONFIG}.config" .config
	else
		if [[ -f $USERPATCHES_PATH/$LINUXCONFIG.config ]]; then
			display_alert "Using kernel config provided by user" "userpatches/$LINUXCONFIG.config" "info"
			run_host_command_logged cp -pv "${USERPATCHES_PATH}/${LINUXCONFIG}.config" .config
		elif [[ -f "${USERPATCHES_PATH}/config/kernel/${LINUXCONFIG}.config" ]]; then
			display_alert "Using kernel config provided by user in config/kernel folder" "config/kernel/${LINUXCONFIG}.config" "info"
			run_host_command_logged cp -pv "${USERPATCHES_PATH}/config/kernel/${LINUXCONFIG}.config" .config
		else
			display_alert "Using kernel config file" "config/kernel/$LINUXCONFIG.config" "info"
			run_host_command_logged cp -pv "${SRC}/config/kernel/${LINUXCONFIG}.config" .config
			COPY_CONFIG_BACK_TO="${SRC}/config/kernel/${LINUXCONFIG}.config"
		fi
	fi
	# Store the .config modification date now, for restoring later; otherwise the changed mtime triggers rebuilds.
	local kernel_config_mtime
	kernel_config_mtime=$(get_file_modification_time ".config")
	call_extension_method "custom_kernel_config" <<- 'CUSTOM_KERNEL_CONFIG'
		*Kernel .config is in place, still clean from git version*
		Called after ${LINUXCONFIG}.config is put in place (.config).
		Before olddefconfig or any other Kconfig make target is called.
		A good place to customize the .config directly.
	CUSTOM_KERNEL_CONFIG
	# hack for OdroidXU4. Copy firmware files
	if [[ $BOARD == odroidxu4 ]]; then
		mkdir -p "${kernel_work_dir}/firmware/edid"
		cp -p "${SRC}"/packages/blobs/odroidxu4/*.bin "${kernel_work_dir}/firmware/edid"
	fi
	display_alert "Kernel configuration" "${LINUXCONFIG}" "info"
	if [[ $KERNEL_CONFIGURE != yes ]]; then
		run_kernel_make olddefconfig # @TODO: what is this? does it mess up the dates?
	else
		display_alert "Starting (interactive) kernel oldconfig" "${LINUXCONFIG}" "debug"
		run_kernel_make_dialog oldconfig # No logging for this; it is a UI piece
		display_alert "Starting (interactive) kernel ${KERNEL_MENUCONFIG:-menuconfig}" "${LINUXCONFIG}" "debug"
		run_kernel_make_dialog "${KERNEL_MENUCONFIG:-menuconfig}"
		# Capture the new date; otherwise changes are not detected by make.
		kernel_config_mtime=$(get_file_modification_time ".config")
		# store the kernel config in an easily reachable place
		display_alert "Exporting new kernel config" "$DEST/config/$LINUXCONFIG.config" "info"
		run_host_command_logged cp -pv .config "${DEST}/config/${LINUXCONFIG}.config"
		# store it back into the original LINUXCONFIG too, if it came from there, so it shows as pending commits when done.
		[[ "${COPY_CONFIG_BACK_TO}" != "" ]] && run_host_command_logged cp -pv .config "${COPY_CONFIG_BACK_TO}"
		# export defconfig too if requested
		if [[ $KERNEL_EXPORT_DEFCONFIG == yes ]]; then
			run_kernel_make savedefconfig
			[[ -f defconfig ]] && run_host_command_logged cp -pv defconfig "${DEST}/config/${LINUXCONFIG}.defconfig"
		fi
	fi
	call_extension_method "custom_kernel_config_post_defconfig" <<- 'CUSTOM_KERNEL_CONFIG_POST_DEFCONFIG'
		*Kernel .config is in place, already processed by Armbian*
		Called after ${LINUXCONFIG}.config is put in place (.config).
		After olddefconfig or any other Kconfig make target is called.
		A good place to customize the .config last-minute.
	CUSTOM_KERNEL_CONFIG_POST_DEFCONFIG
	# Restore the date of .config. The delta above is a pure function, theoretically.
	set_files_modification_time "${kernel_config_mtime}" ".config"
}
function kernel_package_source() {
[[ "${BUILD_KSRC}" != "yes" ]] && return 0
display_alert "Creating kernel source package" "${LINUXCONFIG}" "info"
local ts=${SECONDS}
local sources_pkg_dir tmp_src_dir tarball_size package_size
tmp_src_dir=$(mktemp -d) # subject to TMPDIR/WORKDIR, so is protected by the single/common error trap manager for clean-up.
sources_pkg_dir="${tmp_src_dir}/${CHOSEN_KSRC}_${REVISION}_all"
mkdir -p "${sources_pkg_dir}"/usr/src/ \
"${sources_pkg_dir}/usr/share/doc/linux-source-${version}-${LINUXFAMILY}" \
"${sources_pkg_dir}"/DEBIAN
run_host_command_logged cp -v "${SRC}/config/kernel/${LINUXCONFIG}.config" "${sources_pkg_dir}/usr/src/${LINUXCONFIG}_${version}_${REVISION}_config"
run_host_command_logged cp -v COPYING "${sources_pkg_dir}/usr/share/doc/linux-source-${version}-${LINUXFAMILY}/LICENSE"
display_alert "Compressing sources for the linux-source package" "exporting from git" "info"
cd "${kernel_work_dir}"
local tar_prefix="${version}/"
local output_tarball="${sources_pkg_dir}/usr/src/linux-source-${version}-${LINUXFAMILY}.tar.zst"
# export tar with `git archive`; we point it at HEAD, but could be anything else too
run_host_command_logged git archive "--prefix=${tar_prefix}" --format=tar HEAD "| zstdmt > '${output_tarball}'"
tarball_size="$(du -h -s "${output_tarball}" | awk '{print $1}')"
cat <<- EOF > "${sources_pkg_dir}"/DEBIAN/control
Package: linux-source-${BRANCH}-${LINUXFAMILY}
Version: ${version}-${BRANCH}-${LINUXFAMILY}+${REVISION}
Architecture: all
Maintainer: ${MAINTAINER} <${MAINTAINERMAIL}>
Section: kernel
Priority: optional
Depends: binutils, coreutils
Provides: linux-source, linux-source-${version}-${LINUXFAMILY}
Recommends: gcc, make
Description: This package provides the source code for the Linux kernel $version
EOF
fakeroot_dpkg_deb_build -Znone -z0 "${sources_pkg_dir}" "${sources_pkg_dir}.deb" # do not compress .deb, it already contains a zstd compressed tarball! ignores ${KDEB_COMPRESS} on purpose
package_size="$(du -h -s "${sources_pkg_dir}.deb" | awk '{print $1}')"
run_host_command_logged rsync --remove-source-files -r "${sources_pkg_dir}.deb" "${DEB_STORAGE}/"
display_alert "$(basename "${sources_pkg_dir}.deb" ".deb") packaged" "$((SECONDS - ts)) seconds, ${tarball_size} tarball, ${package_size} .deb" "info"
}
function kernel_build_and_package() {
local ts=${SECONDS}
cd "${kernel_work_dir}"
local -a build_targets=("all") # "All" builds the vmlinux/Image/Image.gz default for the ${ARCH}
declare kernel_dest_install_dir
kernel_dest_install_dir=$(mktemp -d "${WORKDIR}/kernel.temp.install.target.XXXXXXXXX") # subject to TMPDIR/WORKDIR, so is protected by the single/common error trap manager for clean-up.
# define dict with vars passed and target directories
declare -A kernel_install_dirs=(
["INSTALL_PATH"]="${kernel_dest_install_dir}/image/boot" # Used by `make install`
["INSTALL_MOD_PATH"]="${kernel_dest_install_dir}/modules" # Used by `make modules_install`
#["INSTALL_HDR_PATH"]="${kernel_dest_install_dir}/libc_headers" # Used by `make headers_install` - disabled, only used for libc headers
)
build_targets+=(install modules_install) # headers_install disabled, only used for libc headers
if [[ "${KERNEL_BUILD_DTBS:-yes}" == "yes" ]]; then
display_alert "Kernel build will produce DTBs!" "DTBs YES" "debug"
build_targets+=("dtbs_install")
kernel_install_dirs+=(["INSTALL_DTBS_PATH"]="${kernel_dest_install_dir}/dtbs") # Used by `make dtbs_install`
fi
# loop over the keys above, get the value, create param value in array; also mkdir the dir
declare -a install_make_params_quoted
local dir_key
for dir_key in "${!kernel_install_dirs[@]}"; do
local dir="${kernel_install_dirs["${dir_key}"]}"
local value="${dir_key}=${dir}"
mkdir -p "${dir}"
install_make_params_quoted+=("${value}")
done
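The loop above expands an associative array of install variables into `KEY=VALUE` make parameters. A self-contained sketch of that pattern (the directory values below are invented for illustration, not the build system's):

```shell
#!/usr/bin/env bash
# Dict-to-make-params pattern: each key/value pair in the associative array
# becomes one KEY=VALUE entry in an indexed array, ready to pass to make.
declare -A install_dirs=(
	["INSTALL_PATH"]="/tmp/kernel.install/image/boot"
	["INSTALL_MOD_PATH"]="/tmp/kernel.install/modules"
)
declare -a make_params
for key in "${!install_dirs[@]}"; do
	make_params+=("${key}=${install_dirs[$key]}")
done
echo "${#make_params[@]}" # two entries, one per key
```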
display_alert "Building kernel" "${LINUXCONFIG} ${build_targets[*]}" "info"
fasthash_debug "build"
make_filter="| grep --line-buffered -v -e 'CC' -e 'LD' -e 'AR' -e 'INSTALL' -e 'SIGN' -e 'XZ' " \
do_with_ccache_statistics \
run_kernel_make_long_running "${install_make_params_quoted[@]@Q}" "${build_targets[@]}"
fasthash_debug "build"
cd "${kernel_work_dir}"
prepare_kernel_packaging_debs "${kernel_work_dir}" "${kernel_dest_install_dir}" "${version}" kernel_install_dirs
display_alert "Kernel built and packaged in" "$((SECONDS - ts)) seconds - ${version}-${LINUXFAMILY}" "info"
}
function do_with_ccache_statistics() {
display_alert "Clearing ccache statistics" "ccache" "debug"
ccache --zero-stats
"$@"
display_alert "Display ccache statistics" "ccache" "debug"
run_host_command_logged ccache --show-stats
}


@@ -0,0 +1,68 @@
function report_fashtash_should_execute() {
report_fasthash "$@"
# @TODO: if fasthash only, return 1
return 0
}
function mark_fasthash_done() {
display_alert "mark_fasthash_done" "$*" "fasthash"
return 0
}
function report_fasthash() {
local type="${1}"
local obj="${2}"
local desc="${3}"
display_alert "report_fasthash" "${type}: ${desc}" "fasthash"
return 0
}
function initialize_fasthash() {
display_alert "initialize_fasthash" "$*" "fasthash"
return 0
declare -a fast_hash_list=() # @TODO: declaring here won't do it any good, this is a shared var
}
function fasthash_branch() {
display_alert "fasthash_branch" "$*" "fasthash"
return 0
}
function finish_fasthash() {
display_alert "finish_fasthash" "$*" "fasthash"
return 0
}
function fasthash_debug() {
if [[ "${SHOW_FASTHASH}" != "yes" ]]; then
return 0
fi
display_alert "fasthash_debug" "$*" "fasthash"
run_host_command_logged find . -type f -printf "'%T@ %p\\n'" "|" \
grep -v -e "\.ko" -e "\.o" -e "\.cmd" -e "\.mod" -e "\.a" -e "\.tmp" -e "\.dtb" -e ".scr" -e "\.\/debian" "|" \
sort -n "|" tail -n 10
}
function get_file_modification_time() { # @TODO: This is almost always called from a subshell. No use throwing errors?
local -i file_date
if [[ ! -f "${1}" ]]; then
exit_with_error "Can't get modification time of nonexistent file" "${1}"
return 1
fi
# YYYYMMDDhhmm.ss - it is NOT a valid integer, but is what 'touch' wants for its "-t" parameter
# YYYYMMDDhhmmss - IS a valid integer and we can do math to it. 'touch' code will format it later
file_date=$(date +%Y%m%d%H%M%S -r "${1}")
display_alert "Read modification date for file" "${1} - ${file_date}" "timestamp"
echo -n "${file_date}"
return 0
}
# This is for simple "set without thinking" usage, date preservation is done directly by process_patch_file
function set_files_modification_time() {
local -i mtime="${1}"
local formatted_mtime
shift
display_alert "Setting date ${mtime}" "${*} (no newer check)" "timestamp"
formatted_mtime="${mtime:0:12}.${mtime:12}"
touch --no-create -m -t "${formatted_mtime}" "${@}"
}
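The two helpers above rely on `date`'s `YYYYMMDDhhmmss` output being both a comparable integer and, with a dot spliced back in, valid input for `touch -t` (`YYYYMMDDhhmm.ss`). A self-contained round-trip sketch (the sample timestamp is arbitrary; GNU coreutils assumed):

```shell
#!/usr/bin/env bash
# Round-trip a file mtime through the integer form used above.
set -e
tmp=$(mktemp)
touch -t 202201021314.15 "$tmp"       # set a known mtime
mtime=$(date +%Y%m%d%H%M%S -r "$tmp") # read it back as a plain integer
formatted="${mtime:0:12}.${mtime:12}" # re-insert the dot for touch -t
touch --no-create -m -t "$formatted" "$tmp"
echo "$mtime $formatted"
rm -f "$tmp"
```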


@@ -0,0 +1,41 @@
#
# Linux splash file
#
function apply_kernel_patches_for_bootsplash() {
# disable it.
# todo: cleanup logo generation code and bring in plymouth
# @TODO: rpardini: so, can we completely remove this?
SKIP_BOOTSPLASH=yes
[[ "${SKIP_BOOTSPLASH}" == "yes" ]] && return 0
linux-version compare "${version}" le 5.14 && return 0
display_alert "Adding" "Kernel splash file" "info"
if linux-version compare "${version}" ge 5.19.6 ||
(linux-version compare "${version}" ge 5.15.64 && linux-version compare "${version}" lt 5.16); then
process_patch_file "${SRC}/patch/misc/0001-Revert-fbdev-fbcon-Properly-revert-changes-when-vc_r.patch" "applying"
fi
process_patch_file "${SRC}/patch/misc/bootsplash-5.16.y-0000-Revert-fbcon-Avoid-cap-set-but-not-used-warning.patch" "applying"
process_patch_file "${SRC}/patch/misc/0001-Revert-fbcon-Fix-accelerated-fbdev-scrolling-while-logo-is-still-shown.patch" "applying"
process_patch_file "${SRC}/patch/misc/bootsplash-5.16.y-0001-Revert-fbcon-Add-option-to-enable-legacy-hardware-ac.patch" "applying"
process_patch_file "${SRC}/patch/misc/bootsplash-5.16.y-0002-Revert-vgacon-drop-unused-vga_init_done.patch" "applying"
process_patch_file "${SRC}/patch/misc/bootsplash-5.16.y-0003-Revert-vgacon-remove-software-scrollback-support.patch" "applying"
process_patch_file "${SRC}/patch/misc/bootsplash-5.16.y-0004-Revert-drivers-video-fbcon-fix-NULL-dereference-in-f.patch" "applying"
process_patch_file "${SRC}/patch/misc/bootsplash-5.16.y-0005-Revert-fbcon-remove-no-op-fbcon_set_origin.patch" "applying"
process_patch_file "${SRC}/patch/misc/bootsplash-5.16.y-0006-Revert-fbcon-remove-now-unusued-softback_lines-curso.patch" "applying"
process_patch_file "${SRC}/patch/misc/bootsplash-5.16.y-0007-Revert-fbcon-remove-soft-scrollback-code.patch" "applying"
process_patch_file "${SRC}/patch/misc/0001-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0002-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0003-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0004-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0005-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0006-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0007-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0008-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0009-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0010-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0011-bootsplash.patch" "applying"
process_patch_file "${SRC}/patch/misc/0012-bootsplash.patch" "applying"
}


@@ -1,6 +1,7 @@
#!/usr/bin/env bash

function prepare_extra_kernel_drivers() {
	source "${SRC}/lib/functions/compilation/patch/drivers_network.sh"
	# Packaging patch for modern kernels should be one for all.


@@ -82,69 +82,66 @@ advanced_patch() {
# <status>: additional status text
#
process_patch_file() {
	local patch="${1}"
	local status="${2}"
	local -i patch_date
	local relative_patch="${patch##"${SRC}"/}" # ${FOO##prefix} removes prefix from FOO
	# report_fashtash_should_execute returns true only if we're supposed to apply the patch on disk.
	if report_fashtash_should_execute file "${patch}" "Apply patch ${relative_patch}"; then
		# get the modification date of the patch; clamp it to patch_minimum_target_mtime, if that is set.
		patch_date=$(get_file_modification_time "${patch}")
		# shellcheck disable=SC2154 # patch_minimum_target_mtime can be declared in outer scope
		if [[ "${patch_minimum_target_mtime}" != "" ]]; then
			if [[ ${patch_date} -lt ${patch_minimum_target_mtime} ]]; then
				display_alert "Patch before minimum date" "${patch_date} -lt ${patch_minimum_target_mtime}" "timestamp"
				patch_date=${patch_minimum_target_mtime}
			fi
		fi
		# detect and remove files which the patch will create
		lsdiff -s --strip=1 "${patch}" | grep '^+' | awk '{print $2}' | xargs -I % sh -c 'rm -f %'
		# store an array of the files the patch will add or modify; we set their modification times after the fact
		declare -a patched_files
		mapfile -t patched_files < <(lsdiff -s --strip=1 "${patch}" | grep -e '^+' -e '^!' | awk '{print $2}')
		# @TODO: try patching with `git am` first, so git contains the patch commit info/msg. -- For future git-based hashing.
		# shellcheck disable=SC2015 # noted, thanks. I need to handle the exit code here.
		patch --batch -p1 -N --input="${patch}" --quiet --reject-file=- && { # "-" discards rejects
			# Fix the dates on the patched files
			set_files_modification_time "${patch_date}" "${patched_files[@]}"
			display_alert "* $status ${relative_patch}" "" "info"
		} || {
			display_alert "* $status ${relative_patch}" "failed" "wrn"
			[[ $EXIT_PATCHING_ERROR == yes ]] && exit_with_error "Aborting due to" "EXIT_PATCHING_ERROR"
		}
		mark_fasthash_done # will do git commit, associating the fasthash with the real hash.
	fi
	return 0 # short-circuit above; avoid exiting with an error
}
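Since the `YYYYMMDDhhmmss` form is a plain integer, the minimum-date clamp in `process_patch_file` reduces to an integer comparison. A standalone sketch with invented sample values:

```shell
#!/usr/bin/env bash
# Clamp a patch's mtime to a floor value, as done above; the two sample
# timestamps are arbitrary.
patch_date=20210101120000                 # patch file's mtime, YYYYMMDDhhmmss
patch_minimum_target_mtime=20220101000000 # optional floor
if [[ ${patch_date} -lt ${patch_minimum_target_mtime} ]]; then
	patch_date=${patch_minimum_target_mtime} # never set files older than the floor
fi
echo "${patch_date}"
```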
# apply_patch_series <target dir> <full path to series file>
apply_patch_series() {
	local target_dir="${1}"
	local series_file_full_path="${2}"
	local included_list included_count skip_list skip_count counter=1 base_dir
	base_dir="$(dirname "${series_file_full_path}")"
	included_list="$(awk '$0 !~ /^#.*|^-.*|^$/' "${series_file_full_path}")"
	included_count=$(echo -n "${included_list}" | wc -w)
	skip_list="$(awk '$0 ~ /^-.*/{print $NF}' "${series_file_full_path}")"
	skip_count=$(echo -n "${skip_list}" | wc -w)
	display_alert "Applying a series of" "[${included_count}] patches" "info"
	[[ ${skip_count} -gt 0 ]] && display_alert "Skipping" "[${skip_count}] patches" "warn"
	cd "${target_dir}" || exit 1
	for p in $included_list; do
		process_patch_file "${base_dir}/${p}" "${counter}/${included_count}"
		counter=$((counter + 1))
	done
	display_alert "Done applying patch series" "[${included_count}] patches" "info"
}
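A series file, as parsed above, lists one patch per line; `#` starts a comment, blank lines are ignored, and a leading `-` marks a patch to skip. A sketch of that awk selection against an invented series file:

```shell
#!/usr/bin/env bash
# Demonstrate the include/skip selection used by apply_patch_series;
# the series file contents here are made up for illustration.
set -e
series_file=$(mktemp)
cat > "${series_file}" << 'EOF'
# comment line
first.patch
- skipped.patch
second.patch
EOF
included_list="$(awk '$0 !~ /^#.*|^-.*|^$/' "${series_file}")" # drop comments, skips, blanks
skip_list="$(awk '$0 ~ /^-.*/{print $NF}' "${series_file}")"   # last field of "- name" lines
echo "included: $(echo -n "${included_list}" | wc -w), skipped: $(echo -n "${skip_list}" | wc -w)"
rm -f "${series_file}"
```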
userpatch_create() {
@@ -152,6 +149,7 @@ userpatch_create() {
git add .
git -c user.name='Armbian User' -c user.email='user@example.org' commit -q -m "Cleaning working copy"
mkdir -p "${DEST}/patch"
local patch="$DEST/patch/$1-$LINUXFAMILY-$BRANCH.patch"
# apply previous user debug mode created patches
@@ -178,7 +176,7 @@ userpatch_create() {
read -e -p "Patch description: " -i "$COMMIT_MESSAGE" COMMIT_MESSAGE
[[ -z "$COMMIT_MESSAGE" ]] && COMMIT_MESSAGE="Patching something"
git commit -s -m "$COMMIT_MESSAGE"
git format-patch -1 HEAD --stdout --signature="Created with Armbian build tools https://github.com/armbian/build" > "${patch}"
PATCHFILE=$(git format-patch -1 HEAD)
rm $PATCHFILE # delete the actual file
# create a symlink to have a nice name ready


@@ -1,13 +1,127 @@
#!/usr/bin/env bash
function maybe_make_clean_uboot() {
	if [[ $CLEAN_LEVEL == *make-uboot* ]]; then
		display_alert "${uboot_prefix}Cleaning u-boot tree - CLEAN_LEVEL contains 'make-uboot'" "${BOOTSOURCEDIR}" "info"
		(
			cd "${SRC}/cache/sources/${BOOTSOURCEDIR}" || exit_with_error "crazy about ${BOOTSOURCEDIR}"
			run_host_command_logged make clean
		)
	else
		display_alert "${uboot_prefix}Not cleaning u-boot tree, use CLEAN_LEVEL=make-uboot if needed" "CLEAN_LEVEL=${CLEAN_LEVEL}" "debug"
	fi
}
# this receives version target uboot_name uboottempdir uboot_target_counter toolchain as variables.
function compile_uboot_target() {
local uboot_prefix="{u-boot:${uboot_target_counter}} "
local target_make target_patchdir target_files
target_make=$(cut -d';' -f1 <<< "${target}")
target_patchdir=$(cut -d';' -f2 <<< "${target}")
target_files=$(cut -d';' -f3 <<< "${target}")
# needed for multiple targets and for calling compile_uboot directly
display_alert "${uboot_prefix}Checking out clean sources" "${BOOTSOURCEDIR} for ${target_make}"
git checkout -f -q HEAD # @TODO: this assumes way too much. should call the wrapper again, not directly
maybe_make_clean_uboot
advanced_patch "u-boot" "$BOOTPATCHDIR" "$BOARD" "$target_patchdir" "$BRANCH" "${LINUXFAMILY}-${BOARD}-${BRANCH}"
# create patch for manual source changes
[[ $CREATE_PATCHES == yes ]] && userpatch_create "u-boot"
if [[ -n $ATFSOURCE && -d "${atftempdir}" ]]; then
display_alert "Copying over bins from atftempdir" "${atftempdir}" "debug"
run_host_command_logged cp -Rv "${atftempdir}"/*.bin .
run_host_command_logged cp -Rv "${atftempdir}"/*.elf .
run_host_command_logged rm -rf "${atftempdir}"
fi
display_alert "${uboot_prefix}Preparing u-boot config" "${version} ${target_make}" "info"
export if_error_detail_message="${uboot_prefix}Failed to configure u-boot ${version} $BOOTCONFIG ${target_make}"
run_host_command_logged CCACHE_BASEDIR="$(pwd)" PATH="${toolchain}:${toolchain2}:${PATH}" \
make "$CTHREADS" "$BOOTCONFIG" "CROSS_COMPILE=\"$CCACHE $UBOOT_COMPILER\"" "KCFLAGS=-fdiagnostics-color=always"
# armbian specifics u-boot settings
[[ -f .config ]] && sed -i 's/CONFIG_LOCALVERSION=""/CONFIG_LOCALVERSION="-armbian"/g' .config
[[ -f .config ]] && sed -i 's/CONFIG_LOCALVERSION_AUTO=.*/# CONFIG_LOCALVERSION_AUTO is not set/g' .config
# for modern (? 2018-2019?) kernel and non spi targets
if [[ ${BOOTBRANCH} =~ ^tag:v201[8-9](.*) && ${target} != "spi" && -f .config ]]; then
sed -i 's/^.*CONFIG_ENV_IS_IN_FAT.*/# CONFIG_ENV_IS_IN_FAT is not set/g' .config
sed -i 's/^.*CONFIG_ENV_IS_IN_EXT4.*/CONFIG_ENV_IS_IN_EXT4=y/g' .config
sed -i 's/^.*CONFIG_ENV_IS_IN_MMC.*/# CONFIG_ENV_IS_IN_MMC is not set/g' .config
sed -i 's/^.*CONFIG_ENV_IS_NOWHERE.*/# CONFIG_ENV_IS_NOWHERE is not set/g' .config
echo "# CONFIG_ENV_IS_NOWHERE is not set" >> .config
echo 'CONFIG_ENV_EXT4_INTERFACE="mmc"' >> .config
echo 'CONFIG_ENV_EXT4_DEVICE_AND_PART="0:auto"' >> .config
echo 'CONFIG_ENV_EXT4_FILE="/boot/boot.env"' >> .config
fi
# @TODO: this does not belong here
[[ -f tools/logos/udoo.bmp ]] && cp "${SRC}"/packages/blobs/splash/udoo.bmp tools/logos/udoo.bmp
# @TODO: why?
touch .scmversion
# $BOOTDELAY can be set in board family config, ensure autoboot can be stopped even if set to 0
[[ $BOOTDELAY == 0 ]] && echo -e "CONFIG_ZERO_BOOTDELAY_CHECK=y" >> .config
[[ -n $BOOTDELAY ]] && sed -i "s/^CONFIG_BOOTDELAY=.*/CONFIG_BOOTDELAY=${BOOTDELAY}/" .config || [[ -f .config ]] && echo "CONFIG_BOOTDELAY=${BOOTDELAY}" >> .config
# workaround when two compilers are needed
cross_compile="CROSS_COMPILE=\"$CCACHE $UBOOT_COMPILER\""
[[ -n $UBOOT_TOOLCHAIN2 ]] && cross_compile="ARMBIAN=foe" # empty parameter is not allowed
display_alert "${uboot_prefix}Compiling u-boot" "${version} ${target_make}" "info"
export if_error_detail_message="${uboot_prefix}Failed to build u-boot ${version} ${target_make}"
KCFLAGS="-fdiagnostics-color=always -Wno-error=maybe-uninitialized -Wno-error=misleading-indentation" \
run_host_command_logged_long_running CCACHE_BASEDIR="$(pwd)" PATH="${toolchain}:${toolchain2}:${PATH}" \
make "$target_make" "$CTHREADS" "${cross_compile}"
if [[ $(type -t uboot_custom_postprocess) == function ]]; then
display_alert "${uboot_prefix}Postprocessing u-boot" "${version} ${target_make}"
uboot_custom_postprocess
fi
display_alert "${uboot_prefix}Preparing u-boot targets packaging" "${version} ${target_make}"
# copy files to build directory
for f in $target_files; do
local f_src
f_src=$(cut -d':' -f1 <<< "${f}")
if [[ $f == *:* ]]; then
local f_dst
f_dst=$(cut -d':' -f2 <<< "${f}")
else
local f_dst
f_dst=$(basename "${f_src}")
fi
display_alert "${uboot_prefix}Deploying u-boot binary target" "${version} ${target_make} :: ${f_dst}"
[[ ! -f $f_src ]] && exit_with_error "U-boot artifact not found" "$(basename "${f_src}")"
run_host_command_logged cp -v "${f_src}" "${uboottempdir}/${uboot_name}/usr/lib/${uboot_name}/${f_dst}"
#display_alert "Done with binary target" "${version} ${target_make} :: ${f_dst}"
done
display_alert "${uboot_prefix}Done with u-boot target" "${version} ${target_make}"
return 0
}
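Each `target_files` entry is `source[:destination]`; when the colon is absent, the destination defaults to the source's basename. A standalone sketch of that parsing (the sample entries are invented):

```shell
#!/usr/bin/env bash
# Parse one target_files entry the way compile_uboot_target does.
parse_target_file() {
	local f="${1}" f_src f_dst
	f_src=$(cut -d':' -f1 <<< "${f}")
	if [[ $f == *:* ]]; then
		f_dst=$(cut -d':' -f2 <<< "${f}") # explicit destination after the colon
	else
		f_dst=$(basename "${f_src}") # default: basename of the source
	fi
	echo "${f_src} -> ${f_dst}"
}
parse_target_file "spl/u-boot-spl.bin:u-boot-spl.bin"
parse_target_file "u-boot.bin"
```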
compile_uboot() {
if [[ -n $BOOTSOURCE ]] && [[ "${BOOTSOURCE}" != "none" ]]; then
display_alert "Downloading sources" "u-boot" "git"
GIT_SKIP_SUBMODULES="${UBOOT_GIT_SKIP_SUBMODULES}" fetch_from_repo "$BOOTSOURCE" "$BOOTDIR" "$BOOTBRANCH" "yes" # fetch_from_repo <url> <dir> <ref> <subdir_flag>
display_alert "Extensions: fetch custom uboot" "fetch_custom_uboot" "debug"
call_extension_method "fetch_custom_uboot" <<- 'FETCH_CUSTOM_UBOOT'
*allow extensions to fetch extra uboot sources*
For downstream uboot et al.
This is done after `GIT_SKIP_SUBMODULES="${UBOOT_GIT_SKIP_SUBMODULES}" fetch_from_repo "$BOOTSOURCE" "$BOOTDIR" "$BOOTBRANCH" "yes"`
FETCH_CUSTOM_UBOOT
fi
# not optimal, but extra cleaning before overlayfs_wrapper should keep sources directory clean
maybe_make_clean_uboot
	if [[ $USE_OVERLAYFS == yes ]]; then
		local ubootdir
@@ -20,13 +134,12 @@ compile_uboot() {
	# read uboot version
	local version hash
	version=$(grab_version "$ubootdir")
-	hash=$(improved_git --git-dir="$ubootdir"/.git rev-parse HEAD)
-	display_alert "Compiling u-boot" "$version" "info"
+	hash=$(git --git-dir="$ubootdir"/.git rev-parse HEAD)
+	display_alert "Compiling u-boot" "$version ${ubootdir}" "info"
	# build aarch64
	if [[ $(dpkg --print-architecture) == amd64 ]]; then
		local toolchain
		toolchain=$(find_toolchain "$UBOOT_COMPILER" "$UBOOT_USE_GCC")
		[[ -z $toolchain ]] && exit_with_error "Could not find required toolchain" "${UBOOT_COMPILER}gcc $UBOOT_USE_GCC"
@@ -38,117 +151,49 @@ compile_uboot() {
		toolchain2=$(find_toolchain "$toolchain2_type" "$toolchain2_ver")
		[[ -z $toolchain2 ]] && exit_with_error "Could not find required toolchain" "${toolchain2_type}gcc $toolchain2_ver"
	fi
	# build aarch64
	fi
	display_alert "Compiler version" "${UBOOT_COMPILER}gcc $(eval env PATH="${toolchain}:${toolchain2}:${PATH}" "${UBOOT_COMPILER}gcc" -dumpversion)" "info"
	[[ -n $toolchain2 ]] && display_alert "Additional compiler version" "${toolchain2_type}gcc $(eval env PATH="${toolchain}:${toolchain2}:${PATH}" "${toolchain2_type}gcc" -dumpversion)" "info"
+	local uboot_name="${CHOSEN_UBOOT}_${REVISION}_${ARCH}"
	# create directory structure for the .deb package
-	uboottempdir=$(mktemp -d)
-	chmod 700 ${uboottempdir}
-	trap "ret=\$?; rm -rf \"${uboottempdir}\" ; exit \$ret" 0 1 2 3 15
-	local uboot_name=${CHOSEN_UBOOT}_${REVISION}_${ARCH}
-	rm -rf $uboottempdir/$uboot_name
-	mkdir -p $uboottempdir/$uboot_name/usr/lib/{u-boot,$uboot_name} $uboottempdir/$uboot_name/DEBIAN
+	uboottempdir="$(mktemp -d)" # subject to TMPDIR/WORKDIR, so is protected by single/common error trap manager to clean up.
+	chmod 700 "${uboottempdir}"
+	mkdir -p "$uboottempdir/$uboot_name/usr/lib/u-boot" "$uboottempdir/$uboot_name/usr/lib/$uboot_name" "$uboottempdir/$uboot_name/DEBIAN"
-	# process compilation for one or multiple targets
-	while read -r target; do
-		local target_make target_patchdir target_files
-		target_make=$(cut -d';' -f1 <<< "${target}")
-		target_patchdir=$(cut -d';' -f2 <<< "${target}")
-		target_files=$(cut -d';' -f3 <<< "${target}")
-		# needed for multiple targets and for calling compile_uboot directly
-		display_alert "Checking out to clean sources"
-		improved_git checkout -f -q HEAD
-		if [[ $CLEAN_LEVEL == *make* ]]; then
-			display_alert "Cleaning" "$BOOTSOURCEDIR" "info"
-			(
-				cd "${SRC}/cache/sources/${BOOTSOURCEDIR}"
-				make clean > /dev/null 2>&1
-			)
-		fi
-		advanced_patch "u-boot" "$BOOTPATCHDIR" "$BOARD" "$target_patchdir" "$BRANCH" "${LINUXFAMILY}-${BOARD}-${BRANCH}"
-		# create patch for manual source changes
-		[[ $CREATE_PATCHES == yes ]] && userpatch_create "u-boot"
-		if [[ -n $ATFSOURCE ]]; then
-			cp -Rv "${atftempdir}"/*.bin . 2> /dev/null ||
-				cp -Rv "${atftempdir}"/*.elf . 2> /dev/null
-			[[ $? -ne 0 ]] && exit_with_error "ATF binary not found"
-			rm -rf "${atftempdir}"
-		fi
-		echo -e "\n\t== u-boot make $BOOTCONFIG ==\n" >> "${DEST}"/${LOG_SUBPATH}/compilation.log
-		eval CCACHE_BASEDIR="$(pwd)" env PATH="${toolchain}:${toolchain2}:${PATH}" \
-			'make $CTHREADS $BOOTCONFIG \
-			CROSS_COMPILE="$CCACHE $UBOOT_COMPILER"' \
-			${PROGRESS_LOG_TO_FILE:+' | tee -a $DEST/${LOG_SUBPATH}/compilation.log'} \
-			${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} 2>> "${DEST}"/${LOG_SUBPATH}/compilation.log
-		# armbian specifics u-boot settings
-		[[ -f .config ]] && sed -i 's/CONFIG_LOCALVERSION=""/CONFIG_LOCALVERSION="-armbian"/g' .config
-		[[ -f .config ]] && sed -i 's/CONFIG_LOCALVERSION_AUTO=.*/# CONFIG_LOCALVERSION_AUTO is not set/g' .config
-		# for modern kernel and non spi targets
-		if [[ ${BOOTBRANCH} =~ ^tag:v201[8-9](.*) && ${target} != "spi" && -f .config ]]; then
-			sed -i 's/^.*CONFIG_ENV_IS_IN_FAT.*/# CONFIG_ENV_IS_IN_FAT is not set/g' .config
-			sed -i 's/^.*CONFIG_ENV_IS_IN_EXT4.*/CONFIG_ENV_IS_IN_EXT4=y/g' .config
-			sed -i 's/^.*CONFIG_ENV_IS_IN_MMC.*/# CONFIG_ENV_IS_IN_MMC is not set/g' .config
-			sed -i 's/^.*CONFIG_ENV_IS_NOWHERE.*/# CONFIG_ENV_IS_NOWHERE is not set/g' .config | echo \
-				"# CONFIG_ENV_IS_NOWHERE is not set" >> .config
-			echo 'CONFIG_ENV_EXT4_INTERFACE="mmc"' >> .config
-			echo 'CONFIG_ENV_EXT4_DEVICE_AND_PART="0:auto"' >> .config
-			echo 'CONFIG_ENV_EXT4_FILE="/boot/boot.env"' >> .config
-		fi
-		[[ -f tools/logos/udoo.bmp ]] && cp "${SRC}"/packages/blobs/splash/udoo.bmp tools/logos/udoo.bmp
-		touch .scmversion
-		# $BOOTDELAY can be set in board family config, ensure autoboot can be stopped even if set to 0
-		[[ $BOOTDELAY == 0 ]] && echo -e "CONFIG_ZERO_BOOTDELAY_CHECK=y" >> .config
-		[[ -n $BOOTDELAY ]] && sed -i "s/^CONFIG_BOOTDELAY=.*/CONFIG_BOOTDELAY=${BOOTDELAY}/" .config || [[ -f .config ]] && echo "CONFIG_BOOTDELAY=${BOOTDELAY}" >> .config
-		# workaround when two compilers are needed
-		cross_compile="CROSS_COMPILE=$CCACHE $UBOOT_COMPILER"
-		[[ -n $UBOOT_TOOLCHAIN2 ]] && cross_compile="ARMBIAN=foe" # empty parameter is not allowed
-		echo -e "\n\t== u-boot make $target_make ==\n" >> "${DEST}"/${LOG_SUBPATH}/compilation.log
-		eval CCACHE_BASEDIR="$(pwd)" env PATH="${toolchain}:${toolchain2}:${PATH}" \
-			'make $target_make $CTHREADS \
-			"${cross_compile}"' \
-			${PROGRESS_LOG_TO_FILE:+' | tee -a "${DEST}"/${LOG_SUBPATH}/compilation.log'} \
-			${OUTPUT_DIALOG:+' | dialog --backtitle "$backtitle" --progressbox "Compiling u-boot..." $TTY_Y $TTY_X'} \
-			${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} ';EVALPIPE=(${PIPESTATUS[@]})' 2>> "${DEST}"/${LOG_SUBPATH}/compilation.log
-		[[ ${EVALPIPE[0]} -ne 0 ]] && exit_with_error "U-boot compilation failed"
-		[[ $(type -t uboot_custom_postprocess) == function ]] && uboot_custom_postprocess
-		# copy files to build directory
-		for f in $target_files; do
-			local f_src
-			f_src=$(cut -d':' -f1 <<< "${f}")
-			if [[ $f == *:* ]]; then
-				local f_dst
-				f_dst=$(cut -d':' -f2 <<< "${f}")
-			else
-				local f_dst
-				f_dst=$(basename "${f_src}")
-			fi
-			[[ ! -f $f_src ]] && exit_with_error "U-boot file not found" "$(basename "${f_src}")"
-			cp "${f_src}" "$uboottempdir/${uboot_name}/usr/lib/${uboot_name}/${f_dst}"
-		done
-	done <<< "$UBOOT_TARGET_MAP"
+	# Allow extension-based u-boot building. We call the hook, and if EXTENSION_BUILT_UBOOT="yes" afterwards, we skip our own compilation.
+	# This is to make it easy to build vendor/downstream uboot with their own quirks.
+	display_alert "Extensions: build custom uboot" "build_custom_uboot" "debug"
+	call_extension_method "build_custom_uboot" <<- 'BUILD_CUSTOM_UBOOT'
+		*allow extensions to build their own uboot*
+		For downstream uboot et al.
+		Set \`EXTENSION_BUILT_UBOOT=yes\` to then skip the normal compilation.
+	BUILD_CUSTOM_UBOOT
+	if [[ "${EXTENSION_BUILT_UBOOT}" != "yes" ]]; then
+		# Try very hard, to fault even, to avoid using subshells while reading a newline-delimited string.
+		# Sorry for the juggling with IFS.
+		local _old_ifs="${IFS}" _new_ifs=$'\n' uboot_target_counter=1
+		IFS="${_new_ifs}" # split on newlines only
+		for target in ${UBOOT_TARGET_MAP}; do
+			IFS="${_old_ifs}" # restore for the body of loop
+			export target uboot_name uboottempdir toolchain version uboot_target_counter
+			compile_uboot_target
+			uboot_target_counter=$((uboot_target_counter + 1))
+			IFS="${_new_ifs}" # split on newlines only for rest of loop
+		done
+		IFS="${_old_ifs}"
+	else
+		display_alert "Extensions: custom uboot built by extension" "not building regular uboot" "debug"
+	fi
+	display_alert "Preparing u-boot general packaging. all_worked:${all_worked} any_worked:${any_worked}" "${version} ${target_make}"
	# set up postinstall script
+	# @todo: extract into a tinkerboard extension
	if [[ $BOARD == tinkerboard ]]; then
		cat <<- EOF > "$uboottempdir/${uboot_name}/DEBIAN/postinst"
			#!/bin/bash
@@ -173,12 +218,12 @@ compile_uboot() {
		chmod 755 "$uboottempdir/${uboot_name}/DEBIAN/postinst"
	fi
-	# declare -f on non-defined function does not do anything
+	# declare -f on non-defined function does not do anything (but exits with errors, so ignore them with "|| true")
	cat <<- EOF > "$uboottempdir/${uboot_name}/usr/lib/u-boot/platform_install.sh"
		DIR=/usr/lib/$uboot_name
-		$(declare -f write_uboot_platform)
-		$(declare -f write_uboot_platform_mtd)
-		$(declare -f setup_write_uboot_platform)
+		$(declare -f write_uboot_platform || true)
+		$(declare -f write_uboot_platform_mtd || true)
+		$(declare -f setup_write_uboot_platform || true)
	EOF
	# set up control file
@@ -198,19 +243,21 @@ compile_uboot() {
	# copy config file to the package
	# useful for FEL boot with overlayfs_wrapper
-	[[ -f .config && -n $BOOTCONFIG ]] && cp .config "$uboottempdir/${uboot_name}/usr/lib/u-boot/${BOOTCONFIG}"
+	[[ -f .config && -n $BOOTCONFIG ]] && cp .config "$uboottempdir/${uboot_name}/usr/lib/u-boot/${BOOTCONFIG}" 2>&1
	# copy license files from typical locations
-	[[ -f COPYING ]] && cp COPYING "$uboottempdir/${uboot_name}/usr/lib/u-boot/LICENSE"
-	[[ -f Licenses/README ]] && cp Licenses/README "$uboottempdir/${uboot_name}/usr/lib/u-boot/LICENSE"
-	[[ -n $atftempdir && -f $atftempdir/license.md ]] && cp "${atftempdir}/license.md" "$uboottempdir/${uboot_name}/usr/lib/u-boot/LICENSE.atf"
-	display_alert "Building deb" "${uboot_name}.deb" "info"
-	fakeroot dpkg-deb -b -Z${DEB_COMPRESS} "$uboottempdir/${uboot_name}" "$uboottempdir/${uboot_name}.deb" >> "${DEST}"/${LOG_SUBPATH}/output.log 2>&1
+	[[ -f COPYING ]] && cp COPYING "$uboottempdir/${uboot_name}/usr/lib/u-boot/LICENSE" 2>&1
+	[[ -f Licenses/README ]] && cp Licenses/README "$uboottempdir/${uboot_name}/usr/lib/u-boot/LICENSE" 2>&1
+	[[ -n $atftempdir && -f $atftempdir/license.md ]] && cp "${atftempdir}/license.md" "$uboottempdir/${uboot_name}/usr/lib/u-boot/LICENSE.atf" 2>&1
+	display_alert "Building u-boot deb" "${uboot_name}.deb"
+	fakeroot_dpkg_deb_build "$uboottempdir/${uboot_name}" "$uboottempdir/${uboot_name}.deb"
	rm -rf "$uboottempdir/${uboot_name}"
	[[ -n $atftempdir ]] && rm -rf "${atftempdir}"
	[[ ! -f $uboottempdir/${uboot_name}.deb ]] && exit_with_error "Building u-boot package failed"
-	rsync --remove-source-files -rq "$uboottempdir/${uboot_name}.deb" "${DEB_STORAGE}/"
-	rm -rf "$uboottempdir"
+	run_host_command_logged rsync --remove-source-files -r "$uboottempdir/${uboot_name}.deb" "${DEB_STORAGE}/"
+	display_alert "Built u-boot deb OK" "${uboot_name}.deb" "info"
+	return 0 # success
}
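The target loop above avoids a `while read` pipeline (whose body would run in a subshell, losing counter and variable changes) by temporarily setting IFS to a newline. The same technique in isolation, with a made-up target map:

```shell
#!/usr/bin/env bash
# Iterate a newline-delimited, semicolon-separated "target map" without a
# subshell, so variables modified in the loop body survive it.
# The map contents below are invented for the demo.
target_map=$'u-boot-sunxi;;u-boot.bin\nu-boot-spi;spi;u-boot.itb'
counter=0
old_ifs="${IFS}"
IFS=$'\n' # split the unquoted expansion on newlines only
for target in ${target_map}; do
	IFS="${old_ifs}" # restore normal word splitting for the loop body
	target_make="$(cut -d';' -f1 <<< "${target}")"
	target_files="$(cut -d';' -f3 <<< "${target}")"
	echo "target ${counter}: make=${target_make} files=${target_files}"
	counter=$((counter + 1))
	IFS=$'\n' # back to newline splitting for the next iteration
done
IFS="${old_ifs}"
echo "processed ${counter} targets"
```

Because the loop runs in the current shell, `counter` really is 2 afterwards; with `printf '%s' "${target_map}" | while read -r target; do ...` it would still be 0.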


@@ -1,11 +1,37 @@
#!/usr/bin/env bash
+#
+# Copyright (c) 2013-2021 Igor Pecovnik, igor.pecovnik@gma**.com
+#
+# This file is licensed under the terms of the GNU General Public
+# License version 2. This program is licensed "as is" without any
+# warranty of any kind, whether express or implied.
+#
+# This file is a part of the Armbian build script
+# https://github.com/armbian/build/
+# Functions:
+# compile_atf
+# compile_uboot
+# compile_kernel
+# compile_firmware
+# compile_armbian-config
+# compile_xilinx_bootgen
+# grab_version
+# find_toolchain
+# advanced_patch
+# process_patch_file
+# userpatch_create
+# overlayfs_wrapper
grab_version() {
	local ver=()
-	ver[0]=$(grep "^VERSION" "${1}"/Makefile | head -1 | awk '{print $(NF)}' | grep -oE '^[[:digit:]]+')
-	ver[1]=$(grep "^PATCHLEVEL" "${1}"/Makefile | head -1 | awk '{print $(NF)}' | grep -oE '^[[:digit:]]+')
-	ver[2]=$(grep "^SUBLEVEL" "${1}"/Makefile | head -1 | awk '{print $(NF)}' | grep -oE '^[[:digit:]]+')
-	ver[3]=$(grep "^EXTRAVERSION" "${1}"/Makefile | head -1 | awk '{print $(NF)}' | grep -oE '^-rc[[:digit:]]+')
+	ver[0]=$(grep "^VERSION" "${1}"/Makefile | head -1 | awk '{print $(NF)}' | grep -oE '^[[:digit:]]+' || true)
+	ver[1]=$(grep "^PATCHLEVEL" "${1}"/Makefile | head -1 | awk '{print $(NF)}' | grep -oE '^[[:digit:]]+' || true)
+	ver[2]=$(grep "^SUBLEVEL" "${1}"/Makefile | head -1 | awk '{print $(NF)}' | grep -oE '^[[:digit:]]+' || true)
+	ver[3]=$(grep "^EXTRAVERSION" "${1}"/Makefile | head -1 | awk '{print $(NF)}' | grep -oE '^-rc[[:digit:]]+' || true)
	echo "${ver[0]:-0}${ver[1]:+.${ver[1]}}${ver[2]:+.${ver[2]}}${ver[3]}"
+	return 0
}
# find_toolchain <compiler_prefix> <expression>
@@ -44,16 +70,7 @@ find_toolchain() {
		fi
	done
	echo "$toolchain"
-	# logging a stack of used compilers.
-	if [[ -f "${DEST}"/${LOG_SUBPATH}/compiler.log ]]; then
-		if ! grep -q "$toolchain" "${DEST}"/${LOG_SUBPATH}/compiler.log; then
-			echo "$toolchain" >> "${DEST}"/${LOG_SUBPATH}/compiler.log
-		fi
-	else
-		echo "$toolchain" >> "${DEST}"/${LOG_SUBPATH}/compiler.log
-	fi
}
# overlayfs_wrapper <operation> <workdir> <description>
#
# <operation>: wrap|cleanup
@@ -75,7 +92,7 @@ overlayfs_wrapper() {
	local description="$3"
	mkdir -p /tmp/overlay_components/ /tmp/armbian_build/
	local tempdir workdir mergeddir
-	tempdir=$(mktemp -d --tmpdir="/tmp/overlay_components/")
+	tempdir=$(mktemp -d --tmpdir="/tmp/overlay_components/") # @TODO: WORKDIR? otherwise uses host's root disk, which might be small
	workdir=$(mktemp -d --tmpdir="/tmp/overlay_components/")
	mergeddir=$(mktemp -d --suffix="_$description" --tmpdir="/tmp/armbian_build/")
	mount -t overlay overlay -o lowerdir="$srcdir",upperdir="$tempdir",workdir="$workdir" "$mergeddir"
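`grab_version`, defined earlier in this file, stitches the `VERSION`/`PATCHLEVEL`/`SUBLEVEL`/`EXTRAVERSION` fields of a kernel- or u-boot-style Makefile into one string. A quick standalone check of the same pipeline against a synthetic Makefile (the values are invented):

```shell
#!/usr/bin/env bash
# Exercise grab_version-style parsing against a synthetic top-level Makefile.
dir="$(mktemp -d)"
cat > "${dir}/Makefile" << 'EOF'
VERSION = 5
PATCHLEVEL = 15
SUBLEVEL = 0
EXTRAVERSION = -rc3
NAME = Trick or Treat
EOF
# Same field-by-field extraction as grab_version; '|| true' keeps empty
# matches from failing under 'set -e'.
ver0=$(grep "^VERSION" "${dir}/Makefile" | head -1 | awk '{print $(NF)}' | grep -oE '^[[:digit:]]+' || true)
ver1=$(grep "^PATCHLEVEL" "${dir}/Makefile" | head -1 | awk '{print $(NF)}' | grep -oE '^[[:digit:]]+' || true)
ver2=$(grep "^SUBLEVEL" "${dir}/Makefile" | head -1 | awk '{print $(NF)}' | grep -oE '^[[:digit:]]+' || true)
ver3=$(grep "^EXTRAVERSION" "${dir}/Makefile" | head -1 | awk '{print $(NF)}' | grep -oE '^-rc[[:digit:]]+' || true)
version="${ver0:-0}${ver1:+.${ver1}}${ver2:+.${ver2}}${ver3}"
echo "${version}" # → 5.15.0-rc3
rm -rf "${dir}"
```

The `${ver1:+.${ver1}}` expansions only emit the dot when the field was found, so a Makefile with just `VERSION = 2022` still yields a sane `2022`.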


@@ -6,21 +6,12 @@
# Write to variables :
# - aggregated_content
aggregate_content() {
-	LOG_OUTPUT_FILE="$SRC/output/${LOG_SUBPATH}/potential-paths.log"
-	echo -e "Potential paths :" >> "${LOG_OUTPUT_FILE}"
-	show_checklist_variables potential_paths
	for filepath in ${potential_paths}; do
		if [[ -f "${filepath}" ]]; then
-			echo -e "${filepath/"$SRC"\//} yes" >> "${LOG_OUTPUT_FILE}"
			aggregated_content+=$(cat "${filepath}")
			aggregated_content+="${separator}"
-		# else
-		#	echo -e "${filepath/"$SRC"\//} no\n" >> "${LOG_OUTPUT_FILE}"
		fi
	done
-	echo "" >> "${LOG_OUTPUT_FILE}"
-	unset LOG_OUTPUT_FILE
}
get_all_potential_paths() {
@@ -35,14 +26,6 @@ get_all_potential_paths() {
			done
		done
	done
-	# for ppath in ${potential_paths}; do
-	#	echo "Checking for ${ppath}"
-	#	if [[ -f "${ppath}" ]]; then
-	#		echo "OK !|"
-	#	else
-	#		echo "Nope|"
-	#	fi
-	# done
}
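`aggregate_content`, above, simply concatenates the contents of every path from `potential_paths` that actually exists, separated by `separator`. The same loop as a self-contained sketch over throwaway files:

```shell
#!/usr/bin/env bash
# Concatenate every existing file from a space-separated path list, inserting
# a separator after each one; missing paths are silently skipped
# (mirrors the aggregate_content loop).
workdir="$(mktemp -d)"
echo "alpha" > "${workdir}/a.txt"
echo "bravo" > "${workdir}/b.txt"
potential_paths="${workdir}/a.txt ${workdir}/missing.txt ${workdir}/b.txt"
separator=$'\n'
aggregated_content=""
for filepath in ${potential_paths}; do
	if [[ -f "${filepath}" ]]; then
		aggregated_content+=$(cat "${filepath}") # $() strips trailing newlines
		aggregated_content+="${separator}"
	fi
done
printf '%s' "${aggregated_content}"
rm -rf "${workdir}"
```

Note that command substitution strips trailing newlines from each file, which is why the explicit `separator` append is needed at all.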
# Environment variables expected : # Environment variables expected :


@@ -1,65 +1,61 @@
#!/usr/bin/env bash
-desktop_element_available_for_arch() {
+function desktop_element_available_for_arch() {
	local desktop_element_path="${1}"
	local targeted_arch="${2}"
	local arch_limitation_file="${1}/only_for"
-	echo "Checking if ${desktop_element_path} is available for ${targeted_arch} in ${arch_limitation_file}" >> "${DEST}"/${LOG_SUBPATH}/output.log
	if [[ -f "${arch_limitation_file}" ]]; then
-		grep -- "${targeted_arch}" "${arch_limitation_file}"
-		return $?
-	else
-		return 0
+		if ! grep -- "${targeted_arch}" "${arch_limitation_file}" &> /dev/null; then
+			return 1
+		fi
	fi
+	return 0
}
-desktop_element_supported() {
+function desktop_element_supported() {
	local desktop_element_path="${1}"
	local support_level_filepath="${desktop_element_path}/support"
+	export desktop_element_supported_result=0
	if [[ -f "${support_level_filepath}" ]]; then
		local support_level="$(cat "${support_level_filepath}")"
		if [[ "${support_level}" != "supported" && "${EXPERT}" != "yes" ]]; then
-			return 65
+			desktop_element_supported_result=65
+			return 0
		fi
-		desktop_element_available_for_arch "${desktop_element_path}" "${ARCH}"
-		if [[ $? -ne 0 ]]; then
-			return 66
+		if ! desktop_element_available_for_arch "${desktop_element_path}" "${ARCH}"; then
+			desktop_element_supported_result=66
+			return 0
		fi
	else
-		return 64
+		desktop_element_supported_result=64
+		return 0
	fi
	return 0
}
-desktop_environments_prepare_menu() {
+function desktop_environments_prepare_menu() {
	for desktop_env_dir in "${DESKTOP_CONFIGS_DIR}/"*; do
-		local desktop_env_name=$(basename ${desktop_env_dir})
-		local expert_infos=""
+		local desktop_env_name expert_infos="" desktop_element_supported_result=0
+		desktop_env_name="$(basename "${desktop_env_dir}")"
		[[ "${EXPERT}" == "yes" ]] && expert_infos="[$(cat "${desktop_env_dir}/support" 2> /dev/null)]"
-		desktop_element_supported "${desktop_env_dir}" "${ARCH}" && options+=("${desktop_env_name}" "${desktop_env_name^} desktop environment ${expert_infos}")
+		desktop_element_supported "${desktop_env_dir}" "${ARCH}"
+		[[ ${desktop_element_supported_result} == 0 ]] && options+=("${desktop_env_name}" "${desktop_env_name^} desktop environment ${expert_infos}")
	done
+	return 0
}
-desktop_environment_check_if_valid() {
-	local error_msg=""
+function desktop_environment_check_if_valid() {
+	local error_msg="" desktop_element_supported_result=0
	desktop_element_supported "${DESKTOP_ENVIRONMENT_DIRPATH}" "${ARCH}"
-	local retval=$?
-	if [[ ${retval} == 0 ]]; then
+	if [[ ${desktop_element_supported_result} == 0 ]]; then
		return
-	elif [[ ${retval} == 64 ]]; then
+	elif [[ ${desktop_element_supported_result} == 64 ]]; then
		error_msg+="Either the desktop environment ${DESKTOP_ENVIRONMENT} does not exist "
		error_msg+="or the file ${DESKTOP_ENVIRONMENT_DIRPATH}/support is missing"
-	elif [[ ${retval} == 65 ]]; then
+	elif [[ ${desktop_element_supported_result} == 65 ]]; then
		error_msg+="Only experts can build an image with the desktop environment \"${DESKTOP_ENVIRONMENT}\", since the Armbian team won't offer any support for it (EXPERT=${EXPERT})"
-	elif [[ ${retval} == 66 ]]; then
+	elif [[ ${desktop_element_supported_result} == 66 ]]; then
		error_msg+="The desktop environment \"${DESKTOP_ENVIRONMENT}\" has no packages for your targeted board architecture (BOARD=${BOARD} ARCH=${ARCH}). "
		error_msg+="The supported boards architectures are : "
		error_msg+="$(cat "${DESKTOP_ENVIRONMENT_DIRPATH}/only_for")"
@@ -72,6 +68,7 @@ desktop_environment_check_if_valid() {
}
function interactive_desktop_main_configuration() {
+	[[ $BUILD_DESKTOP != "yes" ]] && return 0 # Only for desktops.
	# Myy : Once we got a list of selected groups, parse the PACKAGE_LIST inside configuration.sh
	DESKTOP_ELEMENTS_DIR="${SRC}/config/desktop/${RELEASE}"
@@ -79,43 +76,36 @@ function interactive_desktop_main_configuration() {
	DESKTOP_CONFIG_PREFIX="config_"
	DESKTOP_APPGROUPS_DIR="${DESKTOP_ELEMENTS_DIR}/appgroups"
-	if [[ $BUILD_DESKTOP == "yes" && -z $DESKTOP_ENVIRONMENT ]]; then
+	display_alert "desktop-config" "DESKTOP_ENVIRONMENT entry: ${DESKTOP_ENVIRONMENT}" "debug"
+	if [[ -z $DESKTOP_ENVIRONMENT ]]; then
		options=()
		desktop_environments_prepare_menu
		if [[ "${options[0]}" == "" ]]; then
			exit_with_error "No desktop environment seems to be available for your board ${BOARD} (ARCH : ${ARCH} - EXPERT : ${EXPERT})"
		fi
-		DESKTOP_ENVIRONMENT=$(show_menu "Choose a desktop environment" "$backtitle" "Select the default desktop environment to bundle with this image" "${options[@]}")
+		display_alert "Desktops available" "${options[*]}" "debug"
+		dialog_menu "Choose a desktop environment" "$backtitle" "Select the default desktop environment to bundle with this image" "${options[@]}"
+		DESKTOP_ENVIRONMENT="${DIALOG_MENU_RESULT}"
		unset options
		if [[ -z "${DESKTOP_ENVIRONMENT}" ]]; then
			exit_with_error "No desktop environment selected..."
		fi
	fi
-	if [[ $BUILD_DESKTOP == "yes" ]]; then
-		# Expected environment variables :
-		# - options
-		# - ARCH
-		DESKTOP_ENVIRONMENT_DIRPATH="${DESKTOP_CONFIGS_DIR}/${DESKTOP_ENVIRONMENT}"
-		desktop_environment_check_if_valid
-	fi
-	if [[ $BUILD_DESKTOP == "yes" && -z $DESKTOP_ENVIRONMENT_CONFIG_NAME ]]; then
-		# FIXME Check for empty folders, just in case the current maintainer
-		# messed up
-		# Note, we could also ignore it and don't show anything in the previous
-		# menu, but that hides information and make debugging harder, which I
-		# don't like. Adding desktop environments as a maintainer is not a
-		# trivial nor common task.
+	display_alert "desktop-config" "DESKTOP_ENVIRONMENT selected: ${DESKTOP_ENVIRONMENT}" "debug"
+	DESKTOP_ENVIRONMENT_DIRPATH="${DESKTOP_CONFIGS_DIR}/${DESKTOP_ENVIRONMENT}"
+	desktop_environment_check_if_valid # Make sure desktop config is sane.
+	display_alert "desktop-config" "DESKTOP_ENVIRONMENT_CONFIG_NAME entry: ${DESKTOP_ENVIRONMENT_CONFIG_NAME}" "debug"
+	if [[ -z $DESKTOP_ENVIRONMENT_CONFIG_NAME ]]; then
+		# @FIXME: Myy: Check for empty folders, just in case the current maintainer messed up
+		# Note, we could also ignore it and don't show anything in the previous menu, but that hides information and make debugging harder, which I
+		# don't like. Adding desktop environments as a maintainer is not a trivial nor common task.
		options=()
		for configuration in "${DESKTOP_ENVIRONMENT_DIRPATH}/${DESKTOP_CONFIG_PREFIX}"*; do
			config_filename=$(basename ${configuration})
@@ -123,39 +113,33 @@ function interactive_desktop_main_configuration() {
			options+=("${config_filename}" "${config_name} configuration")
		done
-		DESKTOP_ENVIRONMENT_CONFIG_NAME=$(show_menu "Choose the desktop environment config" "$backtitle" "Select the configuration for this environment.\nThese are sourced from ${desktop_environment_config_dir}" "${options[@]}")
+		dialog_menu "Choose the desktop environment config" "$backtitle" "Select the configuration for this environment.\nThese are sourced from ${desktop_environment_config_dir}" "${options[@]}"
+		DESKTOP_ENVIRONMENT_CONFIG_NAME="${DIALOG_MENU_RESULT}"
		unset options
		if [[ -z $DESKTOP_ENVIRONMENT_CONFIG_NAME ]]; then
			exit_with_error "No desktop configuration selected... Do you really want a desktop environment ?"
		fi
	fi
+	display_alert "desktop-config" "DESKTOP_ENVIRONMENT_CONFIG_NAME exit: ${DESKTOP_ENVIRONMENT_CONFIG_NAME}" "debug"
-	if [[ $BUILD_DESKTOP == "yes" ]]; then
-		DESKTOP_ENVIRONMENT_PACKAGE_LIST_DIRPATH="${DESKTOP_ENVIRONMENT_DIRPATH}/${DESKTOP_ENVIRONMENT_CONFIG_NAME}"
-		DESKTOP_ENVIRONMENT_PACKAGE_LIST_FILEPATH="${DESKTOP_ENVIRONMENT_PACKAGE_LIST_DIRPATH}/packages"
-	fi
+	export DESKTOP_ENVIRONMENT_PACKAGE_LIST_DIRPATH="${DESKTOP_ENVIRONMENT_DIRPATH}/${DESKTOP_ENVIRONMENT_CONFIG_NAME}"
+	export DESKTOP_ENVIRONMENT_PACKAGE_LIST_FILEPATH="${DESKTOP_ENVIRONMENT_PACKAGE_LIST_DIRPATH}/packages"
+	display_alert "desktop-config" "DESKTOP_APPGROUPS_SELECTED+x entry: ${DESKTOP_APPGROUPS_SELECTED+x}" "debug"
	# "-z ${VAR+x}" allows to check for unset variable
	# Technically, someone might want to build a desktop with no additional
	# appgroups.
-	if [[ $BUILD_DESKTOP == "yes" && -z ${DESKTOP_APPGROUPS_SELECTED+x} ]]; then
+	if [[ -z ${DESKTOP_APPGROUPS_SELECTED+x} ]]; then
		options=()
		for appgroup_path in "${DESKTOP_APPGROUPS_DIR}/"*; do
			appgroup="$(basename "${appgroup_path}")"
			options+=("${appgroup}" "${appgroup^}" off)
		done
-		DESKTOP_APPGROUPS_SELECTED=$(
-			show_select_menu \
-				"Choose desktop softwares to add" \
-				"$backtitle" \
-				"Select which kind of softwares you'd like to add to your build" \
-				"${options[@]}"
-		)
+		dialog_checklist "Choose desktop softwares to add" "$backtitle" "Select which kind of softwares you'd like to add to your build" "${options[@]}"
+		DESKTOP_APPGROUPS_SELECTED="${DIALOG_CHECKLIST_RESULT}"
		unset options
	fi
+	display_alert "desktop-config" "DESKTOP_APPGROUPS_SELECTED exit: ${DESKTOP_APPGROUPS_SELECTED}" "debug"
}
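The `desktop_element_supported` refactor above reports its status through a `desktop_element_supported_result` variable and always returns 0, which keeps the function safe to call under `set -e` or ERR traps. The pattern in isolation; the function and variable names here are illustrative, not part of the build system:

```shell
#!/usr/bin/env bash
set -e # a nonzero function return would normally abort the script at the call site

# Report status via a well-known variable and always return 0, so callers
# under 'set -e' can branch on the result without tripping errexit.
check_support_level() {
	local level="${1}"
	check_support_level_result=0
	if [[ "${level}" != "supported" ]]; then
		check_support_level_result=65 # same numeric convention as desktop_element_supported
	fi
	return 0
}

check_support_level "supported"
echo "supported  -> ${check_support_level_result}"
check_support_level "community"
echo "community  -> ${check_support_level_result}"
```

With a plain `return 65`, the bare call `check_support_level "community"` would terminate the whole script under `set -e`; the result-variable convention trades POSIX-style exit codes for errexit compatibility.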


@@ -1,211 +1,214 @@
#!/usr/bin/env bash #!/usr/bin/env bash
function interactive_config_prepare_terminal() { function interactive_config_prepare_terminal() {
if [[ -z $ROOT_FS_CREATE_ONLY ]]; then if [[ -z $ROOT_FS_CREATE_ONLY ]]; then
# override stty size if [[ -t 0 ]]; then # "-t fd return True if file descriptor fd is open and refers to a terminal". 0 = stdin, 1 = stdout, 2 = stderr, 3+ custom
[[ -n $COLUMNS ]] && stty cols $COLUMNS # override stty size, if stdin is a terminal.
[[ -n $LINES ]] && stty rows $LINES [[ -n $COLUMNS ]] && stty cols $COLUMNS
TTY_X=$(($(stty size | awk '{print $2}') - 6)) # determine terminal width [[ -n $LINES ]] && stty rows $LINES
TTY_Y=$(($(stty size | awk '{print $1}') - 6)) # determine terminal height export TTY_X=$(($(stty size | awk '{print $2}') - 6)) # determine terminal width
export TTY_Y=$(($(stty size | awk '{print $1}') - 6)) # determine terminal height
fi
fi fi
# We'll use this title on all menus # We'll use this title on all menus
backtitle="Armbian building script, https://www.armbian.com | https://docs.armbian.com | (c) 2013-2021 Igor Pecovnik " export backtitle="Armbian building script, https://www.armbian.com | https://docs.armbian.com | (c) 2013-2022 Igor Pecovnik "
} }
function interactive_config_ask_kernel() { function interactive_config_ask_kernel() {
interactive_config_ask_build_only # interactive_config_ask_kernel_only
interactive_config_ask_kernel_configure interactive_config_ask_kernel_configure
} }
function interactive_config_ask_build_only() { function interactive_config_ask_kernel_only() {
if [[ -z $BUILD_ONLY ]]; then # if KERNEL_ONLY, KERNEL_CONFIGURE, BOARD, BRANCH or RELEASE are not set, display selection menu
[[ -n ${KERNEL_ONLY} ]] && return 0
options+=("$(build_only_value_for_kernel_only_build)" "Kernel and U-boot packages only") options+=("yes" "U-boot and kernel packages")
options+=("u-boot" "U-boot package only") options+=("no" "Full OS image for flashing")
options+=("default" "Full OS image for flashing") dialog_if_terminal_set_vars --title "Choose an option" --backtitle "$backtitle" --no-tags --menu "Select what to build" $TTY_Y $TTY_X $((TTY_Y - 8)) "${options[@]}"
BUILD_ONLY=$(dialog --stdout --title "Choose an option" --backtitle "$backtitle" --no-tags \ KERNEL_ONLY="${DIALOG_RESULT}"
--menu "Select what to build" $TTY_Y $TTY_X $((TTY_Y - 8)) "${options[@]}") [[ "${DIALOG_EXIT_CODE}" != "0" ]] && exit_with_error "You cancelled interactive during KERNEL_ONLY selection: '${DIALOG_EXIT_CODE}'" "Build cancelled: ${DIALOG_EXIT_CODE}"
unset options unset options
[[ -z $BUILD_ONLY ]] && exit_with_error "No option selected"
fi
} }
function interactive_config_ask_kernel_configure() { function interactive_config_ask_kernel_configure() {
if [[ -z $KERNEL_CONFIGURE ]]; then [[ -n ${KERNEL_CONFIGURE} ]] && return 0
options+=("no" "Do not change the kernel configuration")
options+=("yes" "Show a kernel configuration menu before compilation")
options+=("prebuilt" "Use precompiled packages (maintained hardware only)")
dialog_if_terminal_set_vars --title "Choose an option" --backtitle "$backtitle" --no-tags --menu "Select the kernel configuration" $TTY_Y $TTY_X $((TTY_Y - 8)) "${options[@]}"
KERNEL_CONFIGURE="${DIALOG_RESULT}"
[[ ${DIALOG_EXIT_CODE} != 0 ]] && exit_with_error "You cancelled interactive during kernel configuration" "Build cancelled"
unset options
}
# Required usage:
# declare -a arr_all_board_names=() arr_all_board_options=()                                       # arrays
# declare -A dict_all_board_types=() dict_all_board_source_files=() dict_all_board_descriptions=() # dictionaries
# get_list_of_all_buildable_boards arr_all_board_names arr_all_board_options dict_all_board_types dict_all_board_source_files dict_all_board_descriptions # invoke
function get_list_of_all_buildable_boards() {
	local -a board_types=("conf")
	[[ "${WIP_STATE}" != "supported" ]] && board_types+=("wip" "csc" "eos" "tvb")
	local -a board_file_paths=("${SRC}/config/boards" "${USERPATCHES_PATH}/config/boards")
	# local -n is a name reference, see https://www.linuxjournal.com/content/whats-new-bash-parameter-expansion
	# it works with arrays and associative arrays/dictionaries
	local -n ref_arr_all_board_names="${1}"
	[[ "${2}" != "" ]] && local -n ref_arr_all_board_options="${2}" # optional
	local -n ref_dict_all_board_types="${3}"
	local -n ref_dict_all_board_source_files="${4}"
	[[ "${5}" != "" ]] && local -n ref_dict_all_board_descriptions="${5}" # optional
	local board_file_path board_type full_board_file
	for board_file_path in "${board_file_paths[@]}"; do
		[[ ! -d "${board_file_path}" ]] && continue
		for board_type in "${board_types[@]}"; do
			for full_board_file in "${board_file_path}"/*."${board_type}"; do
				[[ "${full_board_file}" == *"*"* ]] && continue # ignore non-matches, due to bash's (non-)globbing behaviour
				local board_name board_desc
				board_name="$(basename "${full_board_file}" | cut -d'.' -f1)"
				ref_dict_all_board_types["${board_name}"]="${board_type}"
				ref_dict_all_board_source_files["${board_name}"]="${ref_dict_all_board_source_files["${board_name}"]} ${full_board_file}" # accumulate, will have extra space
				if [[ "${2}" != "" || "${5}" != "" ]]; then # only if second or fifth reference specified, otherwise too costly
					board_desc="$(head -1 "${full_board_file}" | cut -d'#' -f2)"
					ref_arr_all_board_options+=("${board_name}" "\Z1(${board_type})\Zn ${board_desc}")
					ref_dict_all_board_descriptions["${board_name}"]="${board_desc}"
				fi
			done
		done
	done
	ref_arr_all_board_names=("${!ref_dict_all_board_types[@]}") # Expand the keys of one of the dicts, that's the list of boards.
	return 0
}
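The function above leans on bash namerefs (`local -n`): the callee writes straight into arrays owned by the caller, so several collections can be "returned" at once without subshells or stdout parsing. A minimal, self-contained sketch of the same pattern (the board names here are made up for illustration):

```shell
#!/usr/bin/env bash
function collect_boards() {
	local -n ref_names="${1}" # nameref to the caller's indexed array
	local -n ref_types="${2}" # nameref to the caller's associative array
	ref_types["cubox"]="conf"
	ref_types["fakeboard"]="wip"
	ref_names=("${!ref_types[@]}") # the dict keys become the name list
}

declare -a names=()
declare -A types=()
collect_boards names types
echo "cubox is type: ${types[cubox]}" # prints: cubox is type: conf
```

Note that the caller must `declare` the arrays before the call; the nameref only aliases an existing variable, it does not create one with the right type.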
function interactive_config_ask_board_list() {
	# if BOARD is not set, display selection menu, otherwise return success
	[[ -n ${BOARD} ]] && return 0
	WIP_STATE=supported
	WIP_BUTTON='CSC/WIP/EOS/TVB'
	STATE_DESCRIPTION=' - boards with high level of software maturity'
	temp_rc=$(mktemp) # @TODO: this is a _very_ early call to mktemp - no TMPDIR set yet - it needs to be cleaned-up somehow
	while true; do
		declare -a arr_all_board_names=() arr_all_board_options=()                                       # arrays
		declare -A dict_all_board_types=() dict_all_board_source_files=() dict_all_board_descriptions=() # dictionaries
		get_list_of_all_buildable_boards arr_all_board_names arr_all_board_options dict_all_board_types dict_all_board_source_files dict_all_board_descriptions # invoke
		echo > "${temp_rc}" # zero out the rcfile to start
		if [[ $WIP_STATE != supported ]]; then # red theme if wip/csc etc. are included. I personally disagree here.
			cat <<- 'EOF' > "${temp_rc}"
				dialog_color = (RED,WHITE,OFF)
				screen_color = (WHITE,RED,ON)
				tag_color = (RED,WHITE,ON)
				item_selected_color = (WHITE,RED,ON)
				tag_selected_color = (WHITE,RED,ON)
				tag_key_selected_color = (WHITE,RED,ON)
			EOF
		fi
		DIALOGRC=$temp_rc \
			dialog_if_terminal_set_vars --title "Choose a board" --backtitle "$backtitle" --scrollbar \
			--colors --extra-label "Show $WIP_BUTTON" --extra-button \
			--menu "Select the target board. Displaying:\n$STATE_DESCRIPTION" $TTY_Y $TTY_X $((TTY_Y - 8)) "${arr_all_board_options[@]}"
		BOARD="${DIALOG_RESULT}"
		STATUS=${DIALOG_EXIT_CODE}
		if [[ $STATUS == 3 ]]; then
			if [[ $WIP_STATE == supported ]]; then
				[[ $SHOW_WARNING == yes ]] && show_developer_warning
				STATE_DESCRIPTION=' - \Z1(CSC)\Zn - Community Supported Configuration\n - \Z1(WIP)\Zn - Work In Progress
				\n - \Z1(EOS)\Zn - End Of Support\n - \Z1(TVB)\Zn - TV boxes'
				WIP_STATE=unsupported
				WIP_BUTTON='matured'
				EXPERT=yes
			else
				STATE_DESCRIPTION=' - boards with high level of software maturity'
				WIP_STATE=supported
				WIP_BUTTON='CSC/WIP/EOS'
				EXPERT=no # @TODO: this overrides an "expert" mode that could be set on by the user. revert to original one?
			fi
			continue
		elif [[ $STATUS == 0 ]]; then
			break
		else
			exit_with_error "You cancelled interactive config" "Build cancelled, board not chosen"
		fi
	done
}
function interactive_config_ask_branch() {
	# if BRANCH not set, display selection menu
	[[ -n $BRANCH ]] && return 0
	options=()
	[[ $KERNEL_TARGET == *current* ]] && options+=("current" "Recommended. Come with best support")
	[[ $KERNEL_TARGET == *legacy* ]] && options+=("legacy" "Old stable / Legacy")
	[[ $KERNEL_TARGET == *edge* && $EXPERT = yes ]] && options+=("edge" "\Z1Bleeding edge from @kernel.org\Zn")
	# do not display selection dialog if only one kernel branch is available
	if [[ "${#options[@]}" == 2 ]]; then
		BRANCH="${options[0]}"
	else
		dialog_if_terminal_set_vars --title "Choose a kernel" --backtitle "$backtitle" --colors \
			--menu "Select the target kernel branch\nExact kernel versions depend on selected board" \
			$TTY_Y $TTY_X $((TTY_Y - 8)) "${options[@]}"
		BRANCH="${DIALOG_RESULT}"
	fi
	[[ -z ${BRANCH} ]] && exit_with_error "No kernel branch selected"
	unset options
	return 0
}
function interactive_config_ask_release() {
	[[ $KERNEL_ONLY == yes ]] && return 0 # Don't ask if building packages only.
	[[ -n ${RELEASE} ]] && return 0
	options=()
	distros_options
	dialog_if_terminal_set_vars --title "Choose a release package base" --backtitle "$backtitle" --menu "Select the target OS release package base" $TTY_Y $TTY_X $((TTY_Y - 8)) "${options[@]}"
	RELEASE="${DIALOG_RESULT}"
	[[ -z ${RELEASE} ]] && exit_with_error "No release selected"
	unset options
}
function interactive_config_ask_desktop_build() {
	# don't show desktop option if we choose minimal build
	[[ $HAS_VIDEO_OUTPUT == no || $BUILD_MINIMAL == yes ]] && BUILD_DESKTOP=no
	[[ $KERNEL_ONLY == yes ]] && return 0
	[[ -n ${BUILD_DESKTOP} ]] && return 0
	# read distribution support status which is written to the armbian-release file
	set_distribution_status
	options=()
	options+=("no" "Image with console interface (server)")
	options+=("yes" "Image with desktop environment")
	dialog_if_terminal_set_vars --title "Choose image type" --backtitle "$backtitle" --no-tags \
		--menu "Select the target image type" $TTY_Y $TTY_X $((TTY_Y - 8)) "${options[@]}"
	BUILD_DESKTOP="${DIALOG_RESULT}"
	unset options
	[[ -z $BUILD_DESKTOP ]] && exit_with_error "No image type selected"
	if [[ ${BUILD_DESKTOP} == "yes" ]]; then
		BUILD_MINIMAL=no
		SELECTED_CONFIGURATION="desktop"
	fi
	return 0
}
function interactive_config_ask_standard_or_minimal() {
	[[ $KERNEL_ONLY == yes ]] && return 0
	[[ $BUILD_DESKTOP != no ]] && return 0
	[[ -n $BUILD_MINIMAL ]] && return 0
	options=()
	options+=("no" "Standard image with console interface")
	options+=("yes" "Minimal image with console interface")
	dialog_if_terminal_set_vars --title "Choose image type" --backtitle "$backtitle" --no-tags \
		--menu "Select the target image type" $TTY_Y $TTY_X $((TTY_Y - 8)) "${options[@]}"
	BUILD_MINIMAL="${DIALOG_RESULT}"
	unset options
	[[ -z $BUILD_MINIMAL ]] && exit_with_error "No standard/minimal selected"
	if [[ $BUILD_MINIMAL == "yes" ]]; then
		SELECTED_CONFIGURATION="cli_minimal"
	else
		SELECTED_CONFIGURATION="cli_standard"
	fi
}
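All of the functions above share one calling convention: `dialog_if_terminal_set_vars` runs `dialog`, captures its stdout into `DIALOG_RESULT` and its exit code into `DIALOG_EXIT_CODE`, instead of letting a cancel abort the script. A hedged sketch of that contract (not the real implementation; a fake `dialog` stands in so it runs without a terminal):

```shell
#!/usr/bin/env bash
function dialog() { echo "legacy"; return 0; } # stand-in for the real dialog binary

function dialog_if_terminal_set_vars() {
	declare -g DIALOG_RESULT="" DIALOG_EXIT_CODE=0
	DIALOG_RESULT="$(dialog --stdout "$@")" || DIALOG_EXIT_CODE=$?
}

dialog_if_terminal_set_vars --menu "Select the target"
echo "result=${DIALOG_RESULT} code=${DIALOG_EXIT_CODE}" # prints: result=legacy code=0
```

The real `dialog` exits 0 on OK, 1 on Cancel, 3 on the extra button, and 255 on ESC, which is why the callers branch on `DIALOG_EXIT_CODE` rather than on emptiness of the result alone.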


@@ -10,6 +10,7 @@
 # https://github.com/armbian/build/
 function do_main_configuration() {
+	display_alert "Starting main configuration" "${MOUNT_UUID}" "info"
 	# common options
 	# daily beta build contains date in subrevision
@@ -23,7 +24,7 @@ function do_main_configuration() {
 	[[ -z $ROOTPWD ]] && ROOTPWD="1234" # Must be changed @first login
 	[[ -z $MAINTAINER ]] && MAINTAINER="Igor Pecovnik" # deb signature
 	[[ -z $MAINTAINERMAIL ]] && MAINTAINERMAIL="igor.pecovnik@****l.com" # deb signature
-	[[ -z $DEB_COMPRESS ]] && DEB_COMPRESS="xz" # compress .debs with XZ by default. Use 'none' for faster/larger builds
+	export SKIP_EXTERNAL_TOOLCHAINS="${SKIP_EXTERNAL_TOOLCHAINS:-yes}" # don't use any external toolchains, by default.
 	TZDATA=$(cat /etc/timezone) # Timezone for target is taken from host or defined here.
 	USEALLCORES=yes # Use all CPU cores for compiling
 	HOSTRELEASE=$(cat /etc/os-release | grep VERSION_CODENAME | cut -d"=" -f2)
@@ -33,10 +34,17 @@ function do_main_configuration() {
 	cd "${SRC}" || exit
 	[[ -z "${CHROOT_CACHE_VERSION}" ]] && CHROOT_CACHE_VERSION=7
-	BUILD_REPOSITORY_URL=$(improved_git remote get-url $(improved_git remote 2> /dev/null | grep origin) 2> /dev/null)
-	BUILD_REPOSITORY_COMMIT=$(improved_git describe --match=d_e_a_d_b_e_e_f --always --dirty 2> /dev/null)
+	BUILD_REPOSITORY_URL=$(git remote get-url "$(git remote | grep origin)")
+	BUILD_REPOSITORY_COMMIT=$(git describe --match=d_e_a_d_b_e_e_f --always --dirty)
 	ROOTFS_CACHE_MAX=200 # max number of rootfs cache, older ones will be cleaned up
+	# .deb compression. xz is standard, but slow, so it is avoided by default when not running in CI. One day, zstd.
+	if [[ -z ${DEB_COMPRESS} ]]; then
+		DEB_COMPRESS="none" # none is very fast but produces big artifacts.
+		[[ "${CI}" == "true" ]] && DEB_COMPRESS="xz" # xz is small and slow
+	fi
+	display_alert ".deb compression" "DEB_COMPRESS=${DEB_COMPRESS}" "debug"
 	if [[ $BETA == yes ]]; then
 		DEB_STORAGE=$DEST/debs-beta
 		REPO_STORAGE=$DEST/repository-beta
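The `DEB_COMPRESS` value ultimately maps to `dpkg-deb`'s `-Z` option when packages are built. A small sketch of the two settings compared here (package name and paths are made up; the demo skips itself where `dpkg-deb` is unavailable):

```shell
#!/usr/bin/env bash
set -e
if ! command -v dpkg-deb > /dev/null; then
	echo "dpkg-deb not available, skipping demo"
else
	d="$(mktemp -d)"
	mkdir -p "${d}/pkg/DEBIAN"
	printf 'Package: demo\nVersion: 1.0\nArchitecture: all\nMaintainer: nobody <n@example.invalid>\nDescription: demo package\n' > "${d}/pkg/DEBIAN/control"
	dpkg-deb -Znone -b "${d}/pkg" "${d}/demo-none.deb" > /dev/null # DEB_COMPRESS=none: fast, large
	dpkg-deb -Zxz -b "${d}/pkg" "${d}/demo-xz.deb" > /dev/null     # DEB_COMPRESS=xz: slow, small
	echo "built both variants"
fi
```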
@@ -48,12 +56,10 @@ function do_main_configuration() {
 	fi
 	# image artefact destination with or without subfolder
-	FINALDEST=$DEST/images
+	FINALDEST="${DEST}/images"
 	if [[ -n "${MAKE_FOLDERS}" ]]; then
-		FINALDEST=$DEST/images/"${BOARD}"/"${MAKE_FOLDERS}"
-		install -d ${FINALDEST}
+		FINALDEST="${DEST}"/images/"${BOARD}"/"${MAKE_FOLDERS}"
+		install -d "${FINALDEST}"
 	fi
 	# TODO: fixed name can't be used for parallel image building
@@ -95,6 +101,11 @@ function do_main_configuration() {
 	# used by multiple sources - reduce code duplication
 	[[ $USE_MAINLINE_GOOGLE_MIRROR == yes ]] && MAINLINE_MIRROR=google
+	# URL for the git bundle used to "bootstrap" local git copies without too much server load. This applies independently of git mirror below.
+	export MAINLINE_KERNEL_TORVALDS_BUNDLE_URL="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/clone.bundle" # this is plain torvalds, single branch
+	export MAINLINE_KERNEL_STABLE_BUNDLE_URL="https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/clone.bundle" # this is all stable branches. with tags!
+	export MAINLINE_KERNEL_COLD_BUNDLE_URL="${MAINLINE_KERNEL_COLD_BUNDLE_URL:-${MAINLINE_KERNEL_TORVALDS_BUNDLE_URL}}" # default to Torvalds; everything else is small enough with this
 	case $MAINLINE_MIRROR in
 		google)
 			MAINLINE_KERNEL_SOURCE='https://kernel.googlesource.com/pub/scm/linux/kernel/git/stable/linux-stable'
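How a clone-bundle bootstrap works in general (a sketch, not Armbian's exact code): the bundle file is fetched once over plain HTTP, cloned from locally, and the real remote is pointed at the server afterwards, so a later fetch only transfers what the bundle lacks. A tiny local repository stands in for the real kernel bundle here:

```shell
#!/usr/bin/env bash
set -e
workdir="$(mktemp -d)"
git init -q "${workdir}/seed"
git -C "${workdir}/seed" -c user.email=ci@example.invalid -c user.name=ci \
	commit -q --allow-empty -m "seed commit"
git -C "${workdir}/seed" bundle create "${workdir}/kernel.bundle" HEAD --all
git clone -q "${workdir}/kernel.bundle" "${workdir}/work" # no network involved
git -C "${workdir}/work" remote set-url origin \
	https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
# a later `git fetch origin` would now only transfer what the bundle lacks
latest="$(git -C "${workdir}/work" log --oneline -1)"
echo "${latest}"
```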
@@ -168,17 +179,36 @@ function do_main_configuration() {
 	# single ext4 partition is the default and preferred configuration
 	#BOOTFS_TYPE=''
-	[[ ! -f ${SRC}/config/sources/families/$LINUXFAMILY.conf ]] &&
-		exit_with_error "Sources configuration not found" "$LINUXFAMILY"
-	source "${SRC}/config/sources/families/${LINUXFAMILY}.conf"
+	## ------ Sourcing family config ---------------------------
+	declare -a family_source_paths=("${SRC}/config/sources/families/${LINUXFAMILY}.conf" "${USERPATCHES_PATH}/config/sources/families/${LINUXFAMILY}.conf")
+	declare -i family_sourced_ok=0
+	for family_source_path in "${family_source_paths[@]}"; do
+		[[ ! -f "${family_source_path}" ]] && continue
+		display_alert "Sourcing family configuration" "${family_source_path}" "info"
+		# shellcheck source=/dev/null
+		source "${family_source_path}"
+		# @TODO: reset error handling, go figure what they do in there.
+		family_sourced_ok=$((family_sourced_ok + 1))
+	done
+	[[ ${family_sourced_ok} -lt 1 ]] &&
+		exit_with_error "Sources configuration not found" "tried ${family_source_paths[*]}"
+	# This is for compatibility only; path above should suffice
 	if [[ -f $USERPATCHES_PATH/sources/families/$LINUXFAMILY.conf ]]; then
 		display_alert "Adding user provided $LINUXFAMILY overrides"
+		# shellcheck source=/dev/null
 		source "$USERPATCHES_PATH/sources/families/${LINUXFAMILY}.conf"
 	fi
 	# load architecture defaults
+	display_alert "Sourcing arch configuration" "${ARCH}.conf" "info"
+	# shellcheck source=/dev/null
 	source "${SRC}/config/sources/${ARCH}.conf"
 	if [[ "$HAS_VIDEO_OUTPUT" == "no" ]]; then
@@ -193,23 +223,25 @@ function do_main_configuration() {
 	## like the 'post_family_config' that is invoked below.
 	initialize_extension_manager
-	call_extension_method "post_family_config" "config_tweaks_post_family_config" << 'POST_FAMILY_CONFIG'
+	call_extension_method "post_family_config" "config_tweaks_post_family_config" <<- 'POST_FAMILY_CONFIG'
 		*give the config a chance to override the family/arch defaults*
 		This hook is called after the family configuration (`sources/families/xxx.conf`) is sourced.
 		Since the family can override values from the user configuration and the board configuration,
 		it is often used to in turn override those.
 	POST_FAMILY_CONFIG
+	# A global killswitch for extlinux.
+	if [[ "${SRC_EXTLINUX}" == "yes" ]]; then
+		if [[ "${ALLOW_EXTLINUX}" != "yes" ]]; then
+			display_alert "Disabling extlinux support" "extlinux global killswitch; set ALLOW_EXTLINUX=yes to avoid" "info"
+			export SRC_EXTLINUX=no
+		else
+			display_alert "Both SRC_EXTLINUX=yes and ALLOW_EXTLINUX=yes" "enabling extlinux, expect breakage" "warn"
+		fi
+	fi
 	interactive_desktop_main_configuration
-	#exit_with_error 'Testing'
-	# set unique mounting directory
-	MOUNT_UUID=$(uuidgen)
-	SDCARD="${SRC}/.tmp/rootfs-${MOUNT_UUID}"
-	MOUNT="${SRC}/.tmp/mount-${MOUNT_UUID}"
-	DESTIMG="${SRC}/.tmp/image-${MOUNT_UUID}"
 	[[ -n $ATFSOURCE && -z $ATF_USE_GCC ]] && exit_with_error "Error in configuration: ATF_USE_GCC is unset"
 	[[ -z $UBOOT_USE_GCC ]] && exit_with_error "Error in configuration: UBOOT_USE_GCC is unset"
 	[[ -z $KERNEL_USE_GCC ]] && exit_with_error "Error in configuration: KERNEL_USE_GCC is unset"
@@ -230,52 +262,48 @@ POST_FAMILY_CONFIG
 	CLI_CONFIG_PATH="${SRC}/config/cli/${RELEASE}"
 	DEBOOTSTRAP_CONFIG_PATH="${CLI_CONFIG_PATH}/debootstrap"
-	if [[ $? != 0 ]]; then
-		exit_with_error "The desktop environment ${DESKTOP_ENVIRONMENT} is not available for your architecture ${ARCH}"
-	fi
 	AGGREGATION_SEARCH_ROOT_ABSOLUTE_DIRS="
 	${SRC}/config
 	${SRC}/config/optional/_any_board/_config
 	${SRC}/config/optional/architectures/${ARCH}/_config
 	${SRC}/config/optional/families/${LINUXFAMILY}/_config
 	${SRC}/config/optional/boards/${BOARD}/_config
 	${USERPATCHES_PATH}
 	"
 	DEBOOTSTRAP_SEARCH_RELATIVE_DIRS="
 	cli/_all_distributions/debootstrap
 	cli/${RELEASE}/debootstrap
 	"
 	CLI_SEARCH_RELATIVE_DIRS="
 	cli/_all_distributions/main
 	cli/${RELEASE}/main
 	"
 	PACKAGES_SEARCH_ROOT_ABSOLUTE_DIRS="
 	${SRC}/packages
 	${SRC}/config/optional/_any_board/_packages
 	${SRC}/config/optional/architectures/${ARCH}/_packages
 	${SRC}/config/optional/families/${LINUXFAMILY}/_packages
 	${SRC}/config/optional/boards/${BOARD}/_packages
 	"
 	DESKTOP_ENVIRONMENTS_SEARCH_RELATIVE_DIRS="
 	desktop/_all_distributions/environments/_all_environments
 	desktop/_all_distributions/environments/${DESKTOP_ENVIRONMENT}
 	desktop/_all_distributions/environments/${DESKTOP_ENVIRONMENT}/${DESKTOP_ENVIRONMENT_CONFIG_NAME}
 	desktop/${RELEASE}/environments/_all_environments
 	desktop/${RELEASE}/environments/${DESKTOP_ENVIRONMENT}
 	desktop/${RELEASE}/environments/${DESKTOP_ENVIRONMENT}/${DESKTOP_ENVIRONMENT_CONFIG_NAME}
 	"
 	DESKTOP_APPGROUPS_SEARCH_RELATIVE_DIRS="
 	desktop/_all_distributions/appgroups
 	desktop/_all_distributions/environments/${DESKTOP_ENVIRONMENT}/appgroups
 	desktop/${RELEASE}/appgroups
 	desktop/${RELEASE}/environments/${DESKTOP_ENVIRONMENT}/appgroups
 	"
 	DEBOOTSTRAP_LIST="$(one_line aggregate_all_debootstrap "packages" " ")"
 	DEBOOTSTRAP_COMPONENTS="$(one_line aggregate_all_debootstrap "components" " ")"
@@ -283,19 +311,9 @@ POST_FAMILY_CONFIG
 	PACKAGE_LIST="$(one_line aggregate_all_cli "packages" " ")"
 	PACKAGE_LIST_ADDITIONAL="$(one_line aggregate_all_cli "packages.additional" " ")"
-	LOG_OUTPUT_FILE="$SRC/output/${LOG_SUBPATH}/debootstrap-list.log"
-	show_checklist_variables "DEBOOTSTRAP_LIST DEBOOTSTRAP_COMPONENTS PACKAGE_LIST PACKAGE_LIST_ADDITIONAL PACKAGE_LIST_UNINSTALL"
-	# Dependent desktop packages
-	# Myy : Sources packages from file here
-	# Myy : FIXME Rename aggregate_all to aggregate_all_desktop
 	if [[ $BUILD_DESKTOP == "yes" ]]; then
 		PACKAGE_LIST_DESKTOP+="$(one_line aggregate_all_desktop "packages" " ")"
-		echo -e "\nGroups selected ${DESKTOP_APPGROUPS_SELECTED} -> PACKAGES :" >> "${LOG_OUTPUT_FILE}"
-		show_checklist_variables PACKAGE_LIST_DESKTOP
 	fi
-	unset LOG_OUTPUT_FILE
 	DEBIAN_MIRROR='deb.debian.org/debian'
 	DEBIAN_SECURTY='security.debian.org/'
@@ -319,6 +337,21 @@ POST_FAMILY_CONFIG
 		UBUNTU_MIRROR='mirrors.bfsu.edu.cn/ubuntu-ports/'
 	fi
+	if [[ "${ARCH}" == "amd64" ]]; then
+		UBUNTU_MIRROR='archive.ubuntu.com/ubuntu' # ports are only for non-amd64, of course.
+		if [[ -n ${CUSTOM_UBUNTU_MIRROR} ]]; then # ubuntu redirector doesn't work well on amd64
+			UBUNTU_MIRROR="${CUSTOM_UBUNTU_MIRROR}"
+		fi
+	fi
+	if [[ "${ARCH}" == "arm64" ]]; then
+		if [[ -n ${CUSTOM_UBUNTU_MIRROR_ARM64} ]]; then
+			display_alert "Using custom ports/arm64 mirror" "${CUSTOM_UBUNTU_MIRROR_ARM64}" "info"
+			UBUNTU_MIRROR="${CUSTOM_UBUNTU_MIRROR_ARM64}"
+		fi
+	fi
+	# Control aria2c's usage of ipv6.
 	[[ -z $DISABLE_IPV6 ]] && DISABLE_IPV6="true"
 	# For (late) user override.
@@ -329,29 +362,29 @@ POST_FAMILY_CONFIG
 		source "$USERPATCHES_PATH"/lib.config
 	fi
-	call_extension_method "user_config" << 'USER_CONFIG'
+	call_extension_method "user_config" <<- 'USER_CONFIG'
 		*Invoke function with user override*
 		Allows for overriding configuration values set anywhere else.
 		It is called after sourcing the `lib.config` file if it exists,
 		but before assembling any package lists.
 	USER_CONFIG
-	call_extension_method "extension_prepare_config" << 'EXTENSION_PREPARE_CONFIG'
-		*allow extensions to prepare their own config, after user config is done*
-		Implementors should preserve variable values pre-set, but can default values and/or validate them.
-		This runs *after* user_config. Don't change anything not coming from other variables or meant to be configured by the user.
-	EXTENSION_PREPARE_CONFIG
+	display_alert "Extensions: prepare configuration" "extension_prepare_config" "debug"
+	call_extension_method "extension_prepare_config" <<- 'EXTENSION_PREPARE_CONFIG'
+		*allow extensions to prepare their own config, after user config is done*
+		Implementors should preserve variable values pre-set, but can default values and/or validate them.
+		This runs *after* user_config. Don't change anything not coming from other variables or meant to be configured by the user.
+	EXTENSION_PREPARE_CONFIG
 	# apt-cacher-ng mirror configuration
+	APT_MIRROR=$DEBIAN_MIRROR
 	if [[ $DISTRIBUTION == Ubuntu ]]; then
 		APT_MIRROR=$UBUNTU_MIRROR
-	else
-		APT_MIRROR=$DEBIAN_MIRROR
 	fi
 	[[ -n $APT_PROXY_ADDR ]] && display_alert "Using custom apt-cacher-ng address" "$APT_PROXY_ADDR" "info"
-	# Build final package list after possible override
+	display_alert "Build final package list" "after possible override" "debug"
 	PACKAGE_LIST="$PACKAGE_LIST $PACKAGE_LIST_RELEASE $PACKAGE_LIST_ADDITIONAL"
 	PACKAGE_MAIN_LIST="$(cleanup_list PACKAGE_LIST)"
@@ -371,8 +404,8 @@ EXTENSION_PREPARE_CONFIG
 	PACKAGE_LIST_UNINSTALL="$(cleanup_list aggregated_content)"
 	unset aggregated_content
+	# @TODO: rpardini: this has to stop. refactor this into array or dict-based and stop the madness.
 	if [[ -n $PACKAGE_LIST_RM ]]; then
-		display_alert "Package remove list ${PACKAGE_LIST_RM}"
 		# Turns out that \b can be tricked by dashes.
 		# So if you remove mesa-utils but still want to install "mesa-utils-extra"
 		# a "\b(mesa-utils)\b" filter will convert "mesa-utils-extra" to "-extra".
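The word-boundary pitfall those comments describe is easy to demonstrate: `\b` also matches between a word character and a dash, so a `\bmesa-utils\b` filter clips `mesa-utils-extra` down to `-extra`. A GNU sed sketch of the problem and one space-anchored workaround:

```shell
#!/usr/bin/env bash
list="mesa-utils mesa-utils-extra xterm"
clipped="$(echo "$list" | sed -E 's/\bmesa-utils\b//g')"
echo "clipped: [$clipped]" # "mesa-utils-extra" lost its prefix too
safe="$(echo " $list " | sed -E 's/ mesa-utils / /g')"
echo "safe: [$safe]" # the space-anchored pattern leaves "mesa-utils-extra" alone
```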
@@ -396,43 +429,50 @@ EXTENSION_PREPARE_CONFIG
PACKAGE_MAIN_LIST="$(echo ${PACKAGE_MAIN_LIST})"
fi
LOG_OUTPUT_FILE="$SRC/output/${LOG_SUBPATH}/debootstrap-list.log"
echo -e "\nVariables after manual configuration" >> $LOG_OUTPUT_FILE
show_checklist_variables "DEBOOTSTRAP_COMPONENTS DEBOOTSTRAP_LIST PACKAGE_LIST PACKAGE_MAIN_LIST"
unset LOG_OUTPUT_FILE
# Give the option to configure DNS server used in the chroot during the build process
[[ -z $NAMESERVER ]] && NAMESERVER="1.0.0.1" # default is cloudflare alternate
call_extension_method "post_aggregate_packages" "user_config_post_aggregate_packages" <<- 'POST_AGGREGATE_PACKAGES'
*For final user override, using a function, after all aggregations are done*
Called after aggregating all package lists, before the end of `compilation.sh`.
Packages will still be installed after this is called, so it is the last chance
to confirm or change any packages.
POST_AGGREGATE_PACKAGES
display_alert "Done with main-config.sh" "do_main_configuration" "debug"
}
# This is called by main_default_build_single() but declared here for 'convenience'
function write_config_summary_output_file() {
local debug_dpkg_arch debug_uname debug_virt debug_src_mount debug_src_perms debug_src_temp_perms
debug_dpkg_arch="$(dpkg --print-architecture)"
debug_uname="$(uname -a)"
debug_virt="$(systemd-detect-virt || true)"
debug_src_mount="$(findmnt --output TARGET,SOURCE,FSTYPE,AVAIL --target "${SRC}" --uniq)"
debug_src_perms="$(getfacl -p "${SRC}")"
debug_src_temp_perms="$(getfacl -p "${SRC}"/.tmp 2> /dev/null)"
display_alert "Writing build config summary to" "debug log" "debug"
LOG_ASSET="build.summary.txt" do_with_log_asset run_host_command_logged cat <<- EOF
## BUILD SCRIPT ENVIRONMENT
Repository: $REPOSITORY_URL
Version: $REPOSITORY_COMMIT
Host OS: $HOSTRELEASE
Host arch: ${debug_dpkg_arch}
Host system: ${debug_uname}
Virtualization type: ${debug_virt}
## Build script directories
Build directory is located on:
${debug_src_mount}
Build directory permissions:
${debug_src_perms}
Temp directory permissions:
${debug_src_temp_perms}
## BUILD CONFIGURATION
@@ -460,5 +500,4 @@ POST_AGGREGATE_PACKAGES
CPU configuration: $CPUMIN - $CPUMAX with $GOVERNOR
EOF
}
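The summary above is emitted through a heredoc handed to a logging helper. A minimal sketch of the same pattern, using plain file redirection since `do_with_log_asset` and `run_host_command_logged` are armbian-internal helpers (`summary_file` and `HOSTRELEASE_DEMO` are illustrative names):

```shell
# Write a small config summary via a heredoc; variables are expanded at write time,
# just like in write_config_summary_output_file above.
summary_file="$(mktemp)"
HOSTRELEASE_DEMO="jammy" # stand-in for $HOSTRELEASE
cat << EOF > "$summary_file"
## BUILD SCRIPT ENVIRONMENT
Host OS: ${HOSTRELEASE_DEMO}
EOF
cat "$summary_file"
```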


@@ -1,57 +1,68 @@
#!/usr/bin/env bash
# Stuff involving dialog
# rpardini: dialog reports what happened via nonzero exit codes.
# we also want to capture the stdout of dialog.
# this is a helper function that handles the error logging on/off and does the capturing
# then reports via exported variables, which the caller can/should test for later.
# warning: this will exit with error if stdin/stdout/stderr is not a terminal or running under CI, or if dialog not installed
# otherwise it will NOT exit with error, even if user cancelled.
# This is a boring topic, see https://askubuntu.com/questions/491509/how-to-get-dialog-box-input-directed-to-a-variable
function dialog_if_terminal_set_vars() {
export DIALOG_RESULT=""
export DIALOG_EXIT_CODE=0
[[ ! -t 0 ]] && exit_with_error "stdin is not a terminal. can't use dialog." "dialog_if_terminal_set_vars ${*}" "err"
[[ ! -t 1 ]] && exit_with_error "stdout is not a terminal. can't use dialog." "dialog_if_terminal_set_vars ${*}" "err"
[[ ! -t 2 ]] && exit_with_error "stderr is not a terminal. can't use dialog." "dialog_if_terminal_set_vars ${*}" "err"
[[ "${CI}" == "true" ]] && exit_with_error "CI=true. can't use dialog." "dialog_if_terminal_set_vars ${*}" "err"
[[ ! -f /usr/bin/dialog ]] && exit_with_error "Dialog is not installed at /usr/bin/dialog" "dialog_if_terminal_set_vars ${*}" "err"
set +e # allow errors through
set +o errtrace # do not trap errors inside a subshell/function
set +o errexit # disable
exec 3>&1 # open fd 3...
DIALOG_RESULT=$(dialog "$@" 2>&1 1>&3) # juggle fds and capture.
DIALOG_EXIT_CODE=$? # get the exit code.
exec 3>&- # close fd 3...
set -e # back to normal
set -o errtrace # back to normal
set -o errexit # back to normal
return 0 # always success, caller must check DIALOG_EXIT_CODE and DIALOG_RESULT
}
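The fd juggling above can be shown standalone: capture a command's stderr (where dialog writes its result) into a variable while its stdout still reaches the terminal. This is a minimal sketch assuming only bash itself; `capture_stderr_keep_stdout` and `demo` are hypothetical names, not part of the build system, and the real helper additionally toggles errexit around the call:

```shell
# Mirror of the fd dance in dialog_if_terminal_set_vars.
capture_stderr_keep_stdout() {
	CAPTURED=""
	CAPTURE_EXIT_CODE=0
	exec 3>&1                                          # fd 3 now points at our stdout
	CAPTURED=$("$@" 2>&1 1>&3) || CAPTURE_EXIT_CODE=$? # stderr -> variable, stdout -> fd 3
	exec 3>&-                                          # close fd 3 again
}
demo() {
	echo "ui-goes-to-stdout"
	echo "result-goes-to-stderr" >&2
	return 7
}
capture_stderr_keep_stdout demo
echo "captured='${CAPTURED}' code=${CAPTURE_EXIT_CODE}"
```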
# Myy : Menu configuration for choosing desktop configurations
dialog_menu() {
export DIALOG_MENU_RESULT=""
provided_title=$1
provided_backtitle=$2
provided_menuname=$3
dialog_if_terminal_set_vars --title "$provided_title" --backtitle "${provided_backtitle}" --menu "$provided_menuname" $TTY_Y $TTY_X $((TTY_Y - 8)) "${@:4}"
DIALOG_MENU_RESULT="${DIALOG_RESULT}"
return $DIALOG_EXIT_CODE
}
# Almost identical, but is a checklist instead of menu
dialog_checklist() {
export DIALOG_CHECKLIST_RESULT=""
provided_title=$1
provided_backtitle=$2
provided_menuname=$3
dialog_if_terminal_set_vars --title "${provided_title}" --backtitle "${provided_backtitle}" --checklist "${provided_menuname}" $TTY_Y $TTY_X $((TTY_Y - 8)) "${@:4}"
DIALOG_CHECKLIST_RESULT="${DIALOG_RESULT}"
return $DIALOG_EXIT_CODE
}
# Other menu stuff
show_developer_warning() {
local temp_rc
temp_rc=$(mktemp) # @TODO: this is a _very_ early call to mktemp - no TMPDIR set yet - it needs to be cleaned-up somehow
cat <<- 'EOF' > "${temp_rc}"
screen_color = (WHITE,RED,ON)
EOF
@@ -67,8 +78,32 @@ show_developer_warning() {
- Forum posts related to dev kernel, CSC, WIP and EOS boards
should be created in the \Z2\"Community forums\"\Zn section
"
DIALOGRC=$temp_rc dialog_if_terminal_set_vars --title "Expert mode warning" --backtitle "${backtitle}" --colors --defaultno --no-label "I do not agree" --yes-label "I understand and agree" --yesno "$warn_text" "${TTY_Y}" "${TTY_X}"
[[ ${DIALOG_EXIT_CODE} -ne 0 ]] && exit_with_error "Error switching to the expert mode"
SHOW_WARNING=no
}
# Stuff that was in config files
function distro_menu() {
# create a select menu for choosing a distribution based EXPERT status
local distrib_dir="${1}"
if [[ -d "${distrib_dir}" && -f "${distrib_dir}/support" ]]; then
local support_level="$(cat "${distrib_dir}/support")"
if [[ "${support_level}" != "supported" && $EXPERT != "yes" ]]; then
:
else
local distro_codename="$(basename "${distrib_dir}")"
local distro_fullname="$(cat "${distrib_dir}/name")"
local expert_infos=""
[[ $EXPERT == "yes" ]] && expert_infos="(${support_level})"
options+=("${distro_codename}" "${distro_fullname} ${expert_infos}")
fi
fi
}
function distros_options() {
for distrib_dir in "config/distributions/"*; do
distro_menu "${distrib_dir}"
done
}
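What `distros_options` produces can be sketched in isolation: for each distribution directory carrying `support` and `name` files, a (codename, label) pair is appended to the caller's `options` array. A temporary directory stands in for `config/distributions/*` here; the contents are illustrative:

```shell
# Build dialog menu pairs the way distro_menu/distros_options do, against a
# throwaway directory layout instead of the real config/distributions tree.
tmp="$(mktemp -d)"
mkdir -p "$tmp/bookworm"
echo "supported" > "$tmp/bookworm/support"
echo "Debian 12 (Bookworm)" > "$tmp/bookworm/name"
options=()
for distrib_dir in "$tmp"/*; do
	support_level="$(cat "$distrib_dir/support")"
	# non-supported distros only show up in EXPERT mode, as above
	if [[ "$support_level" == "supported" || "${EXPERT:-}" == "yes" ]]; then
		options+=("$(basename "$distrib_dir")" "$(cat "$distrib_dir/name")")
	fi
done
echo "${options[@]}"
```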


@@ -1,4 +1,5 @@
#!/usr/bin/env bash
# create_chroot <target_dir> <release> <arch>
#
create_chroot() {
@@ -107,7 +108,7 @@ create_chroot() {
touch "${target_dir}"/root/.debootstrap-complete
display_alert "Debootstrap complete" "${release}/${arch}" "info"
} #############################################################################
# chroot_prepare_distccd <release> <arch>
#
@@ -319,7 +320,7 @@ chroot_build_packages() {
display_alert "$p"
done
fi
} #############################################################################
# create build script
create_build_script() {
@@ -421,7 +422,7 @@ chroot_installpackages_local() {
EOF
chroot_installpackages
kill "${aptly_pid}"
} #############################################################################
# chroot_installpackages <remote_only>
#
@@ -464,4 +465,4 @@ chroot_installpackages() {
EOF
chmod +x "${SDCARD}"/tmp/install.sh
chroot "${SDCARD}" /bin/bash -c "/tmp/install.sh" >> "${DEST}"/${LOG_SUBPATH}/install.log 2>&1
} #############################################################################


@@ -1,139 +0,0 @@
#!/usr/bin/env bash
# Installing debian packages or package files in the armbian build system.
# The function accepts four optional parameters:
# autoupdate - If the installation list is not empty then update first.
# upgrade, clean - the same name for apt
# verbose - detailed log for the function
#
# list="pkg1 pkg2 pkg3 pkgbadname pkg-1.0 | pkg-2.0 pkg5 (>= 9)"
# or list="pkg1 pkg2 /path-to/output/debs/file-name.deb"
# install_pkg_deb upgrade verbose $list
# or
# install_pkg_deb autoupdate $list
#
# If the package has a bad name, we will see it in the log file.
# If there is an LOG_OUTPUT_FILE variable and it has a value as
# the full real path to the log file, then all the information will be there.
#
# The LOG_OUTPUT_FILE variable must be defined in the calling function
# before calling the install_pkg_deb function and unset after.
#
install_pkg_deb() {
local list=""
local listdeb=""
local log_file
local add_for_install
local for_install
local need_autoup=false
local need_upgrade=false
local need_clean=false
local need_verbose=false
local _line=${BASH_LINENO[0]}
local _function=${FUNCNAME[1]}
local _file=$(basename "${BASH_SOURCE[1]}")
local tmp_file=$(mktemp /tmp/install_log_XXXXX)
export DEBIAN_FRONTEND=noninteractive
if [ -d $(dirname $LOG_OUTPUT_FILE) ]; then
log_file=${LOG_OUTPUT_FILE}
else
log_file="${SRC}/output/${LOG_SUBPATH}/install.log"
fi
for p in $*; do
case $p in
autoupdate)
need_autoup=true
continue
;;
upgrade)
need_upgrade=true
continue
;;
clean)
need_clean=true
continue
;;
verbose)
need_verbose=true
continue
;;
\| | \(* | *\)) continue ;;
*[.]deb)
listdeb+=" $p"
continue
;;
*) list+=" $p" ;;
esac
done
# This is necessary first when there is no apt cache.
if $need_upgrade; then
apt-get -q update || echo "apt cannot update" >> $tmp_file
apt-get -y upgrade || echo "apt cannot upgrade" >> $tmp_file
fi
# Install debian package files
if [ -n "$listdeb" ]; then
for f in $listdeb; do
# Calculate dependencies for installing the package file
add_for_install=" $(
dpkg-deb -f $f Depends | awk '{gsub(/[,]/, "", $0); print $0}'
)"
echo -e "\nfile $f depends on:\n$add_for_install" >> $log_file
install_pkg_deb $add_for_install
dpkg -i $f 2>> $log_file
dpkg-query -W \
-f '${binary:Package;-27} ${Version;-23}\n' \
$(dpkg-deb -f $f Package) >> $log_file
done
fi
# If the package is not installed, check the latest
# up-to-date version in the apt cache.
# Exclude bad package names and send a message to the log.
for_install=$(
for p in $list; do
if $(dpkg-query -W -f '${db:Status-Abbrev}' $p |& awk '/ii/{exit 1}'); then
apt-cache show $p -o APT::Cache::AllVersions=no |&
awk -v p=$p -v tmp_file=$tmp_file \
'/^Package:/{print $2} /^E:/{print "Bad package name: ",p >>tmp_file}'
fi
done
)
# This information should be logged.
if [ -s $tmp_file ]; then
echo -e "\nInstalling packages in function: $_function" "[$_file:$_line]" \
>> $log_file
echo -e "\nIncoming list:" >> $log_file
printf "%-30s %-30s %-30s %-30s\n" $list >> $log_file
echo "" >> $log_file
cat $tmp_file >> $log_file
fi
if [ -n "$for_install" ]; then
if $need_autoup; then
apt-get -q update
apt-get -y upgrade
fi
apt-get install -qq -y --no-install-recommends $for_install
echo -e "\nPackages installed:" >> $log_file
dpkg-query -W \
-f '${binary:Package;-27} ${Version;-23}\n' \
$for_install >> $log_file
fi
# We will show the status after installation all listed
if $need_verbose; then
echo -e "\nstatus after installation:" >> $log_file
dpkg-query -W \
-f '${binary:Package;-27} ${Version;-23} [ ${Status} ]\n' \
$list >> $log_file
fi
if $need_clean; then apt-get clean; fi
rm $tmp_file
}


@@ -19,15 +19,47 @@ mount_chroot() {
# helper to reduce code duplication
#
umount_chroot() {
local target=$1
display_alert "Unmounting" "$target" "info"
while grep -Eq "${target}.*(dev|proc|sys|tmp)" /proc/mounts; do
umount --recursive "${target}"/dev > /dev/null 2>&1 || true
umount "${target}"/proc > /dev/null 2>&1 || true
umount "${target}"/sys > /dev/null 2>&1 || true
umount "${target}"/tmp > /dev/null 2>&1 || true
sync
done
}
# demented recursive version, for final umount.
function umount_chroot_recursive() {
local target="${1}/"
if [[ ! -d "${target}" ]]; then # only even try if target is a directory
return 0 # success, nothing to do.
elif [[ "${target}" == "/" ]]; then # make sure we're not trying to umount root itself.
return 0
fi
display_alert "Unmounting recursively" "${target}" ""
sync # sync. coalesce I/O. wait for writes to flush to disk. it might take a second.
# First, try to umount some well-known dirs, in a certain order, for speed.
local -a well_known_list=("dev/pts" "dev" "proc" "sys" "boot/efi" "boot/firmware" "boot" "tmp" ".")
for well_known in "${well_known_list[@]}"; do
umount --recursive "${target}${well_known}" &> /dev/null || true # ignore errors
done
# now try in a loop to unmount all that's still mounted under the target
local -i tries=1 # the first try above
mapfile -t current_mount_list < <(cut -d " " -f 2 "/proc/mounts" | grep "^${target}" || true) # don't let grep error out.
while [[ ${#current_mount_list[@]} -gt 0 ]]; do
if [[ $tries -gt 10 ]]; then
display_alert "${#current_mount_list[@]} dirs still mounted after ${tries} tries:" "${current_mount_list[*]}" "wrn"
fi
cut -d " " -f 2 "/proc/mounts" | grep "^${target}" | xargs -n1 umount --recursive &> /dev/null || true # ignore errors
sync # wait for fsync, then count again for next loop.
mapfile -t current_mount_list < <(cut -d " " -f 2 "/proc/mounts" | grep "^${target}" || true)
tries=$((tries + 1))
done
display_alert "Unmounted OK after ${tries} attempt(s)" "$target" "info"
return 0
}


@@ -1,19 +1,21 @@
#!/bin/bash
# cleaning <target>
#
# target: what to clean
# "make-atf" = make clean for ATF, if it is built.
# "make-uboot" = make clean for uboot, if it is built.
# "make-kernel" = make clean for kernel, if it is built. very slow.
# *important*: "make" by itself has been disabled, since Armbian knows how to handle Make timestamping now.
# "debs" = delete packages in "./output/debs" for current branch and family. causes rebuilds, hopefully cached.
# "ubootdebs" = delete output/debs for uboot&board&branch
# "alldebs" = delete all packages in "./output/debs"
# "images" = delete "./output/images"
# "cache" = delete "./output/cache"
# "sources" = delete "./sources"
# "oldcache" = remove old cached rootfs except for the newest 8 files
general_cleaning() {
case $1 in
debs) # delete ${DEB_STORAGE} for current branch and family
if [[ -d "${DEB_STORAGE}" ]]; then
@@ -59,7 +61,7 @@ cleaning() {
[[ -d "${DEST}"/images ]] && display_alert "Cleaning" "output/images" "info" && rm -rf "${DEST}"/images/*
;;
sources) # delete cache/sources and output/buildpkg
[[ -d "${SRC}"/cache/sources ]] && display_alert "Cleaning" "sources" "info" && rm -rf "${SRC}"/cache/sources/* "${DEST}"/buildpkg/*
;;


@@ -40,7 +40,12 @@ function get_urls() {
echo "${urls[@]}"
}
# Terrible idea, this runs download_and_verify_internal() with error handling disabled.
function download_and_verify() {
download_and_verify_internal "${@}" || true
}
function download_and_verify_internal() {
local catalog=$1
local filename=$2
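The wrapper idiom used by `download_and_verify` (a public function that deliberately swallows its internal implementation's failures with `|| true`) works for any pair of functions. A minimal sketch; `unreliable` and `unreliable_internal` are illustrative names:

```shell
# Public wrapper never fails even though the internal helper does, mirroring
# download_and_verify vs download_and_verify_internal.
unreliable_internal() {
	return 1 # pretend the download/verification failed
}
unreliable() {
	unreliable_internal "$@" || true # swallow the failure on purpose
}
unreliable
echo "wrapper exit code: $?" # always 0
```

Whether swallowing errors here is wise is questioned by the comment in the source itself ("Terrible idea"); the pattern is shown only to make the control flow explicit.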


@@ -1,173 +1,38 @@
#!/usr/bin/env bash
#
# This function retries Git operations to avoid failure in case remote is borked
# If the git team needs to call a remote server, use this function.
#
improved_git() {
local real_git
real_git="$(command -v git)"
local retries=3
local delay=10
local count=0
while [ $count -lt $retries ]; do
run_host_command_logged_raw "$real_git" --no-pager "$@" && return 0 # this gobbles up errors, but returns if OK, so everything after is error
count=$((count + 1))
display_alert "improved_git try $count failed, retrying in ${delay} seconds" "git $*" "warn"
sleep $delay
done
display_alert "improved_git, too many retries" "git $*" "err"
return 17 # explode with error if this is reached, "too many retries"
}
# Not improved, just regular, but logged "correctly".
regular_git() {
run_host_command_logged_raw git --no-pager "$@"
}
# avoid repeating myself too much
function improved_git_fetch() {
improved_git fetch --progress --verbose --no-auto-maintenance "$@"
}
# workaround new limitations imposed by CVE-2022-24765 fix in git, otherwise "fatal: unsafe repository"
function git_ensure_safe_directory() {
local git_dir="$1"
display_alert "git: Marking directory as safe" "$git_dir" "debug"
run_host_command_logged git config --global --add safe.directory "$git_dir"
}
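The retry strategy in `improved_git` can be sketched generically: run a command up to N times, return 0 on the first success, and return the same "too many retries" code 17 when every attempt fails. `retry_command` and `flaky` are illustrative names, and the one-second delay is shortened from the real ten:

```shell
# Generic version of the improved_git retry loop.
retry_command() {
	local retries=3 delay=1 count=0
	while [ "$count" -lt "$retries" ]; do
		"$@" && return 0 # success: stop retrying
		count=$((count + 1))
		echo "try $count failed, retrying in ${delay}s: $*" >&2
		sleep "$delay"
	done
	return 17 # mirrors improved_git's "too many retries" exit code
}
attempts=0
flaky() {
	attempts=$((attempts + 1))
	[ "$attempts" -ge 2 ] # fails on the first call, succeeds on the second
}
retry_command flaky 2> /dev/null && echo "succeeded after ${attempts} attempts"
```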
# fetch_from_repo <url> <directory> <ref> <ref_subdir> # fetch_from_repo <url> <directory> <ref> <ref_subdir>
@@ -184,168 +49,343 @@ waiter_local_git() {
# <ref_subdir>: "yes" to create subdirectory for tag or branch name # <ref_subdir>: "yes" to create subdirectory for tag or branch name
# #
fetch_from_repo() { fetch_from_repo() {
display_alert "fetch_from_repo" "$*" "git"
local url=$1 local url=$1
local dir=$2 local dir=$2
local ref=$3 local ref=$3
local ref_subdir=$4 local ref_subdir=$4
local git_work_dir
# Set GitHub mirror before anything else touches $url # Set GitHub mirror before anything else touches $url
url=${url//'https://github.com/'/$GITHUB_SOURCE'/'} url=${url//'https://github.com/'/$GITHUB_SOURCE'/'}
# The 'offline' variable must always be set to 'true' or 'false' # The 'offline' variable must always be set to 'true' or 'false'
if [ "$OFFLINE_WORK" == "yes" ]; then local offline=false
local offline=true if [[ "${OFFLINE_WORK}" == "yes" ]]; then
else offline=true
local offline=false
fi fi
[[ -z $ref || ($ref != tag:* && $ref != branch:* && $ref != head && $ref != commit:*) ]] && exit_with_error "Error in configuration" [[ -z $ref || ($ref != tag:* && $ref != branch:* && $ref != head && $ref != commit:*) ]] && exit_with_error "Error in configuration"
local ref_type=${ref%%:*} local ref_type=${ref%%:*} ref_name=${ref##*:}
if [[ $ref_type == head ]]; then if [[ $ref_type == head ]]; then
local ref_name=HEAD ref_name=HEAD
else
local ref_name=${ref##*:}
fi fi
display_alert "Checking git sources" "$dir $ref_name" "info" display_alert "Getting sources from Git" "$dir $ref_name" "info"
# get default remote branch name without cloning
# local ref_name=$(git ls-remote --symref $url HEAD | grep -o 'refs/heads/\S*' | sed 's%refs/heads/%%')
# for git:// protocol comparing hashes of "git ls-remote -h $url" and "git ls-remote --symref $url HEAD" is needed
local workdir=$dir
if [[ $ref_subdir == yes ]]; then if [[ $ref_subdir == yes ]]; then
local workdir=$dir/$ref_name workdir=$dir/$ref_name
else
local workdir=$dir
fi fi
mkdir -p "${SRC}/cache/sources/${workdir}" 2> /dev/null || git_work_dir="${SRC}/cache/sources/${workdir}"
exit_with_error "No path or no write permission" "${SRC}/cache/sources/${workdir}"
cd "${SRC}/cache/sources/${workdir}" || exit # if GIT_FIXED_WORKDIR has something, ignore above logic and use that directly.
if [[ "${GIT_FIXED_WORKDIR}" != "" ]]; then
# check if existing remote URL for the repo or branch does not match current one display_alert "GIT_FIXED_WORKDIR is set to" "${GIT_FIXED_WORKDIR}" "git"
# may not be supported by older git versions git_work_dir="${SRC}/cache/sources/${GIT_FIXED_WORKDIR}"
# Check the folder as a git repository.
# Then the target URL matches the local URL.
if [[ "$(git rev-parse --git-dir 2> /dev/null)" == ".git" &&
"$url" != *"$(git remote get-url origin | sed 's/^.*@//' | sed 's/^.*\/\///' 2> /dev/null)" ]]; then
display_alert "Remote URL does not match, removing existing local copy"
rm -rf .git ./*
fi fi
if [[ "$(git rev-parse --git-dir 2> /dev/null)" != ".git" ]]; then mkdir -p "${git_work_dir}" || exit_with_error "No path or no write permission" "${git_work_dir}"
display_alert "Creating local copy"
git init -q . cd "${git_work_dir}" || exit
git remote add origin "${url}"
# Here you need to upload from a new address display_alert "Git working dir" "${git_work_dir}" "git"
offline=false
git_ensure_safe_directory "${git_work_dir}"
local expected_origin_url actual_origin_url
expected_origin_url="$(echo -n "${url}" | sed 's/^.*@//' | sed 's/^.*\/\///')"
# Make sure the origin matches what is expected. If it doesn't, clean up and start again.
if [[ "$(git rev-parse --git-dir)" == ".git" ]]; then
actual_origin_url="$(git config remote.origin.url | sed 's/^.*@//' | sed 's/^.*\/\///')"
if [[ "${expected_origin_url}" != "${actual_origin_url}" ]]; then
display_alert "Remote git URL does not match, deleting working copy" "${git_work_dir} expected: '${expected_origin_url}' actual: '${actual_origin_url}'" "warn"
cd "${SRC}" || exit 3 # free up cwd
run_host_command_logged rm -rf "${git_work_dir}" # delete the dir
mkdir -p "${git_work_dir}" || exit_with_error "No path or no write permission" "${git_work_dir}" # recreate
cd "${git_work_dir}" || exit #reset cwd
fi
fi
local do_warmup_remote="no" do_cold_bundle="no" do_add_origin="no"
if [[ "$(git rev-parse --git-dir)" != ".git" ]]; then
# Dir is not a git working copy. Make it so.
display_alert "Creating local copy" "$dir $ref_name"
regular_git init -q --initial-branch="armbian_unused_initial_branch" .
offline=false # Force online, we'll need to fetch.
do_add_origin="yes" # Just created the repo, it needs an origin later.
do_warmup_remote="yes" # Just created the repo, mark it as ready to receive the warm remote if exists.
do_cold_bundle="yes" # Just created the repo, mark it as ready to receive a cold bundle if that is available.
# @TODO: possibly hang a cleanup handler here: if this fails, ${git_work_dir} should be removed.
fi fi
local changed=false local changed=false
# get local hash; might fail
local local_hash
local_hash=$(git rev-parse @ 2> /dev/null || true) # Don't fail nor output anything if failure
# when we work offline we simply return the sources to their original state # when we work offline we simply return the sources to their original state
if ! $offline; then if ! $offline; then
local local_hash
local_hash=$(git rev-parse @ 2> /dev/null)
case $ref_type in case $ref_type in
branch) branch)
# TODO: grep refs/heads/$name # TODO: grep refs/heads/$name
local remote_hash local remote_hash
remote_hash=$(improved_git ls-remote -h "${url}" "$ref_name" | head -1 | cut -f1) remote_hash=$(git ls-remote -h "${url}" "$ref_name" | head -1 | cut -f1)
[[ -z $local_hash || "${local_hash}" != "${remote_hash}" ]] && changed=true [[ -z $local_hash || "${local_hash}" != "a${remote_hash}" ]] && changed=true
;; ;;
tag) tag)
local remote_hash local remote_hash
remote_hash=$(improved_git ls-remote -t "${url}" "$ref_name" | cut -f1) remote_hash=$(git ls-remote -t "${url}" "$ref_name" | cut -f1)
if [[ -z $local_hash || "${local_hash}" != "${remote_hash}" ]]; then if [[ -z $local_hash || "${local_hash}" != "${remote_hash}" ]]; then
remote_hash=$(improved_git ls-remote -t "${url}" "$ref_name^{}" | cut -f1) remote_hash=$(git ls-remote -t "${url}" "$ref_name^{}" | cut -f1)
[[ -z $remote_hash || "${local_hash}" != "${remote_hash}" ]] && changed=true [[ -z $remote_hash || "${local_hash}" != "${remote_hash}" ]] && changed=true
fi fi
;; ;;
head) head)
local remote_hash local remote_hash
remote_hash=$(improved_git ls-remote "${url}" HEAD | cut -f1) remote_hash=$(git ls-remote "${url}" HEAD | cut -f1)
[[ -z $local_hash || "${local_hash}" != "${remote_hash}" ]] && changed=true [[ -z $local_hash || "${local_hash}" != "${remote_hash}" ]] && changed=true
;; ;;
commit) commit)
[[ -z $local_hash || $local_hash == "@" ]] && changed=true [[ -z $local_hash || $local_hash == "@" ]] && changed=true
;; ;;
esac esac
display_alert "Git local_hash vs remote_hash" "${local_hash} vs ${remote_hash}" "git"
fi # offline fi # offline
local checkout_from="HEAD" # Probably best to use the local revision?
if [[ "${changed}" == "true" ]]; then
git_handle_cold_and_warm_bundle_remotes # Delegate to function to find or create cache if appropriate.
# remote was updated, fetch and check out updates, but not tags; tags pull their respective commits too, making it a huge fetch.
display_alert "Fetching updates from origin" "$dir $ref_name"
case $ref_type in
branch | commit) improved_git_fetch --no-tags origin "${ref_name}" ;;
tag) improved_git_fetch --no-tags origin tags/"${ref_name}" ;;
head) improved_git_fetch --no-tags origin HEAD ;;
esac
display_alert "Origin fetch completed, working copy size" "$(du -h -s | awk '{print $1}')" "git"
checkout_from="FETCH_HEAD"
fi
# should be declared in outside scope, so can be read.
checked_out_revision_ts="$(git log -1 --pretty=%ct "${checkout_from}")" # unix timestamp of the commit date
checked_out_revision_mtime="$(date +%Y%m%d%H%M%S -d "@${checked_out_revision_ts}")" # convert timestamp to local date/time
display_alert "checked_out_revision_mtime set!" "${checked_out_revision_mtime} - ${checked_out_revision_ts}" "git"
display_alert "Cleaning git dir" "$(git status -s 2> /dev/null | wc -l) files" # working directory is not clean, show it
#fasthash_debug "before git checkout of $dir $ref_name" # fasthash interested in this
regular_git checkout -f -q "${checkout_from}" # Return the files that are tracked by git to the initial state.
#fasthash_debug "before git clean of $dir $ref_name"
regular_git clean -q -d -f # Files that are not tracked by git and were added when the patch was applied must be removed.
# set the checkout date on all the versioned files.
# @TODO: this is contentious. disable for now. patches will still use the minimum date set by checked_out_revision_mtime above
#git ls-tree -r -z --name-only "${checkout_from}" | xargs -0 -- touch -m -t "${checked_out_revision_mtime:0:12}.${checked_out_revision_mtime:12}"
#fasthash_debug "after setting checkout time for $dir $ref_name"
if [[ -f .gitmodules ]]; then
if [[ "${GIT_SKIP_SUBMODULES}" == "yes" ]]; then
display_alert "Skipping submodules" "GIT_SKIP_SUBMODULES=yes" "debug"
else
display_alert "Updating submodules" "" "ext"
# FML: http://stackoverflow.com/a/17692710
for i in $(git config -f .gitmodules --get-regexp path | awk '{ print $2 }'); do
cd "${git_work_dir}" || exit
local surl sref
surl=$(git config -f .gitmodules --get "submodule.$i.url")
sref=$(git config -f .gitmodules --get "submodule.$i.branch" || true)
if [[ -n $sref ]]; then
sref="branch:$sref"
else
sref="head"
fi
# @TODO: in case of the bundle stuff this will fail terribly
fetch_from_repo "$surl" "$workdir/$i" "$sref"
done
fi
fi
display_alert "Final working copy size" "$(du -h -s | awk '{print $1}')" "git"
#fasthash_debug "at the end of fetch_from_repo $dir $ref_name"
}
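The `checked_out_revision_mtime` derivation above turns a commit's unix timestamp into the `CCYYMMDDhhmm.ss` form that `touch -m -t` expects. A minimal standalone sketch of that conversion (assumes GNU `date`, as the build script itself does; `-u` is used here only to make the example deterministic):

```bash
#!/usr/bin/env bash
# Convert a commit's unix timestamp into a touch(1)-compatible stamp.
ts=0                                         # e.g. from: git log -1 --pretty=%ct HEAD
mtime="$(date -u +%Y%m%d%H%M%S -d "@${ts}")" # -> YYYYMMDDHHMMSS
echo "${mtime}"                              # 19700101000000 for the epoch
# touch -m -t wants CCYYMMDDhhmm.ss, hence the substring split in the xargs line above:
echo "${mtime:0:12}.${mtime:12}"             # 197001010000.00
```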
function git_fetch_from_bundle_file() {
local bundle_file="${1}" remote_name="${2}" shallow_file="${3}"
regular_git bundle verify "${bundle_file}" # Make sure bundle is valid.
regular_git remote add "${remote_name}" "${bundle_file}" # Add the remote pointing to the cold bundle file
if [[ -f "${shallow_file}" ]]; then
display_alert "Bundle is shallow" "${shallow_file}" "git"
cp -p "${shallow_file}" ".git/shallow"
fi
improved_git_fetch --tags "${remote_name}" # Fetch it! (including tags!)
display_alert "Bundle fetch '${remote_name}' completed, working copy size" "$(du -h -s | awk '{print $1}')" "git"
}
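The trick `git_fetch_from_bundle_file` relies on is that a bundle file is a valid remote URL. A self-contained round-trip sketch (throwaway repos in a temp dir; identities and messages are illustrative only):

```bash
#!/usr/bin/env bash
# Create a repo, export it as a bundle, then fetch from the bundle file as a remote.
set -e
tmp="$(mktemp -d)"
git init -q "${tmp}/src"
git -C "${tmp}/src" -c user.email=a@b -c user.name=a commit -q --allow-empty -m "initial"
git -C "${tmp}/src" bundle create "${tmp}/src.gitbundle" --all # same as the warm-bundle export
git init -q "${tmp}/dst"
git -C "${tmp}/dst" bundle verify "${tmp}/src.gitbundle" > /dev/null # make sure bundle is valid
git -C "${tmp}/dst" remote add cold_bundle "${tmp}/src.gitbundle"    # bundle file acts as a remote
git -C "${tmp}/dst" fetch -q cold_bundle
git -C "${tmp}/dst" log --oneline FETCH_HEAD
rm -rf "${tmp}"
```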
function download_git_bundle_from_http() {
local bundle_file="${1}" bundle_url="${2}"
if [[ ! -f "${git_cold_bundle_cache_file}" ]]; then # Download the bundle file if it does not exist.
display_alert "Downloading Git cold bundle via HTTP" "${bundle_url}" "info" # This gonna take a while. And waste bandwidth
run_host_command_logged wget --continue --progress=dot:giga --output-document="${bundle_file}" "${bundle_url}"
else
display_alert "Cold bundle file exists, using it" "${bundle_file}" "git"
fi
}
function git_remove_cold_and_warm_bundle_remotes() {
# Remove the cold bundle remote, otherwise it holds references that impede the shallow to actually work.
if [[ ${has_git_cold_remote} -gt 0 ]]; then
regular_git remote remove "${git_cold_bundle_remote_name}"
has_git_cold_remote=0
fi
# Remove the warmup remote, otherwise it holds references forever.
if [[ ${has_git_warm_remote} -gt 0 ]]; then
regular_git remote remove "${GIT_WARM_REMOTE_NAME}"
has_git_warm_remote=0
fi
}
function git_handle_cold_and_warm_bundle_remotes() {
local has_git_cold_remote=0
local has_git_warm_remote=0
local git_warm_remote_bundle_file git_warm_remote_bundle_cache_dir git_warm_remote_bundle_file_shallowfile
local git_warm_remote_bundle_extra_fn=""
# First check the warm remote bundle cache. If that exists, use that, and skip the cold bundle.
if [[ "${do_warmup_remote}" == "yes" ]]; then
if [[ "${GIT_WARM_REMOTE_NAME}" != "" ]] && [[ "${GIT_WARM_REMOTE_BUNDLE}" != "" ]]; then
# Add extras to filename, for shallow by tag or revision
if [[ "${GIT_WARM_REMOTE_SHALLOW_REVISION}" != "" ]]; then
git_warm_remote_bundle_extra_fn="-shallow-rev-${GIT_WARM_REMOTE_SHALLOW_REVISION}"
elif [[ "${GIT_WARM_REMOTE_SHALLOW_AT_TAG}" != "" ]]; then
git_warm_remote_bundle_extra_fn="-shallow-tag-${GIT_WARM_REMOTE_SHALLOW_AT_TAG}"
fi
git_warm_remote_bundle_cache_dir="${SRC}/cache/gitbundles/warm" # calculate the id, dir and name of local file and remote
git_warm_remote_bundle_file="${git_warm_remote_bundle_cache_dir}/${GIT_WARM_REMOTE_BUNDLE}${git_warm_remote_bundle_extra_fn}.gitbundle" # final filename of bundle
git_warm_remote_bundle_file_shallowfile="${git_warm_remote_bundle_file}.shallow" # there may be a shallow-revision file alongside the bundle
if [[ -f "${git_warm_remote_bundle_file}" ]]; then
display_alert "Fetching from warm git bundle, wait" "${GIT_WARM_REMOTE_BUNDLE}" "info" # This is gonna take a long while...
git_fetch_from_bundle_file "${git_warm_remote_bundle_file}" "${GIT_WARM_REMOTE_NAME}" "${git_warm_remote_bundle_file_shallowfile}"
do_cold_bundle="no" # Skip the cold bundle, below.
do_warmup_remote="no" # Skip the warm bundle creation, below, too.
has_git_warm_remote=1 # mark warm remote as added.
else
display_alert "Could not find warm bundle file" "${git_warm_remote_bundle_file}" "git"
fi
fi
fi
if [[ "${do_cold_bundle}" == "yes" ]]; then
# If there's a cold bundle URL specified:
# - if there's already a cold_bundle_xxx remote, move on.
# - grab the bundle via http/https first, and fetch from that, into "cold_bundle_xxx" remote.
# - do nothing else with this, it'll be used internally by git to avoid a huge fetch later.
# - but, after this, the wanted branch will be fetched. signal has_git_cold_remote=1 for later.
if [[ "${GIT_COLD_BUNDLE_URL}" != "" ]]; then
local git_cold_bundle_id git_cold_bundle_cache_dir git_cold_bundle_cache_file git_cold_bundle_remote_name
git_cold_bundle_cache_dir="${SRC}/cache/gitbundles/cold" # calculate the id, dir and name of local file and remote
git_cold_bundle_id="$(echo -n "${GIT_COLD_BUNDLE_URL}" | md5sum | awk '{print $1}')" # md5 of the URL.
git_cold_bundle_cache_file="${git_cold_bundle_cache_dir}/${git_cold_bundle_id}.gitbundle" # final filename of bundle
git_cold_bundle_remote_name="cold_bundle_${git_cold_bundle_id}" # name of the remote that will point to bundle
mkdir -p "${git_cold_bundle_cache_dir}" # make sure directory exists before downloading
download_git_bundle_from_http "${git_cold_bundle_cache_file}" "${GIT_COLD_BUNDLE_URL}"
display_alert "Fetching from cold git bundle, wait" "${git_cold_bundle_id}" "info" # This is gonna take a long while...
git_fetch_from_bundle_file "${git_cold_bundle_cache_file}" "${git_cold_bundle_remote_name}"
has_git_cold_remote=1 # marker for pruning logic below
fi
fi
# If there's a warmup remote specified.
# - if there's a cached warmup bundle file, add it as remote and fetch from it, and move on.
# - add the warmup as remote, fetch from it; export it as a cached bundle for next time.
if [[ "${do_warmup_remote}" == "yes" ]]; then
if [[ "${GIT_WARM_REMOTE_NAME}" != "" ]] && [[ "${GIT_WARM_REMOTE_URL}" != "" ]] && [[ "${GIT_WARM_REMOTE_BRANCH}" != "" ]]; then
display_alert "Using Warmup Remote before origin fetch" "${GIT_WARM_REMOTE_NAME} - ${GIT_WARM_REMOTE_BRANCH}" "git"
regular_git remote add "${GIT_WARM_REMOTE_NAME}" "${GIT_WARM_REMOTE_URL}" # Add the remote to the warmup source
has_git_warm_remote=1 # mark as done. Will export the bundle!
improved_git_fetch --no-tags "${GIT_WARM_REMOTE_NAME}" "${GIT_WARM_REMOTE_BRANCH}" # Fetch the remote branch, but no tags
display_alert "After warm bundle, working copy size" "$(du -h -s | awk '{print $1}')" "git" # Show size after bundle pull
# Checkout that to a branch. We wanna have a local reference to what has been fetched.
# @TODO: could be a param instead of FETCH_HEAD; would drop commits after that rev
local git_warm_branch_name="warm__${GIT_WARM_REMOTE_BRANCH}"
regular_git branch "${git_warm_branch_name}" FETCH_HEAD || true
improved_git_fetch "${GIT_WARM_REMOTE_NAME}" "'refs/tags/${GIT_WARM_REMOTE_FETCH_TAGS}:refs/tags/${GIT_WARM_REMOTE_FETCH_TAGS}'" || true # Fetch the remote branch, but no tags
display_alert "After warm bundle tags, working copy size" "$(du -h -s | awk '{print $1}')" "git" # Show size after bundle pull
# Lookup the tag (at the warm remote directly) to find the rev to shallow to.
if [[ "${GIT_WARM_REMOTE_SHALLOW_AT_TAG}" != "" ]]; then
display_alert "GIT_WARM_REMOTE_SHALLOW_AT_TAG" "${GIT_WARM_REMOTE_SHALLOW_AT_TAG}" "git"
GIT_WARM_REMOTE_SHALLOW_AT_DATE="$(git tag --list --format="%(creatordate)" "${GIT_WARM_REMOTE_SHALLOW_AT_TAG}")"
display_alert "GIT_WARM_REMOTE_SHALLOW_AT_TAG ${GIT_WARM_REMOTE_SHALLOW_AT_TAG} resulted in GIT_WARM_REMOTE_SHALLOW_AT_DATE" "Date: ${GIT_WARM_REMOTE_SHALLOW_AT_DATE}" "git"
fi
# At this stage, we might wanna make the local copy shallow and re-pack it.
if [[ "${GIT_WARM_REMOTE_SHALLOW_AT_DATE}" != "" ]]; then
display_alert "Making working copy shallow" "before date ${GIT_WARM_REMOTE_SHALLOW_AT_DATE}" "info"
# 'git clone' is the only consistent, usable thing we can do to do this.
# it does require a temporary dir, though. use one.
local temp_git_dir="${git_work_dir}.making.shallow.temp"
rm -rf "${temp_git_dir}"
regular_git clone --no-checkout --progress --verbose \
--single-branch --branch="${git_warm_branch_name}" \
--tags --shallow-since="${GIT_WARM_REMOTE_SHALLOW_AT_DATE}" \
"file://${git_work_dir}" "${temp_git_dir}"
display_alert "After shallow clone, temp_git_dir" "$(du -h -s "${temp_git_dir}" | awk '{print $1}')" "git" # Show size after shallow
# Get rid of original, replace with new. Move cwd so no warnings are produced.
cd "${SRC}" || exit_with_error "Failed to move cwd away so we can remove" "${git_work_dir}"
rm -rf "${git_work_dir}"
mv -v "${temp_git_dir}" "${git_work_dir}"
cd "${git_work_dir}" || exit_with_error "Failed to get new dir after clone" "${git_work_dir}"
# dir switched, no more the original remotes. but origin is leftover, remove it
regular_git remote remove origin || true
has_git_cold_remote=0
has_git_warm_remote=0
display_alert "After shallow, working copy size" "$(du -h -s | awk '{print $1}')" "git" # Show size after shallow
fi
# Now git working copy has a precious state we might wanna preserve (export the bundle).
if [[ "${GIT_WARM_REMOTE_BUNDLE}" != "" ]]; then
mkdir -p "${git_warm_remote_bundle_cache_dir}"
display_alert "Exporting warm remote bundle" "${git_warm_remote_bundle_file}" "info"
regular_git bundle create "${git_warm_remote_bundle_file}" --all
rm -f "${git_warm_remote_bundle_file_shallowfile}" # not shallow at first...
if [[ -f ".git/shallow" ]]; then
display_alert "Exported bundle is shallow" "Will copy to ${git_warm_remote_bundle_file_shallowfile}" "git"
cp -p ".git/shallow" "${git_warm_remote_bundle_file_shallowfile}"
fi
fi
fi
fi
# Make sure to remove the cold and warm bundle remote, otherwise it holds references for no good reason.
git_remove_cold_and_warm_bundle_remotes
}


@@ -0,0 +1,75 @@
# Management of apt-cacher-ng aka acng
function acng_configure_and_restart_acng() {
[[ $NO_APT_CACHER == yes ]] && return 0 # don't if told not to. NO_something=yes is very confusing, but kept for historical reasons
[[ "${APT_PROXY_ADDR:-localhost:3142}" != "localhost:3142" ]] && return 0 # also not if acng not local to builder machine
display_alert "Preparing acng configuration" "apt-cacher-ng" "info"
run_host_command_logged systemctl stop apt-cacher-ng || true # ignore errors, it might already be stopped.
[[ ! -f /etc/apt-cacher-ng/acng.conf.orig.pre.armbian ]] && cp /etc/apt-cacher-ng/acng.conf /etc/apt-cacher-ng/acng.conf.orig.pre.armbian
cat <<- ACNG_CONFIG > /etc/apt-cacher-ng/acng.conf
CacheDir: ${APT_CACHER_NG_CACHE_DIR:-/var/cache/apt-cacher-ng}
LogDir: /var/log/apt-cacher-ng
SupportDir: /usr/lib/apt-cacher-ng
LocalDirs: acng-doc /usr/share/doc/apt-cacher-ng
ReportPage: acng-report.html
ExThreshold: 4
# Remapping is disabled, many times we hit broken mirrors due to this.
#Remap-debrep: file:deb_mirror*.gz /debian ; file:backends_debian # Debian Archives
#Remap-uburep: file:ubuntu_mirrors /ubuntu ; file:backends_ubuntu # Ubuntu Archives
# Turn on debug logging and verbosity
Debug: 7
VerboseLog: 1
# Connections tuning.
MaxStandbyConThreads: 10
DlMaxRetries: 50
NetworkTimeout: 60
FastTimeout: 20
ConnectProto: v4 v6
RedirMax: 15
ReuseConnections: 1
# Allow HTTPS CONNECT, although this is not ideal, since packages are not actually cached.
# Enabled, since PPA's require this.
PassThroughPattern: .*
ACNG_CONFIG
# Ensure correct permissions on the directories
mkdir -p "${APT_CACHER_NG_CACHE_DIR:-/var/cache/apt-cacher-ng}" /var/log/apt-cacher-ng
chown apt-cacher-ng:apt-cacher-ng "${APT_CACHER_NG_CACHE_DIR:-/var/cache/apt-cacher-ng}" /var/log/apt-cacher-ng
if [[ "${APT_CACHER_NG_CLEAR_LOGS}" == "yes" ]]; then
display_alert "Clearing acng logs" "apt-cacher-ng logs cleaning" "debug"
run_host_command_logged rm -rfv /var/log/apt-cacher-ng/*
fi
run_host_command_logged systemctl start apt-cacher-ng
run_host_command_logged systemctl status apt-cacher-ng
}
function acng_check_status_or_restart() {
[[ $NO_APT_CACHER == yes ]] && return 0 # don't if told not to
[[ "${APT_PROXY_ADDR:-localhost:3142}" != "localhost:3142" ]] && return 0 # also not if acng not local to builder machine
if ! systemctl -q is-active apt-cacher-ng.service; then
display_alert "ACNG systemd service is not active" "restarting apt-cacher-ng" "warn"
acng_configure_and_restart_acng
fi
if ! wget -q --timeout=10 --output-document=/dev/null http://localhost:3142/acng-report.html; then
display_alert "ACNG is not correctly listening for requests" "restarting apt-cacher-ng" "warn"
acng_configure_and_restart_acng
if ! wget -q --timeout=10 --output-document=/dev/null http://localhost:3142/acng-report.html; then
exit_with_error "ACNG is not correctly listening for requests" "apt-cacher-ng NOT WORKING"
fi
fi
display_alert "apt-cacher-ng running correctly" "apt-cacher-ng OK" "debug"
}
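For context, the client side of acng (wired up elsewhere in the build system via `APT_PROXY_ADDR`, not in this hunk) is just an apt proxy setting. A hypothetical minimal snippet, assuming the default `localhost:3142` address:

```shell
# Hypothetical client-side configuration: route apt traffic through the cache.
# The build system manages this itself via APT_PROXY_ADDR; shown only to illustrate
# how apt-cacher-ng is consumed.
echo 'Acquire::http::Proxy "http://localhost:3142";' > /etc/apt/apt.conf.d/02proxy
```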


@@ -12,7 +12,7 @@ prepare_host_basic() {
"dialog:dialog" "dialog:dialog"
"fuser:psmisc" "fuser:psmisc"
"getfacl:acl" "getfacl:acl"
"uuid:uuid uuid-runtime" "uuidgen:uuid-runtime"
"curl:curl" "curl:curl"
"gpg:gnupg" "gpg:gnupg"
"gawk:gawk" "gawk:gawk"
@@ -24,7 +24,8 @@ prepare_host_basic() {
if [[ -n $install_pack ]]; then
display_alert "Updating and installing basic packages on host" "$install_pack"
run_host_command_logged sudo apt-get -qq update
run_host_command_logged sudo apt-get install -qq -y --no-install-recommends $install_pack
fi
}
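The `"binary:package"` pairs above follow a small convention: the part before the colon is the command probed on the host, the part after is the package that provides it. A sketch of how such a list is consumed:

```bash
#!/usr/bin/env bash
# Split "binary:package" pairs and collect the packages whose command is missing.
pairs=("uuidgen:uuid-runtime" "curl:curl" "gpg:gnupg")
install_pack=""
for p in "${pairs[@]}"; do
	cmd="${p%%:*}" # text before the first ':' -> command to probe
	pkg="${p#*:}"  # text after the first ':'  -> package(s) to install
	command -v "${cmd}" > /dev/null || install_pack+=" ${pkg}"
done
echo "missing:${install_pack}"
```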


@@ -0,0 +1,70 @@
# This is mostly deprecated, since SKIP_EXTERNAL_TOOLCHAINS=yes by default.
function download_external_toolchains() {
# build aarch64
if [[ $(dpkg --print-architecture) == amd64 ]]; then
if [[ "${SKIP_EXTERNAL_TOOLCHAINS}" != "yes" ]]; then
# bind mount toolchain if defined
if [[ -d "${ARMBIAN_CACHE_TOOLCHAIN_PATH}" ]]; then
mountpoint -q "${SRC}"/cache/toolchain && umount -l "${SRC}"/cache/toolchain
mount --bind "${ARMBIAN_CACHE_TOOLCHAIN_PATH}" "${SRC}"/cache/toolchain
fi
display_alert "Checking for external GCC compilers" "" "info"
# download external Linaro compiler and missing special dependencies since they are needed for certain sources
local toolchains=(
"gcc-linaro-aarch64-none-elf-4.8-2013.11_linux.tar.xz"
"gcc-linaro-arm-none-eabi-4.8-2014.04_linux.tar.xz"
"gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz"
"gcc-linaro-7.4.1-2019.02-x86_64_arm-linux-gnueabi.tar.xz"
"gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz"
"gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz"
"gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz"
"gcc-arm-9.2-2019.12-x86_64-arm-none-linux-gnueabihf.tar.xz"
"gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu.tar.xz"
"gcc-arm-11.2-2022.02-x86_64-arm-none-linux-gnueabihf.tar.xz"
"gcc-arm-11.2-2022.02-x86_64-aarch64-none-linux-gnu.tar.xz"
)
USE_TORRENT_STATUS=${USE_TORRENT}
USE_TORRENT="no"
for toolchain in ${toolchains[@]}; do
local toolchain_zip="${SRC}/cache/toolchain/${toolchain}"
local toolchain_dir="${toolchain_zip%.tar.*}"
if [[ ! -f "${toolchain_dir}/.download-complete" ]]; then
download_and_verify "toolchain" "${toolchain}" ||
exit_with_error "Failed to download toolchain" "${toolchain}"
display_alert "decompressing"
pv -p -b -r -c -N "[ .... ] ${toolchain}" "${toolchain_zip}" |
xz -dc |
tar xp --xattrs --no-same-owner --overwrite -C "${SRC}/cache/toolchain/"
if [[ $? -ne 0 ]]; then
rm -rf "${toolchain_dir}"
exit_with_error "Failed to decompress toolchain" "${toolchain}"
fi
touch "${toolchain_dir}/.download-complete"
rm -rf "${toolchain_zip}"* # Also delete asc file
fi
done
USE_TORRENT=${USE_TORRENT_STATUS}
local existing_dirs=($(ls -1 "${SRC}"/cache/toolchain))
for dir in ${existing_dirs[@]}; do
local found=no
for toolchain in ${toolchains[@]}; do
[[ $dir == ${toolchain%.tar.*} ]] && found=yes
done
if [[ $found == no ]]; then
display_alert "Removing obsolete toolchain" "$dir"
rm -rf "${SRC}/cache/toolchain/${dir}"
fi
done
else
display_alert "Ignoring toolchains" "SKIP_EXTERNAL_TOOLCHAINS: ${SKIP_EXTERNAL_TOOLCHAINS}" "info"
fi
fi
}
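The `${toolchain_zip%.tar.*}` expansion above is what ties the tarball, the extracted directory, and the `.download-complete` marker together: stripping the archive suffix yields the directory name. A standalone sketch:

```bash
#!/usr/bin/env bash
# ${var%.tar.*} removes the shortest trailing match of ".tar.*",
# so the tarball name doubles as the extracted directory name.
toolchain_zip="cache/toolchain/gcc-arm-11.2-2022.02-x86_64-aarch64-none-linux-gnu.tar.xz"
toolchain_dir="${toolchain_zip%.tar.*}"
echo "${toolchain_dir}"                     # ...aarch64-none-linux-gnu (no .tar.xz)
echo "${toolchain_dir}/.download-complete"  # the completion marker checked above
```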


@@ -1,4 +1,18 @@
#!/usr/bin/env bash
function fetch_and_build_host_tools() {
call_extension_method "fetch_sources_tools" <<- 'FETCH_SOURCES_TOOLS'
*fetch host-side sources needed for tools and build*
Run early to fetch_from_repo or otherwise obtain sources for needed tools.
FETCH_SOURCES_TOOLS
call_extension_method "build_host_tools" <<- 'BUILD_HOST_TOOLS'
*build needed tools for the build, host-side*
After sources are fetched, build host-side tools needed for the build.
BUILD_HOST_TOOLS
}
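`call_extension_method` is Armbian's extension-hook dispatcher (defined elsewhere in the lib). Its actual implementation is not shown here; the underlying idea can be sketched as "invoke every function sharing a hook prefix, in order" (function names below are hypothetical):

```bash
#!/usr/bin/env bash
# Minimal sketch (NOT Armbian's real implementation) of a prefix-based hook dispatcher.
call_hooks() {
	local hook="$1" fn
	# compgen -A function lists all defined shell functions; keep those matching the hook prefix.
	for fn in $(compgen -A function | grep "^${hook}__" | sort); do
		"${fn}"
	done
}
build_host_tools__say_hi() { echo "hi from extension"; } # a hypothetical extension hook
call_hooks "build_host_tools"
```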
# wait_for_package_manager
#
# * installation will break if we try to install when package manager is running
@@ -20,3 +34,34 @@ wait_for_package_manager() {
fi
done
}
# Install the whitespace-delimited packages listed in the first parameter, in the host (not chroot).
# It handles correctly the case where all wanted packages are already installed, and in that case does nothing.
# If packages are to be installed, it does an apt-get update first.
function install_host_side_packages() {
declare wanted_packages_string
declare -a currently_installed_packages missing_packages
wanted_packages_string=${*}
missing_packages=()
# shellcheck disable=SC2207 # I wanna split, thanks.
currently_installed_packages=($(dpkg-query --show --showformat='${Package} '))
for PKG_TO_INSTALL in ${wanted_packages_string}; do
# shellcheck disable=SC2076 # I wanna match literally, thanks.
if [[ ! " ${currently_installed_packages[*]} " =~ " ${PKG_TO_INSTALL} " ]]; then
display_alert "Should install package" "${PKG_TO_INSTALL}"
missing_packages+=("${PKG_TO_INSTALL}")
fi
done
if [[ ${#missing_packages[@]} -gt 0 ]]; then
display_alert "Updating apt host-side for installing packages" "${#missing_packages[@]} packages" "info"
host_apt_get update
display_alert "Installing host-side packages" "${missing_packages[*]}" "info"
host_apt_get_install "${missing_packages[@]}"
else
display_alert "All host-side dependencies/packages already installed." "Skipping host-hide install" "debug"
fi
unset currently_installed_packages
return 0
}
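The membership check in `install_host_side_packages` pads both sides of the flattened array with spaces, turning a literal `=~` match into an exact whole-word test. A sketch of just that technique:

```bash
#!/usr/bin/env bash
# Whole-word membership test against a flattened array: the surrounding spaces
# prevent "gre" from matching inside "grep".
installed=(bash coreutils grep)
pkg="grep"
if [[ " ${installed[*]} " =~ " ${pkg} " ]]; then
	echo "already installed: ${pkg}"
fi
[[ " ${installed[*]} " =~ " gre " ]] || echo "no partial match for 'gre'"
```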


@@ -26,54 +26,66 @@ prepare_host() {
export LC_ALL="en_US.UTF-8"
# don't use mirrors that throw garbage on 404
if [[ -z ${ARMBIAN_MIRROR} && "${SKIP_ARMBIAN_REPO}" != "yes" ]]; then
display_alert "Determining best Armbian mirror to use" "via redirector" "debug"
declare -i armbian_mirror_tries=1
while true; do
display_alert "Obtaining Armbian mirror" "via https://redirect.armbian.com" "debug"
ARMBIAN_MIRROR=$(wget -SO- -T 1 -t 1 https://redirect.armbian.com 2>&1 | egrep -i "Location" | awk '{print $2}' | head -1)
if [[ ${ARMBIAN_MIRROR} != *armbian.hosthatch* ]]; then # @TODO: hosthatch is not good enough. Why?
display_alert "Obtained Armbian mirror OK" "${ARMBIAN_MIRROR}" "debug"
break
else
display_alert "Obtained Armbian mirror is invalid, retrying..." "${ARMBIAN_MIRROR}" "debug"
fi
armbian_mirror_tries=$((armbian_mirror_tries + 1))
if [[ $armbian_mirror_tries -ge 5 ]]; then
exit_with_error "Unable to obtain ARMBIAN_MIRROR after ${armbian_mirror_tries} tries. Please set ARMBIAN_MIRROR to a valid mirror manually, or avoid the automatic mirror selection by setting SKIP_ARMBIAN_REPO=yes"
fi
done
fi
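The mirror selection above is an instance of a bounded-retry loop: probe until success, bail out with an error after a fixed number of attempts. A generic, network-free sketch of the pattern (the `probe` function is a stand-in for the `wget` call):

```bash
#!/usr/bin/env bash
# Bounded retry: loop until the probe succeeds, give up after max_tries attempts.
declare -i tries=1 max_tries=5
probe() { [[ ${tries} -ge 3 ]]; } # stand-in for the wget probe; succeeds on the 3rd try
while true; do
	if probe; then
		echo "ok after ${tries} tries"
		break
	fi
	tries=$((tries + 1))
	if [[ ${tries} -gt ${max_tries} ]]; then
		echo "giving up" >&2
		exit 1
	fi
done
```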
# packages list for host
# NOTE: please sync any changes here with the Dockerfile and Vagrantfile
declare -a host_dependencies=(
# big bag of stuff from before
acl aptly bc binfmt-support bison btrfs-progs
build-essential ca-certificates ccache cpio cryptsetup
debian-archive-keyring debian-keyring debootstrap device-tree-compiler
dialog dirmngr dosfstools dwarves f2fs-tools fakeroot flex gawk
gnupg gpg imagemagick jq kmod libbison-dev
libelf-dev libfdt-dev libfile-fcntllock-perl libmpc-dev
libfl-dev liblz4-tool libncurses-dev libssl-dev
libusb-1.0-0-dev linux-base locales ncurses-base ncurses-term
ntpdate patchutils
pkg-config pv python3-dev python3-distutils qemu-user-static rsync swig
systemd-container u-boot-tools udev uuid-dev whiptail
zlib1g-dev busybox
# python2, including headers, mostly used by some u-boot builds (2017 et al, odroidxu4 and others).
python2 python2-dev
# non-mess below?
file ccze colorized-logs tree # logging utilities
unzip zip p7zip-full pigz pixz pbzip2 lzop zstd # compressors et al
parted gdisk # partition tools
aria2 curl wget # downloaders et al
parallel # do things in parallel
# toolchains. NEW: using metapackages allows us to keep the same list for all arches; brings both C and C++ compilers
crossbuild-essential-armhf crossbuild-essential-armel # for ARM 32-bit, both HF and EL are needed in some cases.
crossbuild-essential-arm64 # For ARM 64-bit, arm64.
crossbuild-essential-amd64 # For AMD 64-bit, x86_64.
)
if [[ $(dpkg --print-architecture) == amd64 ]]; then
: # all covered by the metapackages above
elif [[ $(dpkg --print-architecture) == arm64 ]]; then
host_dependencies+=(libc6-amd64-cross qemu) # Support for running x86 binaries on ARM64 under qemu.
else
display_alert "Please read documentation to set up proper compilation environment"
display_alert "https://www.armbian.com/using-armbian-tools/"
exit_with_error "Running this tool on non x86_64 or arm64 build host is not supported"
fi
display_alert "Build host OS release" "${HOSTRELEASE:-(unknown)}" "info" display_alert "Build host OS release" "${HOSTRELEASE:-(unknown)}" "info"
@@ -119,10 +131,11 @@ prepare_host() {
# Skip verification if you are working offline
if ! $offline; then
# warning: apt-cacher-ng will fail if installed and used both on host and in container/chroot environment with shared network
# set NO_APT_CACHER=yes to prevent installation errors in such case
if [[ $NO_APT_CACHER != yes ]]; then
host_dependencies+=("apt-cacher-ng")
fi
export EXTRA_BUILD_DEPS=""
call_extension_method "add_host_dependencies" <<- 'ADD_HOST_DEPENDENCIES'
@@ -130,19 +143,22 @@ prepare_host() {
you can add packages to install, space separated, to ${EXTRA_BUILD_DEPS} here.
ADD_HOST_DEPENDENCIES
if [ -n "${EXTRA_BUILD_DEPS}" ]; then
# shellcheck disable=SC2206 # I wanna expand. @TODO: later will convert to proper array
host_dependencies+=(${EXTRA_BUILD_DEPS})
fi
display_alert "Installing build dependencies"
# don't prompt for apt cacher selection. this is to skip the prompt only, since we'll manage acng config later.
sudo echo "apt-cacher-ng apt-cacher-ng/tunnelenable boolean false" | sudo debconf-set-selections
# This handles the wanted list in $host_dependencies, updates apt only if needed
install_host_side_packages "${host_dependencies[@]}"
run_host_command_logged update-ccache-symlinks
export FINAL_HOST_DEPS="${host_dependencies[*]}"
call_extension_method "host_dependencies_ready" <<- 'HOST_DEPENDENCIES_READY'
*run after all host dependencies are installed*
At this point we can read `${FINAL_HOST_DEPS}`, but changing won't have any effect.
@@ -150,13 +166,16 @@ prepare_host() {
are installed at this point. The system clock has not yet been synced.
HOST_DEPENDENCIES_READY
# Manage apt-cacher-ng
acng_configure_and_restart_acng
# sync clock
if [[ $SYNC_CLOCK != no ]]; then
display_alert "Syncing clock" "host" "info"
run_host_command_logged ntpdate "${NTP_SERVER:-pool.ntp.org}"
fi
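The `EXTRA_BUILD_DEPS` handling earlier in this function relies on deliberate word splitting: extensions hand over a space-separated string, and the unquoted expansion (the reason for the `shellcheck disable=SC2206`) splits it into individual array elements. A sketch, with a hypothetical extension value:

```bash
#!/usr/bin/env bash
# Unquoted expansion word-splits a space-separated string into array elements.
EXTRA_BUILD_DEPS="jq moreutils" # as an extension might set it (hypothetical value)
host_dependencies=(git)
# shellcheck disable=SC2206 # word-splitting is the point here
host_dependencies+=(${EXTRA_BUILD_DEPS})
echo "${#host_dependencies[@]} packages: ${host_dependencies[*]}" # 3 packages: git jq moreutils
```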
# create directory structure # @TODO: this should be close to DEST, otherwise super-confusing
mkdir -p "${SRC}"/{cache,output} "${USERPATCHES_PATH}"
if [[ -n $SUDO_USER ]]; then
chgrp --quiet sudo cache output "${USERPATCHES_PATH}"
@@ -166,80 +185,17 @@ prepare_host() {
find "${SRC}"/output "${USERPATCHES_PATH}" -type d ! -group sudo -exec chgrp --quiet sudo {} \; find "${SRC}"/output "${USERPATCHES_PATH}" -type d ! -group sudo -exec chgrp --quiet sudo {} \;
find "${SRC}"/output "${USERPATCHES_PATH}" -type d ! -perm -g+w,g+s -exec chmod --quiet g+w,g+s {} \; find "${SRC}"/output "${USERPATCHES_PATH}" -type d ! -perm -g+w,g+s -exec chmod --quiet g+w,g+s {} \;
fi fi
mkdir -p "${DEST}"/debs-beta/extra "${DEST}"/debs/extra "${DEST}"/{config,debug,patch} "${USERPATCHES_PATH}"/overlay "${SRC}"/cache/{sources,hash,hash-beta,toolchain,utility,rootfs} "${SRC}"/.tmp # @TODO: original: mkdir -p "${DEST}"/debs-beta/extra "${DEST}"/debs/extra "${DEST}"/{config,debug,patch} "${USERPATCHES_PATH}"/overlay "${SRC}"/cache/{sources,hash,hash-beta,toolchain,utility,rootfs} "${SRC}"/.tmp
mkdir -p "${USERPATCHES_PATH}"/overlay "${SRC}"/cache/{sources,hash,hash-beta,toolchain,utility,rootfs} "${SRC}"/.tmp
# build aarch64 # Mostly deprecated.
if [[ $(dpkg --print-architecture) == amd64 ]]; then download_external_toolchains
if [[ "${SKIP_EXTERNAL_TOOLCHAINS}" != "yes" ]]; then
# bind mount toolchain if defined
if [[ -d "${ARMBIAN_CACHE_TOOLCHAIN_PATH}" ]]; then
mountpoint -q "${SRC}"/cache/toolchain && umount -l "${SRC}"/cache/toolchain
mount --bind "${ARMBIAN_CACHE_TOOLCHAIN_PATH}" "${SRC}"/cache/toolchain
fi
display_alert "Checking for external GCC compilers" "" "info"
# download external Linaro compiler and missing special dependencies since they are needed for certain sources
local toolchains=(
"gcc-linaro-aarch64-none-elf-4.8-2013.11_linux.tar.xz"
"gcc-linaro-arm-none-eabi-4.8-2014.04_linux.tar.xz"
"gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz"
"gcc-linaro-7.4.1-2019.02-x86_64_arm-linux-gnueabi.tar.xz"
"gcc-linaro-7.4.1-2019.02-x86_64_aarch64-linux-gnu.tar.xz"
"gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf.tar.xz"
"gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz"
"gcc-arm-9.2-2019.12-x86_64-arm-none-linux-gnueabihf.tar.xz"
"gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu.tar.xz"
"gcc-arm-11.2-2022.02-x86_64-arm-none-linux-gnueabihf.tar.xz"
"gcc-arm-11.2-2022.02-x86_64-aarch64-none-linux-gnu.tar.xz"
)
USE_TORRENT_STATUS=${USE_TORRENT}
USE_TORRENT="no"
for toolchain in ${toolchains[@]}; do
local toolchain_zip="${SRC}/cache/toolchain/${toolchain}"
local toolchain_dir="${toolchain_zip%.tar.*}"
if [[ ! -f "${toolchain_dir}/.download-complete" ]]; then
download_and_verify "toolchain" "${toolchain}" ||
exit_with_error "Failed to download toolchain" "${toolchain}"
display_alert "decompressing"
pv -p -b -r -c -N "[ .... ] ${toolchain}" "${toolchain_zip}" |
xz -dc |
tar xp --xattrs --no-same-owner --overwrite -C "${SRC}/cache/toolchain/"
if [[ $? -ne 0 ]]; then
rm -rf "${toolchain_dir}"
exit_with_error "Failed to decompress toolchain" "${toolchain}"
fi
touch "${toolchain_dir}/.download-complete"
rm -rf "${toolchain_zip}"* # Also delete asc file
fi
done
USE_TORRENT=${USE_TORRENT_STATUS}
local existing_dirs=( $(ls -1 "${SRC}"/cache/toolchain) )
for dir in "${existing_dirs[@]}"; do
local found=no
for toolchain in "${toolchains[@]}"; do
[[ $dir == ${toolchain%.tar.*} ]] && found=yes
done
if [[ $found == no ]]; then
display_alert "Removing obsolete toolchain" "$dir"
rm -rf "${SRC}/cache/toolchain/${dir}"
fi
done
else
display_alert "Ignoring toolchains" "SKIP_EXTERNAL_TOOLCHAINS: ${SKIP_EXTERNAL_TOOLCHAINS}" "info"
fi
fi
fi # check offline
# enable arm binary format so that the cross-architecture chroot environment will work
if build_task_is_enabled "bootstrap"; then
modprobe -q binfmt_misc || display_alert "Failed to modprobe" "binfmt_misc" "warn"
mountpoint -q /proc/sys/fs/binfmt_misc/ || mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc
if [[ "$(arch)" != "aarch64" ]]; then
test -e /proc/sys/fs/binfmt_misc/qemu-arm || update-binfmts --enable qemu-arm
@@ -247,8 +203,9 @@ prepare_host() {
fi
fi
[[ ! -f "${USERPATCHES_PATH}"/customize-image.sh ]] && run_host_command_logged cp -pv "${SRC}"/config/templates/customize-image.sh.template "${USERPATCHES_PATH}"/customize-image.sh
# @TODO: what is this, and why?
if [[ ! -f "${USERPATCHES_PATH}"/README ]]; then
rm -f "${USERPATCHES_PATH}"/readme.txt
echo 'Please read documentation about customizing build configuration' > "${USERPATCHES_PATH}"/README
@@ -258,12 +215,13 @@ prepare_host() {
find "${SRC}"/patch -maxdepth 2 -type d ! -name . | sed "s%/.*patch%/$USERPATCHES_PATH%" | xargs mkdir -p
fi
# check free space (basic) @TODO probably useful to refactor and implement in multiple spots.
declare -i free_space_bytes
free_space_bytes=$(findmnt --noheadings --output AVAIL --bytes --target "${SRC}" --uniq 2> /dev/null) # in bytes
if [[ -n "$free_space_bytes" && $((free_space_bytes / 1073741824)) -lt 10 ]]; then
display_alert "Low free space left" "$((free_space_bytes / 1073741824)) GiB" "wrn"
# pause here since dialog-based menu will hide this message otherwise
echo -e "Press \e[0;33m<Ctrl-C>\x1B[0m to abort compilation, \e[0;33m<Enter>\x1B[0m to ignore and continue"
read # @TODO: this fails if stdin is not a tty, or just hangs
fi
}
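The obsolete-toolchain pruning loop in `prepare_host()` above matches each cached directory name against the expected archive basenames and removes leftovers. A minimal standalone sketch of that membership test, run against a throwaway directory (all names below are hypothetical; nothing outside the temp dir is touched):

```shell
# Prune cached toolchain dirs that no longer match any expected archive name.
cache="$(mktemp -d)"
toolchains=("gcc-new-1.0.tar.xz") # the only toolchain we still want
mkdir -p "${cache}/gcc-new-1.0" "${cache}/gcc-old-0.9"
for dir_path in "${cache}"/*/; do
	dir="$(basename "${dir_path}")"
	found=no
	for toolchain in "${toolchains[@]}"; do
		# strip the ".tar.*" suffix to get the unpacked directory name
		[[ "${dir}" == "${toolchain%.tar.*}" ]] && found=yes
	done
	if [[ "${found}" == "no" ]]; then
		rm -rf "${cache:?}/${dir}"
	fi
done
ls "${cache}" # only the still-wanted dir survives
```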


@@ -0,0 +1,72 @@
function image_compress_and_checksum() {
[[ -n $SEND_TO_SERVER ]] && return 0
if [[ $COMPRESS_OUTPUTIMAGE == "" || $COMPRESS_OUTPUTIMAGE == no ]]; then
COMPRESS_OUTPUTIMAGE="sha,gpg,img"
elif [[ $COMPRESS_OUTPUTIMAGE == yes ]]; then
COMPRESS_OUTPUTIMAGE="sha,gpg,7z"
fi
if [[ $COMPRESS_OUTPUTIMAGE == *gz* ]]; then
display_alert "Compressing" "${DESTIMG}/${version}.img.gz" "info"
pigz -3 < $DESTIMG/${version}.img > $DESTIMG/${version}.img.gz
compression_type=".gz"
fi
if [[ $COMPRESS_OUTPUTIMAGE == *xz* ]]; then
# @TODO: rpardini: I'd just move to zstd and be done with it. It does it right.
display_alert "Compressing" "${DESTIMG}/${version}.img.xz" "info"
# compressing consumes a lot of memory we don't have. Waiting for previous packing job to finish helps to run a lot more builds in parallel
available_cpu=$(grep -c 'processor' /proc/cpuinfo)
[[ ${available_cpu} -gt 16 ]] && available_cpu=16 # using more cpu cores for compressing is pointless
available_mem=$(LC_ALL=C free | grep Mem | awk '{print $4/$2 * 100.0}' | awk '{print int($1)}') # free memory, in percent
# throttle parallel compression jobs when free memory drops below 15%
if [[ ${BUILD_ALL} == yes && (${available_mem} -lt 15 || $(ps -uax | grep "pixz" | wc -l) -gt 4) ]]; then
while [[ $(ps -uax | grep "pixz" | wc -l) -gt 2 ]]; do
echo -en "#"
sleep 20
done
fi
pixz -7 -p ${available_cpu} -f $(expr ${available_cpu} + 2) < $DESTIMG/${version}.img > ${DESTIMG}/${version}.img.xz
compression_type=".xz"
fi
if [[ $COMPRESS_OUTPUTIMAGE == *img* || $COMPRESS_OUTPUTIMAGE == *7z* ]]; then
# mv $DESTIMG/${version}.img ${FINALDEST}/${version}.img || exit 1
compression_type=""
fi
if [[ $COMPRESS_OUTPUTIMAGE == *sha* ]]; then
cd ${DESTIMG}
display_alert "SHA256 calculating" "${version}.img${compression_type}" "info"
sha256sum -b ${version}.img${compression_type} > ${version}.img${compression_type}.sha
fi
if [[ $COMPRESS_OUTPUTIMAGE == *gpg* ]]; then
cd ${DESTIMG}
if [[ -n $GPG_PASS ]]; then
display_alert "GPG signing" "${version}.img${compression_type}" "info"
if [[ -n $SUDO_USER ]]; then
sudo chown -R ${SUDO_USER}:${SUDO_USER} "${DESTIMG}"/
SUDO_PREFIX="sudo -H -u ${SUDO_USER}"
else
SUDO_PREFIX=""
fi
echo "${GPG_PASS}" | $SUDO_PREFIX bash -c "gpg --passphrase-fd 0 --armor --detach-sign --pinentry-mode loopback --batch --yes ${DESTIMG}/${version}.img${compression_type}" || exit 1
else
display_alert "GPG signing skipped - no GPG_PASS" "${version}.img" "wrn"
fi
fi
fingerprint_image "${DESTIMG}/${version}.img${compression_type}.txt" "${version}"
if [[ $COMPRESS_OUTPUTIMAGE == *7z* ]]; then
display_alert "Compressing" "${DESTIMG}/${version}.7z" "info"
7za a -t7z -bd -m0=lzma2 -mx=3 -mfb=64 -md=32m -ms=on \
${DESTIMG}/${version}.7z ${version}.key ${version}.img* > /dev/null 2>&1
find ${DESTIMG}/ -type \
f \( -name "${version}.img" -o -name "${version}.img.asc" -o -name "${version}.img.txt" -o -name "${version}.img.sha" \) -print0 |
xargs -0 rm > /dev/null 2>&1
fi
}
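The `.sha` sidecar written above is plain `sha256sum` output, so it can be verified later with `sha256sum -c`. A minimal sketch (the filename is hypothetical):

```shell
# Create a throwaway file, write its sha256 sidecar, then verify it.
tmpdir="$(mktemp -d)"
cd "${tmpdir}"
echo "fake image contents" > example.img
sha256sum -b example.img > example.img.sha # "-b" marks the file as binary ("*" in the sidecar)
sha256sum -c example.img.sha # prints "example.img: OK"
```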


@@ -6,10 +6,10 @@
fingerprint_image() {
cat <<- EOF > "${1}"
--------------------------------------------------------------------------------
Title: ${VENDOR} $REVISION ${BOARD^} $BRANCH
Kernel: Linux $VER
Build date: $(date +'%d.%m.%Y')
Builder rev: ${BUILD_REPOSITORY_COMMIT}
Maintainer: $MAINTAINER <$MAINTAINERMAIL>
Authors: https://www.armbian.com/authors
Sources: https://github.com/armbian/


@@ -12,31 +12,95 @@
# since Debian buster, it has to be called within create_image() on the $MOUNT
# path instead of $SDCARD (which can be a tmpfs and breaks cryptsetup-initramfs).
# see: https://github.com/armbian/build/issues/1584
#
update_initramfs() {
local chroot_target=$1
local target_dir="$(find "${chroot_target}/lib/modules"/ -maxdepth 1 -type d -name "*${VER}*")" # @TODO: rpardini: this will break when we add multi-kernel images
local initrd_kern_ver initrd_file initrd_cache_key initrd_cache_file_path initrd_hash
local initrd_cache_current_manifest_filepath initrd_cache_last_manifest_filepath
if [ "$target_dir" != "" ]; then
initrd_kern_ver="$(basename "$target_dir")"
initrd_file="${chroot_target}/boot/initrd.img-${initrd_kern_ver}"
update_initramfs_cmd="TMPDIR=/tmp update-initramfs -uv -k ${initrd_kern_ver}" # @TODO: why? TMPDIR=/tmp
else
exit_with_error "No kernel installed for the version" "${VER}"
fi
# Caching.
# Find all modules and all firmware in the target.
# Find all initramfs configuration in /etc
# Find all bash, cpio and gzip binaries in /bin
# Hash the contents of them all.
# If there's a match, use the cache.
display_alert "computing initrd cache hash" "${chroot_target}" "debug"
mkdir -p "${SRC}/cache/initrd"
initrd_cache_current_manifest_filepath="${WORKDIR}/initrd.img-${initrd_kern_ver}.${ARMBIAN_BUILD_UUID}.manifest"
initrd_cache_last_manifest_filepath="${SRC}/cache/initrd/initrd.manifest-${initrd_kern_ver}.last.manifest"
# Find all the affected files; parallel md5sum sum them; invert hash and path, and remove chroot prefix.
find "${target_dir}" "${chroot_target}/usr/bin/bash" "${chroot_target}/etc/initramfs" \
"${chroot_target}/etc/initramfs-tools" -type f | parallel -X md5sum |
awk '{print $2 " - " $1}' |
sed -e "s|^${chroot_target}||g" | LC_ALL=C sort > "${initrd_cache_current_manifest_filepath}"
initrd_hash="$(cat "${initrd_cache_current_manifest_filepath}" | md5sum | cut -d ' ' -f 1)" # hash of the hashes.
initrd_cache_key="initrd.img-${initrd_kern_ver}-${initrd_hash}"
initrd_cache_file_path="${SRC}/cache/initrd/${initrd_cache_key}"
display_alert "initrd cache hash" "${initrd_hash}" "debug"
display_alert "Mounting chroot for update-initramfs" "update-initramfs" "debug"
deploy_qemu_binary_to_chroot "${chroot_target}"
mount_chroot "$chroot_target/"
if [[ -f "${initrd_cache_file_path}" ]]; then
display_alert "initrd cache hit" "${initrd_cache_key}" "cachehit"
run_host_command_logged cp -pv "${initrd_cache_file_path}" "${initrd_file}"
touch "${initrd_cache_file_path}" # touch cached file timestamp; LRU bump.
if [[ -f "${initrd_cache_last_manifest_filepath}" ]]; then
touch "${initrd_cache_last_manifest_filepath}" # touch the manifest file timestamp; LRU bump.
fi
# Convert to bootscript expected format, by calling into the script manually.
if [[ -f "${chroot_target}"/etc/initramfs/post-update.d/99-uboot ]]; then
chroot_custom "$chroot_target" /etc/initramfs/post-update.d/99-uboot "${initrd_kern_ver}" "/boot/initrd.img-${initrd_kern_ver}"
fi
else
display_alert "Cache miss for initrd cache" "${initrd_cache_key}" "debug"
# Show the differences between the last and the current, so we realize why it isn't hit (eg; what changed).
if [[ -f "${initrd_cache_last_manifest_filepath}" ]]; then
if [[ "${SHOW_DEBUG}" == "yes" ]]; then
display_alert "Showing diff between last and current initrd cache manifests" "initrd" "debug"
run_host_command_logged diff -u --color=always "${initrd_cache_last_manifest_filepath}" "${initrd_cache_current_manifest_filepath}" "|| true" # no errors please
fi
fi
display_alert "Updating initramfs..." "$update_initramfs_cmd" ""
local logging_filter="2>&1 | grep --line-buffered -v -e '.xz' -e 'ORDER ignored' -e 'Adding binary ' -e 'Adding module ' -e 'Adding firmware ' "
chroot_custom_long_running "$chroot_target" "$update_initramfs_cmd" "${logging_filter}"
display_alert "Updated initramfs." "${update_initramfs_cmd}" "info"
display_alert "Storing initrd in cache" "${initrd_cache_key}" "debug" # notice there's no -p here: no need to touch LRU
run_host_command_logged cp -v "${initrd_file}" "${initrd_cache_file_path}" # store the new initrd in the cache.
run_host_command_logged cp -v "${initrd_cache_current_manifest_filepath}" "${initrd_cache_last_manifest_filepath}" # store the current contents in the last file.
# clean old cache files so they don't pile up forever.
if [[ "${SHOW_DEBUG}" == "yes" ]]; then
display_alert "Showing which initrd caches would be removed/expired" "initrd" "debug"
# 60: keep the last 30 initrd + manifest pairs. this should be higher than the total number of kernels we support, otherwise churn will be high
find "${SRC}/cache/initrd" -type f -printf "%T@ %p\\n" | sort -n -r | sed "1,60d" | xargs rm -fv
fi
fi
display_alert "Re-enabling" "initramfs-tools hook for kernel"
chroot_custom "$chroot_target" chmod -v +x /etc/kernel/postinst.d/initramfs-tools
display_alert "Unmounting chroot" "update-initramfs" "debug"
umount_chroot "${chroot_target}/"
undeploy_qemu_binary_from_chroot "${chroot_target}"
# no need to remove ${initrd_cache_current_manifest_filepath} manually, since it's under ${WORKDIR}
return 0 # avoid future short-circuit problems
}
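The cache key above is a manifest-based content hash: hash every input file, sort the manifest for stability, then hash the manifest itself. A standalone sketch of that idea (the kernel version and file names are hypothetical):

```shell
# Build a stable "hash of hashes" cache key from a set of input files.
workdir="$(mktemp -d)"
manifest="$(mktemp)"
mkdir -p "${workdir}/modules"
echo "module-a" > "${workdir}/modules/a.ko"
echo "firmware-b" > "${workdir}/fw.bin"
# md5sum each file, re-order to "path - hash", sort for a stable manifest
find "${workdir}" -type f -exec md5sum {} + |
	awk '{print $2 " - " $1}' | LC_ALL=C sort > "${manifest}"
initrd_hash="$(md5sum < "${manifest}" | cut -d ' ' -f 1)" # hash of the hashes
initrd_cache_key="initrd.img-5.15.0-${initrd_hash}"
echo "${initrd_cache_key}"
```

Because the manifest is sorted and path-relative inputs are hashed individually, the key changes whenever any module, firmware blob, or config file changes, which is exactly what invalidates the cached initrd.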


@@ -4,6 +4,7 @@
check_loop_device() {
local device=$1
#display_alert "Checking loop device" "${device}" "wrn"
if [[ ! -b $device ]]; then
if [[ $CONTAINER_COMPAT == yes && -b /tmp/$device ]]; then
display_alert "Creating device node" "$device"
@@ -15,26 +16,54 @@ check_loop_device() {
}
#
# Copyright (c) 2013-2021 Igor Pecovnik, igor.pecovnik@gma**.com
#
# This file is licensed under the terms of the GNU General Public
# License version 2. This program is licensed "as is" without any
# warranty of any kind, whether express or implied.
#
# This file is a part of the Armbian build script
# https://github.com/armbian/build/
# write_uboot <loopdev>
#
write_uboot_to_loop_image() {
local loop=$1 revision
display_alert "Preparing u-boot bootloader" "$loop" "info"
TEMP_DIR=$(mktemp -d) # set-e is in effect. no need to exit on errors explicitly
chmod 700 ${TEMP_DIR}
revision=${REVISION}
if [[ -n $UBOOT_REPO_VERSION ]]; then
revision=${UBOOT_REPO_VERSION}
run_host_command_logged dpkg -x "${DEB_STORAGE}/linux-u-boot-${BOARD}-${BRANCH}_${revision}_${ARCH}.deb" ${TEMP_DIR}/
else
run_host_command_logged dpkg -x "${DEB_STORAGE}/${CHOSEN_UBOOT}_${revision}_${ARCH}.deb" ${TEMP_DIR}/
fi
if [[ ! -f "${TEMP_DIR}/usr/lib/u-boot/platform_install.sh" ]]; then
exit_with_error "Missing ${TEMP_DIR}/usr/lib/u-boot/platform_install.sh"
fi
display_alert "Sourcing u-boot install functions" "$loop" "info"
source ${TEMP_DIR}/usr/lib/u-boot/platform_install.sh
set -e # make sure, we just included something that might disable it
display_alert "Writing u-boot bootloader" "$loop" "info"
write_uboot_platform "${TEMP_DIR}${DIR}" "$loop" # @TODO: rpardini: what is ${DIR} ?
export UBOOT_CHROOT_DIR="${TEMP_DIR}${DIR}"
call_extension_method "post_write_uboot_platform" <<- 'POST_WRITE_UBOOT_PLATFORM'
*allow custom writing of uboot -- only during image build*
Called after `write_uboot_platform()`.
It receives `UBOOT_CHROOT_DIR` with the full path to the u-boot dir in the chroot.
Important: this is only called inside the build system.
Consider that `write_uboot_platform()` is also called board-side, when updating uboot, eg: nand-sata-install.
POST_WRITE_UBOOT_PLATFORM
#rm -rf ${TEMP_DIR}
return 0
}


@@ -5,7 +5,9 @@
# and mounts it to local dir
# FS-dependent stuff (boot and root fs partition types) happens here
#
# LOGGING: this is run under the log manager. so just redirect unwanted stderr to stdout, and it goes to log.
# this is under the logging manager. so just log to stdout (no redirections), and redirect stderr to stdout unless you want it on screen.
function prepare_partitions() {
display_alert "Preparing image file for rootfs" "$BOARD $RELEASE" "info"
# possible partition combinations
@@ -72,10 +74,10 @@ prepare_partitions() {
ROOT_FS_LABEL="${ROOT_FS_LABEL:-armbi_root}"
BOOT_FS_LABEL="${BOOT_FS_LABEL:-armbi_boot}"
call_extension_method "pre_prepare_partitions" "prepare_partitions_custom" <<- 'PRE_PREPARE_PARTITIONS'
*allow custom options for mkfs*
Good time to change stuff like mkfs opts, types etc.
PRE_PREPARE_PARTITIONS
# stage: determine partition configuration
local next=1
@@ -101,13 +103,13 @@ PRE_PREPARE_PARTITIONS
export rootfs_size=$(du -sm $SDCARD/ | cut -f1) # MiB
display_alert "Current rootfs size" "$rootfs_size MiB" "info"
call_extension_method "prepare_image_size" "config_prepare_image_size" <<- 'PREPARE_IMAGE_SIZE'
*allow dynamically determining the size based on the $rootfs_size*
Called after `${rootfs_size}` is known, but before `${FIXED_IMAGE_SIZE}` is taken into account.
A good spot to determine `FIXED_IMAGE_SIZE` based on `rootfs_size`.
UEFISIZE can be set to 0 for no UEFI partition, or to a size in MiB to include one.
Last chance to set `USE_HOOK_FOR_PARTITION`=yes and then implement create_partition_table hook_point.
PREPARE_IMAGE_SIZE
if [[ -n $FIXED_IMAGE_SIZE && $FIXED_IMAGE_SIZE =~ ^[0-9]+$ ]]; then
display_alert "Using user-defined image size" "$FIXED_IMAGE_SIZE MiB" "info"
@@ -133,18 +135,21 @@ PREPARE_IMAGE_SIZE
truncate --size=${sdsize}M ${SDCARD}.raw # sometimes results in fs corruption, revert to previous known-to-work solution
sync
else
dd if=/dev/zero bs=1M status=none count=$sdsize |
pv -p -b -r -s $(($sdsize * 1024 * 1024)) -N "$(logging_echo_prefix_for_pv "zero") zero" |
dd status=none of=${SDCARD}.raw
fi
# stage: create partition table
display_alert "Creating partitions" "${bootfs:+/boot: $bootfs }root: $ROOTFS_TYPE" "info"
if [[ "${USE_HOOK_FOR_PARTITION}" == "yes" ]]; then
{ [[ "$IMAGE_PARTITION_TABLE" == "msdos" ]] && echo "label: dos" || echo "label: $IMAGE_PARTITION_TABLE"; } |
run_host_command_logged sfdisk ${SDCARD}.raw || exit_with_error "Create partition table fail"
call_extension_method "create_partition_table" <<- 'CREATE_PARTITION_TABLE'
*only called when USE_HOOK_FOR_PARTITION=yes to create the complete partition table*
@@ -153,9 +158,7 @@ PREPARE_IMAGE_SIZE
CREATE_PARTITION_TABLE
else
{
[[ "$IMAGE_PARTITION_TABLE" == "msdos" ]] && echo "label: dos" || echo "label: $IMAGE_PARTITION_TABLE"
local next=$OFFSET
if [[ -n "$biospart" ]]; then
@@ -167,17 +170,13 @@ PREPARE_IMAGE_SIZE
if [[ -n "$uefipart" ]]; then
# dos: EFI (FAT-12/16/32)
# gpt: EFI System
[[ "$IMAGE_PARTITION_TABLE" != "gpt" ]] && local type="ef" || local type="C12A7328-F81F-11D2-BA4B-00A0C93EC93B"
echo "$uefipart : name=\"efi\", start=${next}MiB, size=${UEFISIZE}MiB, type=${type}"
local next=$(($next + $UEFISIZE))
fi
if [[ -n "$bootpart" ]]; then
# Linux extended boot
[[ "$IMAGE_PARTITION_TABLE" != "gpt" ]] && local type="ea" || local type="BC13C2FF-59E6-4262-A352-B275FD6F7172"
if [[ -n "$rootpart" ]]; then
echo "$bootpart : name=\"bootfs\", start=${next}MiB, size=${BOOTSIZE}MiB, type=${type}"
local next=$(($next + $BOOTSIZE))
@@ -189,14 +188,12 @@ PREPARE_IMAGE_SIZE
if [[ -n "$rootpart" ]]; then
# dos: Linux
# gpt: Linux filesystem
[[ "$IMAGE_PARTITION_TABLE" != "gpt" ]] && local type="83" || local type="0FC63DAF-8483-4772-8E79-3D69D8477DE4"
# no `size` argument means "as much as possible"
echo "$rootpart : name=\"rootfs\", start=${next}MiB, type=${type}"
fi
} |
run_host_command_logged sfdisk ${SDCARD}.raw || exit_with_error "Partition fail."
fi
call_extension_method "post_create_partitions" <<- 'POST_CREATE_PARTITIONS'
@@ -208,17 +205,19 @@ PREPARE_IMAGE_SIZE
exec {FD}> /var/lock/armbian-debootstrap-losetup
flock -x $FD
export LOOP
LOOP=$(losetup -f) || exit_with_error "Unable to find free loop device"
display_alert "Allocated loop device" "LOOP=${LOOP}"
check_loop_device "$LOOP"
run_host_command_logged losetup $LOOP ${SDCARD}.raw # @TODO: had a '-P' here, what was it?
# loop device was grabbed here, unlock
flock -u $FD
display_alert "Running partprobe" "${LOOP}" "debug"
run_host_command_logged partprobe $LOOP
# stage: create fs, mount partitions, create fstab
rm -f $SDCARD/etc/fstab
@@ -236,12 +235,14 @@ PREPARE_IMAGE_SIZE
check_loop_device "$rootdevice"
display_alert "Creating rootfs" "$ROOTFS_TYPE on $rootdevice"
run_host_command_logged mkfs.${mkfs[$ROOTFS_TYPE]} ${mkopts[$ROOTFS_TYPE]} ${mkopts_label[$ROOTFS_TYPE]:+${mkopts_label[$ROOTFS_TYPE]}"$ROOT_FS_LABEL"} "${rootdevice}"
[[ $ROOTFS_TYPE == ext4 ]] && run_host_command_logged tune2fs -o journal_data_writeback "$rootdevice"
if [[ $ROOTFS_TYPE == btrfs && $BTRFS_COMPRESSION != none ]]; then
local fscreateopt="-o compress-force=${BTRFS_COMPRESSION}"
fi
sync # force writes to be really flushed
display_alert "Mounting rootfs" "$rootdevice"
run_host_command_logged mount ${fscreateopt} $rootdevice $MOUNT/
# create fstab (and crypttab) entry
if [[ $CRYPTROOT_ENABLE == yes ]]; then
# map the LUKS container partition via its UUID to be the 'cryptroot' device
@@ -256,20 +257,21 @@ PREPARE_IMAGE_SIZE
mount --bind --make-private $SDCARD $MOUNT/
echo "/dev/nfs / nfs defaults 0 0" >> $SDCARD/etc/fstab
fi
if [[ -n $bootpart ]]; then
display_alert "Creating /boot" "$bootfs on ${LOOP}p${bootpart}"
check_loop_device "${LOOP}p${bootpart}"
run_host_command_logged mkfs.${mkfs[$bootfs]} ${mkopts[$bootfs]} ${mkopts_label[$bootfs]:+${mkopts_label[$bootfs]}"$BOOT_FS_LABEL"} ${LOOP}p${bootpart}
mkdir -p $MOUNT/boot/
run_host_command_logged mount ${LOOP}p${bootpart} $MOUNT/boot/
echo "UUID=$(blkid -s UUID -o value ${LOOP}p${bootpart}) /boot ${mkfs[$bootfs]} defaults${mountopts[$bootfs]} 0 2" >> $SDCARD/etc/fstab
fi
if [[ -n $uefipart ]]; then
display_alert "Creating EFI partition" "FAT32 ${UEFI_MOUNT_POINT} on ${LOOP}p${uefipart} label ${UEFI_FS_LABEL}"
check_loop_device "${LOOP}p${uefipart}"
run_host_command_logged mkfs.fat -F32 -n "${UEFI_FS_LABEL^^}" ${LOOP}p${uefipart} 2>&1 # "^^" makes variable UPPERCASE, required for FAT32.
mkdir -p "${MOUNT}${UEFI_MOUNT_POINT}"
run_host_command_logged mount ${LOOP}p${uefipart} "${MOUNT}${UEFI_MOUNT_POINT}"
echo "UUID=$(blkid -s UUID -o value ${LOOP}p${uefipart}) ${UEFI_MOUNT_POINT} vfat defaults 0 2" >> $SDCARD/etc/fstab
fi
echo "tmpfs /tmp tmpfs defaults,nosuid 0 0" >> $SDCARD/etc/fstab
@@ -281,13 +283,22 @@ PREPARE_IMAGE_SIZE
# stage: adjust boot script or boot environment
if [[ -f $SDCARD/boot/armbianEnv.txt ]]; then
display_alert "Found armbianEnv.txt" "${SDCARD}/boot/armbianEnv.txt" "debug"
if [[ $CRYPTROOT_ENABLE == yes ]]; then
echo "rootdev=$rootdevice cryptdevice=UUID=$(blkid -s UUID -o value ${LOOP}p${rootpart}):$ROOT_MAPPER" >> "${SDCARD}/boot/armbianEnv.txt"
else
echo "rootdev=$rootfs" >> "${SDCARD}/boot/armbianEnv.txt"
fi
echo "rootfstype=$ROOTFS_TYPE" >> "${SDCARD}/boot/armbianEnv.txt"
call_extension_method "image_specific_armbian_env_ready" <<- 'IMAGE_SPECIFIC_ARMBIAN_ENV_READY'
*during image build, armbianEnv.txt is ready for image-specific customization (not in BSP)*
You can write to `"${SDCARD}/boot/armbianEnv.txt"` here, it is guaranteed to exist.
IMAGE_SPECIFIC_ARMBIAN_ENV_READY
elif [[ $rootpart != 1 && $SRC_EXTLINUX != yes && -f "${SDCARD}/boot/${bootscript_dst}" ]]; then
local bootscript_dst=${BOOTSCRIPT##*:}
sed -i 's/mmcblk0p1/mmcblk0p2/' $SDCARD/boot/$bootscript_dst
sed -i -e "s/rootfstype=ext4/rootfstype=$ROOTFS_TYPE/" \
@@ -296,6 +307,7 @@ PREPARE_IMAGE_SIZE
# if we have boot.ini = remove armbianEnv.txt and add UUID there if enabled
if [[ -f $SDCARD/boot/boot.ini ]]; then
display_alert "Found boot.ini" "${SDCARD}/boot/boot.ini" "debug"
sed -i -e "s/rootfstype \"ext4\"/rootfstype \"$ROOTFS_TYPE\"/" $SDCARD/boot/boot.ini sed -i -e "s/rootfstype \"ext4\"/rootfstype \"$ROOTFS_TYPE\"/" $SDCARD/boot/boot.ini
if [[ $CRYPTROOT_ENABLE == yes ]]; then if [[ $CRYPTROOT_ENABLE == yes ]]; then
local rootpart="UUID=$(blkid -s UUID -o value ${LOOP}p${rootpart})" local rootpart="UUID=$(blkid -s UUID -o value ${LOOP}p${rootpart})"
@@ -303,7 +315,7 @@ PREPARE_IMAGE_SIZE
else else
sed -i 's/^setenv rootdev .*/setenv rootdev "'$rootfs'"/' $SDCARD/boot/boot.ini sed -i 's/^setenv rootdev .*/setenv rootdev "'$rootfs'"/' $SDCARD/boot/boot.ini
fi fi
if [[ $LINUXFAMILY != meson64 ]]; then if [[ $LINUXFAMILY != meson64 ]]; then # @TODO: why only for meson64?
[[ -f $SDCARD/boot/armbianEnv.txt ]] && rm $SDCARD/boot/armbianEnv.txt [[ -f $SDCARD/boot/armbianEnv.txt ]] && rm $SDCARD/boot/armbianEnv.txt
fi fi
fi fi
@@ -318,16 +330,19 @@ PREPARE_IMAGE_SIZE
fi
# recompile .cmd to .scr if boot.cmd exists
if [[ -f $SDCARD/boot/boot.cmd ]]; then
if [ -z $BOOTSCRIPT_OUTPUT ]; then
BOOTSCRIPT_OUTPUT=boot.scr
fi
run_host_command_logged mkimage -C none -A arm -T script -d $SDCARD/boot/boot.cmd $SDCARD/boot/${BOOTSCRIPT_OUTPUT}
fi
# complement extlinux config if it exists; remove armbianEnv in this case.
if [[ -f $SDCARD/boot/extlinux/extlinux.conf ]]; then
echo " append root=$rootfs $SRC_CMDLINE $MAIN_CMDLINE" >> $SDCARD/boot/extlinux/extlinux.conf
display_alert "extlinux.conf exists" "removing armbianEnv.txt" "warn"
[[ -f $SDCARD/boot/armbianEnv.txt ]] && run_host_command_logged rm -v $SDCARD/boot/armbianEnv.txt
fi
return 0 # there is a shortcircuit above! very tricky btw!
}
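The whole branch above only ever appends `key=value` lines to `armbianEnv.txt`. A minimal standalone sketch of that pattern, with temporary paths and a made-up UUID standing in for the real `$SDCARD`, `$rootfs` and `$ROOTFS_TYPE`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical stand-ins for the build's variables; this only illustrates
# the append pattern, not the real framework.
SDCARD="$(mktemp -d)"
mkdir -p "${SDCARD}/boot"
rootfs="UUID=0cb6a804-0000-0000-0000-000000000000"
ROOTFS_TYPE="ext4"

# Same shape as the code above: one key=value pair per line, appended.
echo "rootdev=${rootfs}" >> "${SDCARD}/boot/armbianEnv.txt"
echo "rootfstype=${ROOTFS_TYPE}" >> "${SDCARD}/boot/armbianEnv.txt"
```

U-Boot later sources this file verbatim, which is why the code never quotes or escapes the values.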


@@ -3,228 +3,137 @@
#
# finishes creation of image from cached rootfs
#
create_image_from_sdcard_rootfs() {
# create DESTIMG, hooks might put stuff there early.
mkdir -p "${DESTIMG}"
# add a cleanup trap hook to make sure we don't leak it if stuff fails
add_cleanup_handler trap_handler_cleanup_destimg
# stage: create file name
# @TODO: rpardini: determine the image file name produced. a bit late in the game, since it uses VER which is from the kernel package.
local version="${VENDOR}_${REVISION}_${BOARD^}_${RELEASE}_${BRANCH}_${VER/-$LINUXFAMILY/}${DESKTOP_ENVIRONMENT:+_$DESKTOP_ENVIRONMENT}"
[[ $BUILD_DESKTOP == yes ]] && version=${version}_desktop
[[ $BUILD_MINIMAL == yes ]] && version=${version}_minimal
[[ $ROOTFS_TYPE == nfs ]] && version=${version}_nfsboot
if [[ $ROOTFS_TYPE != nfs ]]; then
display_alert "Copying files via rsync to" "/ at ${MOUNT}"
run_host_command_logged rsync -aHWXh \
--exclude="/boot/*" \
--exclude="/dev/*" \
--exclude="/proc/*" \
--exclude="/run/*" \
--exclude="/tmp/*" \
--exclude="/sys/*" \
--info=progress0,stats1 $SDCARD/ $MOUNT/
else
display_alert "Creating rootfs archive" "rootfs.tgz" "info"
tar cp --xattrs --directory=$SDCARD/ --exclude='./boot/*' --exclude='./dev/*' --exclude='./proc/*' --exclude='./run/*' --exclude='./tmp/*' \
--exclude='./sys/*' . |
pv -p -b -r -s "$(du -sb "$SDCARD"/ | cut -f1)" \
-N "$(logging_echo_prefix_for_pv "create_rootfs_archive") rootfs.tgz" |
gzip -c > "$DEST/images/${version}-rootfs.tgz"
fi
# stage: rsync /boot
display_alert "Copying files to" "/boot at ${MOUNT}"
if [[ $(findmnt --noheadings --output FSTYPE --target "$MOUNT/boot" --uniq) == vfat ]]; then
run_host_command_logged rsync -rLtWh --info=progress0,stats1 "$SDCARD/boot" "$MOUNT" # fat32
else
run_host_command_logged rsync -aHWXh --info=progress0,stats1 "$SDCARD/boot" "$MOUNT" # ext4
fi
call_extension_method "pre_update_initramfs" "config_pre_update_initramfs" <<- 'PRE_UPDATE_INITRAMFS'
*allow config to hack into the initramfs create process*
Called after rsync has synced both `/root` and `/boot` on the target, but before calling `update_initramfs`.
PRE_UPDATE_INITRAMFS
# stage: create final initramfs
[[ -n $KERNELSOURCE ]] && {
update_initramfs "$MOUNT"
}
# DEBUG: print free space
local freespace
freespace=$(LC_ALL=C df -h)
display_alert "Free SD cache" "$(echo -e "$freespace" | awk -v mp="${SDCARD}" '$6==mp {print $5}')" "info"
display_alert "Mount point" "$(echo -e "$freespace" | awk -v mp="${MOUNT}" '$6==mp {print $5}')" "info"
# stage: write u-boot, unless the deb is not there, which would happen if BOOTCONFIG=none
# exception: if we use the one from repository, install version which was downloaded from repo
if [[ -f "${DEB_STORAGE}"/${CHOSEN_UBOOT}_${REVISION}_${ARCH}.deb ]] || [[ -n $UBOOT_REPO_VERSION ]]; then
write_uboot_to_loop_image "${LOOP}"
fi
# fix wrong / permissions
chmod 755 "${MOUNT}"
call_extension_method "pre_umount_final_image" "config_pre_umount_final_image" <<- 'PRE_UMOUNT_FINAL_IMAGE'
*allow config to hack into the image before the unmount*
Called before unmounting both `/root` and `/boot`.
PRE_UMOUNT_FINAL_IMAGE
# Check the partition table after the uboot code has been written, and log it.
display_alert "Partition table after write_uboot" "$LOOP" "debug"
run_host_command_logged sfdisk -l "${LOOP}" # @TODO: use asset..
# unmount /boot/efi first, then /boot, rootfs third, image file last
sync
[[ $UEFISIZE != 0 ]] && umount "${MOUNT}${UEFI_MOUNT_POINT}"
[[ $BOOTSIZE != 0 ]] && umount "${MOUNT}/boot"
umount "${MOUNT}"
[[ $CRYPTROOT_ENABLE == yes ]] && cryptsetup luksClose $ROOT_MAPPER
umount_chroot_recursive "${MOUNT}" # @TODO: wait. NFS is not really unmounted above.
call_extension_method "post_umount_final_image" "config_post_umount_final_image" <<- 'POST_UMOUNT_FINAL_IMAGE'
*allow config to hack into the image after the unmount*
Called after unmounting both `/root` and `/boot`.
POST_UMOUNT_FINAL_IMAGE
display_alert "Freeing loop device" "${LOOP}"
losetup -d "${LOOP}"
unset LOOP # unset so cleanup handler does not try it again
# We're done with ${MOUNT} by now, remove it.
rm -rf --one-file-system "${MOUNT}"
unset MOUNT
mkdir -p "${DESTIMG}"
# @TODO: mysterious cwd, who sets it?
mv "${SDCARD}.raw" "${DESTIMG}/${version}.img"
# custom post_build_image_modify hook to run before fingerprinting and compression
[[ $(type -t post_build_image_modify) == function ]] && display_alert "Custom Hook Detected" "post_build_image_modify" "info" && post_build_image_modify "${DESTIMG}/${version}.img"
image_compress_and_checksum
display_alert "Done building" "${FINALDEST}/${version}.img" "info" # A bit predicting the future, since it's still in DESTIMG at this point.
# Previously, post_build_image passed the .img path as an argument to the hook. Now it's an ENV var.
export FINAL_IMAGE_FILE="${DESTIMG}/${version}.img"
call_extension_method "post_build_image" <<- 'POST_BUILD_IMAGE'
*custom post build hook*
Called after the final .img file is built, before it is (possibly) written to an SD writer.
- *NOTE*: this hook used to take an argument ($1) for the final image produced.
- Now it is passed as an environment variable `${FINAL_IMAGE_FILE}`
It is the last possible chance to modify `$CARD_DEVICE`.
POST_BUILD_IMAGE
display_alert "Moving artefacts from temporary directory to their final destination" "${version}" "debug"
[[ -n $compression_type ]] && run_host_command_logged rm -v "${DESTIMG}/${version}.img"
run_host_command_logged rsync -av --no-owner --no-group --remove-source-files "${DESTIMG}/${version}"* "${FINALDEST}"
run_host_command_logged rm -rfv --one-file-system "${DESTIMG}"
# write image to SD card
write_image_to_device "${FINALDEST}/${version}.img" "${CARD_DEVICE}"
}
function trap_handler_cleanup_destimg() {
[[ ! -d "${DESTIMG}" ]] && return 0
display_alert "Cleaning up temporary DESTIMG" "${DESTIMG}" "debug"
rm -rf --one-file-system "${DESTIMG}"
}
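The `add_cleanup_handler` / trap handler pairing above can be modeled in a few lines: handler names are queued in an array and all of them run from a single `EXIT` trap. A sketch only; the function names are illustrative and the real framework's implementation may differ.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Queue of handler names; one EXIT trap drains them all.
declare -a _cleanup_handlers=()
add_cleanup_handler() { _cleanup_handlers+=("$1"); }
run_cleanup_handlers() {
	local h
	for h in "${_cleanup_handlers[@]}"; do "$h"; done
}
trap run_cleanup_handlers EXIT

# A handler is idempotent, like trap_handler_cleanup_destimg above:
# running it twice (manually, then via the trap) must be harmless.
WORKDIR="$(mktemp -d)"
cleanup_workdir() { rm -rf "${WORKDIR}"; }
add_cleanup_handler cleanup_workdir
```

Registering handlers instead of stacking raw `trap` commands avoids the classic bash pitfall where a second `trap ... EXIT` silently replaces the first.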


@@ -0,0 +1,41 @@
# @TODO: make usable as a separate tool as well
function write_image_to_device() {
local image_file="${1}"
local device="${2}"
if [[ $(lsblk "${device}" 2> /dev/null) && -f "${image_file}" ]]; then
# create sha256sum if it does not exist. we need it for comparison, later.
local if_sha=""
if [[ -f "${image_file}.sha" ]]; then
# shellcheck disable=SC2002 # cat most definitely is useful. she purrs.
if_sha=$(cat "${image_file}.sha" | awk '{print $1}')
else
if_sha=$(sha256sum -b "${image_file}" | awk '{print $1}')
fi
display_alert "Writing image" "${device} ${if_sha}" "info"
# write to SD card
pv -p -b -r -c -N "$(logging_echo_prefix_for_pv "write_device") dd" "${image_file}" | dd "of=${device}" bs=1M iflag=fullblock oflag=direct status=none
call_extension_method "post_write_sdcard" <<- 'POST_BUILD_IMAGE'
*run after writing img to sdcard*
After the image is written to `${device}`, but before verifying it.
You can still set SKIP_VERIFY=yes to skip verification.
POST_BUILD_IMAGE
if [[ "${SKIP_VERIFY}" != "yes" ]]; then
# read and compare
display_alert "Verifying. Please wait!"
local of_sha=""
of_sha=$(dd "if=${device}" "count=$(du -b "${image_file}" | cut -f1)" status=none iflag=count_bytes oflag=direct | sha256sum | awk '{print $1}')
if [[ "$if_sha" == "$of_sha" ]]; then
display_alert "Writing verified" "${image_file}" "info"
else
display_alert "Writing failed" "${image_file}" "err"
fi
fi
elif [[ $(systemd-detect-virt) == 'docker' && -n ${device} ]]; then
# display warning when we want to write sd card under Docker
display_alert "Can't write to ${device}" "Enable docker privileged mode in config-docker.conf" "wrn"
fi
}
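The verify step in `write_image_to_device` can be exercised without a block device: hash the source image, `dd` it, read back exactly the image's byte count, and compare checksums. In this sketch a temp file stands in for `${device}`, and `oflag=direct` is dropped since it needs a real block device.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Temporary stand-ins for ${image_file} and ${device}.
image_file="$(mktemp)"
device="$(mktemp)"

# A 1MiB random "image".
head -c 1048576 /dev/urandom > "${image_file}"
if_sha=$(sha256sum -b "${image_file}" | awk '{print $1}')

# Write, then read back only the image's length; the "device" may be larger.
dd "if=${image_file}" "of=${device}" bs=1M status=none
of_sha=$(dd "if=${device}" "count=$(du -b "${image_file}" | cut -f1)" status=none iflag=count_bytes | sha256sum | awk '{print $1}')
```

`iflag=count_bytes` is the key detail: without it, `count=` is in blocks and the read-back hash would cover the wrong number of bytes.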


@@ -0,0 +1,23 @@
function do_capturing_defs() {
# make sure to local with a value, otherwise they will appear in the list...
local pre_exec_vars="" post_exec_vars="" new_vars_list="" onevar="" all_vars_array=()
pre_exec_vars="$(compgen -A variable | grep -E '[[:upper:]]+' | grep -v -e "^BASH_" | sort)"
# run parameters passed. if this fails, so will we, immediately, and not capture anything correctly.
# if you ever find stacks referring here, please look at the caller and $1
"$@"
post_exec_vars="$(compgen -A variable | grep -E '[[:upper:]]+' | grep -v -e "^BASH_" | sort)"
new_vars_list="$(comm -13 <(echo "$pre_exec_vars") <(echo "${post_exec_vars}"))"
for onevar in ${new_vars_list}; do
# @TODO: rpardini: handle arrays and maps specially?
all_vars_array+=("$(declare -p "${onevar}")")
done
#IFS=$'\n'
export CAPTURED_VARS="${all_vars_array[*]}"
#display_alert "Vars defined during ${*@Q}:" "${CAPTURED_VARS}" "debug"
unset all_vars_array post_exec_vars new_vars_list pre_exec_vars onevar join_by
return 0 # return success explicitly, preemptively preventing short-circuit problems.
}
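The capture technique in `do_capturing_defs` is just a namespace diff: snapshot the uppercase variable names with `compgen` before and after running a command, then `comm -13` yields the names the command introduced. A minimal standalone sketch (helper names here are made up for illustration):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Diff the uppercase variable namespace around running "$@".
capture_new_vars() {
	local before after
	before="$(compgen -A variable | grep -E '[[:upper:]]+' | grep -v -e "^BASH_" | sort)"
	"$@"
	after="$(compgen -A variable | grep -E '[[:upper:]]+' | grep -v -e "^BASH_" | sort)"
	comm -13 <(echo "${before}") <(echo "${after}")
}

defines_some_vars() { MY_NEW_SETTING="hello"; }
new_vars="$(capture_new_vars defines_some_vars)"
```

Note the `local before after` with assignments on separate lines: unvalued locals would themselves show up in the "after" snapshot, which is exactly the trap the comment in `do_capturing_defs` warns about.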


@@ -1,60 +1,366 @@
#!/usr/bin/env bash
function logging_init() {
# globals
export padding="" left_marker="[" right_marker="]"
export normal_color="\x1B[0m" gray_color="\e[1;30m" # "bright black", which is grey
declare -i logging_section_counter=0 # -i: integer
export logging_section_counter
export tool_color="${gray_color}" # default to gray... (should be ok on terminals)
if [[ "${CI}" == "true" ]]; then # ... but that is too dark for Github Actions
export tool_color="${normal_color}"
fi
}
function logging_error_show_log() {
local logfile_to_show="${CURRENT_LOGFILE}" # store current logfile in separate var
unset CURRENT_LOGFILE # stop logging, otherwise crazy
[[ "${SHOW_LOG}" == "yes" ]] && return 0 # Do nothing if we're already showing the log on stderr.
if [[ "${CI}" == "true" ]]; then # Close opened CI group, even if there is none; errors would be buried otherwise.
echo "::endgroup::"
fi
if [[ -f "${logfile_to_show}" ]]; then
local prefix_sed_contents="${normal_color}${left_marker}${padding}👉${padding}${right_marker} "
local prefix_sed_cmd="s/^/${prefix_sed_contents}/;"
display_alert " 👇👇👇 Showing logfile below 👇👇👇" "${logfile_to_show}" "err"
if [[ -f /usr/bin/ccze ]]; then # use 'ccze' to colorize the log, making errors a lot more obvious.
# shellcheck disable=SC2002 # my cat is great. thank you, shellcheck.
cat "${logfile_to_show}" | grep -v -e "^$" | /usr/bin/ccze -o nolookups -A | sed -e "${prefix_sed_cmd}" 1>&2 # write it to stderr!!
else
# shellcheck disable=SC2002 # my cat is great. thank you, shellcheck.
cat "${logfile_to_show}" | grep -v -e "^$" | sed -e "${prefix_sed_cmd}" 1>&2 # write it to stderr!!
fi
display_alert " 👆👆👆 Showing logfile above 👆👆👆" "${logfile_to_show}" "err"
else
display_alert "✋ Error log not available at this stage of build" "check messages above" "debug"
fi
return 0
}
function start_logging_section() {
export logging_section_counter=$((logging_section_counter + 1)) # increment counter, used in filename
export CURRENT_LOGGING_COUNTER
CURRENT_LOGGING_COUNTER="$(printf "%03d" "$logging_section_counter")"
export CURRENT_LOGGING_SECTION=${LOG_SECTION:-early} # default to "early", should be overwritten soon enough
export CURRENT_LOGGING_SECTION_START=${SECONDS}
export CURRENT_LOGGING_DIR="${LOGDIR}" # set in cli-entrypoint.sh
export CURRENT_LOGFILE="${CURRENT_LOGGING_DIR}/${CURRENT_LOGGING_COUNTER}.000.${CURRENT_LOGGING_SECTION}.log"
mkdir -p "${CURRENT_LOGGING_DIR}"
touch "${CURRENT_LOGFILE}" # Touch it, make sure it's writable.
# Markers for CI (GitHub Actions); CI env var comes predefined as true there.
if [[ "${CI}" == "true" ]]; then # On CI, this has special meaning.
echo "::group::[🥑] Group ${CURRENT_LOGGING_SECTION}"
else
display_alert "" "<${CURRENT_LOGGING_SECTION}>" "group"
fi
return 0
}
function finish_logging_section() {
# Close opened CI group.
if [[ "${CI}" == "true" ]]; then
echo "Section '${CURRENT_LOGGING_SECTION}' took $((SECONDS - CURRENT_LOGGING_SECTION_START))s to execute." 1>&2 # write directly to stderr
echo "::endgroup::"
else
display_alert "" "</${CURRENT_LOGGING_SECTION}> in $((SECONDS - CURRENT_LOGGING_SECTION_START))s" "group"
fi
}
function do_with_logging() {
[[ -z "${DEST}" ]] && exit_with_error "DEST is not defined. Can't start logging."
# @TODO: check we're not currently logging (eg: this has been called 2 times without exiting)
start_logging_section
# Important: no error control is done here.
# Called arguments are run with set -e in effect.
# We now execute whatever was passed as parameters, in some different conditions:
# In both cases, writing to stderr will display to terminal.
# So whatever is being called, should prevent rogue stuff writing to stderr.
# this is mostly handled by redirecting stderr to stdout: 2>&1
if [[ "${SHOW_LOG}" == "yes" ]]; then
local prefix_sed_contents
prefix_sed_contents="$(logging_echo_prefix_for_pv "tool") $(echo -n -e "${tool_color}")" # spaces are significant
local prefix_sed_cmd="s/^/${prefix_sed_contents}/;"
# This is sick. Create a 3rd file descriptor sending it to sed. https://unix.stackexchange.com/questions/174849/redirecting-stdout-to-terminal-and-file-without-using-a-pipe
# Also terrible: don't hold a reference to cwd by changing to SRC always
exec 3> >(
cd "${SRC}" || exit 2
# First, log to file, then add prefix via sed for what goes to screen.
tee -a "${CURRENT_LOGFILE}" | sed -u -e "${prefix_sed_cmd}"
)
"$@" >&3
exec 3>&- # close the file descriptor, lest sed keeps running forever.
else
# If not showing the log, just send stdout to logfile. stderr will flow to screen.
"$@" >> "${CURRENT_LOGFILE}"
fi
finish_logging_section
return 0
}
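The file-descriptor trick `do_with_logging` relies on is worth seeing in isolation: fd 3 points at a process substitution where `tee` appends everything to the logfile while `sed` prefixes what continues on to the terminal. A sketch with a temporary logfile and a made-up prefix:

```shell
#!/usr/bin/env bash
set -euo pipefail

LOGFILE="$(mktemp)"

# fd 3 feeds tee (-> logfile) then sed (-> decorated terminal output).
exec 3> >(tee -a "${LOGFILE}" | sed -u -e 's/^/[tool] /')
echo "hello from the tool" >&3
exec 3>&- # close the descriptor, lest sed keeps running forever

# The process substitution runs asynchronously; poll briefly for the flush.
for _ in 1 2 3 4 5; do
	grep -q "hello from the tool" "${LOGFILE}" 2> /dev/null && break
	sleep 0.2
done
```

The asynchrony shown in the poll loop is real: closing fd 3 does not wait for `tee`/`sed` to finish, which is one reason the production code closes the descriptor explicitly before moving on.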
# This takes LOG_ASSET, which can and should include an extension.
function do_with_log_asset() {
# @TODO: check that CURRENT_LOGGING_COUNTER is set, otherwise crazy?
local ASSET_LOGFILE="${CURRENT_LOGGING_DIR}/${CURRENT_LOGGING_COUNTER}.${LOG_ASSET}"
display_alert "Logging to asset" "${CURRENT_LOGGING_COUNTER}.${LOG_ASSET}" "debug"
"$@" >> "${ASSET_LOGFILE}"
}
function display_alert() {
# If asked, avoid any fancy ANSI escapes completely. For python-driven log collection. Formatting could be improved.
# If used, also does not write to logfile even if it exists.
if [[ "${ANSI_COLOR}" == "none" ]]; then
echo -e "${@}" | sed 's/\x1b\[[0-9;]*m//g' >&2
return 0
fi
local message="${1}" level="${3}" # params
local level_indicator="" inline_logs_color="" extra="" ci_log="" # this log
local skip_screen=0 # setting to 1 will write to logfile only
case "${level}" in
err | error)
level_indicator="💥"
inline_logs_color="\e[1;31m"
ci_log="error"
;;
wrn | warn)
level_indicator="🚸"
inline_logs_color="\e[1;35m"
ci_log="warning"
;;
ext)
level_indicator="✨" # or ✅ ?
inline_logs_color="\e[1;32m"
;;
info)
level_indicator="🌱"
inline_logs_color="\e[0;32m"
;;
cachehit)
level_indicator="💖"
inline_logs_color="\e[0;32m"
;;
cleanup | trap)
if [[ "${SHOW_TRAPS}" != "yes" ]]; then # enable debug for many, many debugging msgs
skip_screen=1
fi
level_indicator="🧽"
inline_logs_color="\e[1;33m"
;;
debug | deprecation)
if [[ "${SHOW_DEBUG}" != "yes" ]]; then # enable debug for many, many debugging msgs
skip_screen=1
fi
level_indicator="🐛"
inline_logs_color="\e[1;33m"
;;
group)
if [[ "${SHOW_DEBUG}" != "yes" && "${SHOW_GROUPS}" != "yes" ]]; then # show when debugging, or when specifically requested
skip_screen=1
fi
level_indicator="🦋"
inline_logs_color="\e[1;34m" # blue; 36 would be cyan
;;
command)
if [[ "${SHOW_COMMAND}" != "yes" ]]; then # enable to log all calls to external cmds
skip_screen=1
fi
level_indicator="🐸"
inline_logs_color="\e[0;36m" # a dim cyan
;;
timestamp | fasthash)
if [[ "${SHOW_FASTHASH}" != "yes" ]]; then # timestamp-related debugging messages, very very verbose
skip_screen=1
fi
level_indicator="🐜"
inline_logs_color="${tool_color}" # either gray or normal, a bit subdued.
;;
git)
if [[ "${SHOW_GIT}" != "yes" ]]; then # git-related debugging messages, very very verbose
skip_screen=1
fi
level_indicator="🔖"
inline_logs_color="${tool_color}" # either gray or normal, a bit subdued.
;;
*)
level="${level:-other}" # for file logging.
level_indicator="🌿"
inline_logs_color="\e[1;37m"
;;
esac
# Now, log to file. This will be colorized later by ccze and such, so remove any colors it might already have.
# See also the stuff done in runners.sh for logging exact command lines and runtimes.
# the "echo" runs in a subshell due to the "sed" pipe (! important !), so we store BASHPID (current subshell) outside the scope
# BASHPID is the current subshell; $$ is the parent's; $- is the current bashopts
local CALLER_PID="${BASHPID}"
if [[ -f "${CURRENT_LOGFILE}" ]]; then
# ANSI-less version
#echo -e "--> ${level_indicator} $(printf "%4s" "${SECONDS}"): $$ - ${CALLER_PID} - ${BASHPID}: $-: ${level}: ${1} [ ${2} ]" >> "${CURRENT_LOGFILE}" # | sed 's/\x1b\[[0-9;]*m//g'
echo -e "--> ${level_indicator} $(printf "%4s" "${SECONDS}"): $$ - ${CALLER_PID} - ${BASHPID}: $-: ${level}: ${1} [ ${2} ]" >> "${CURRENT_LOGFILE}" # | sed 's/\x1b\[[0-9;]*m//g'
fi
if [[ ${skip_screen} -eq 1 ]]; then
return 0
fi
local timing_info=""
if [[ "${SHOW_TIMING}" == "yes" ]]; then
timing_info="${tool_color}(${normal_color}$(printf "%3s" "${SECONDS}")${tool_color})" # SECONDS is bash builtin for seconds since start of script.
fi
local pids_info=""
if [[ "${SHOW_PIDS}" == "yes" ]]; then
pids_info="${tool_color}(${normal_color}$$ - ${CALLER_PID}${tool_color})" # BASHPID is the current subshell (should be equal to CALLER_PID here); $$ is parent's?
fi
local bashopts_info=""
if [[ "${SHOW_BASHOPTS}" == "yes" ]]; then
bashopts_info="${tool_color}(${normal_color}$-${tool_color})" # $- is the currently active bashopts
fi
[[ -n $2 ]] && extra=" [${inline_logs_color} ${2} ${normal_color}]"
echo -e "${normal_color}${left_marker}${padding}${level_indicator}${padding}${normal_color}${right_marker}${timing_info}${pids_info}${bashopts_info} ${normal_color}${message}${extra}${normal_color}" >&2
# Now write to CI, if we're running on it. Remove ANSI escapes which confuse GitHub Actions.
if [[ "${CI}" == "true" ]] && [[ "${ci_log}" != "" ]]; then
echo -e "::${ci_log} ::" "$@" | sed 's/\x1b\[[0-9;]*m//g' >&2
fi
return 0 # make sure to exit with success, always
}
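The `ANSI_COLOR=none` branch and the CI annotation path above both lean on the same GNU sed expression to strip ANSI SGR escapes. Isolated, with a hypothetical helper name:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Strip every ANSI SGR escape sequence (ESC [ ... m) from stdin.
strip_ansi() { sed 's/\x1b\[[0-9;]*m//g'; }

colored="$(echo -e "\e[1;31merror\x1B[0m: something broke")"
plain="$(echo "${colored}" | strip_ansi)"
```

Note `\x1b` is a GNU sed extension; the expression only matches SGR (`...m`) sequences, which is all `display_alert` emits, not cursor-movement escapes.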
function logging_echo_prefix_for_pv() {
local what="$1"
local indicator="🤓" # you guess who this is
case $what in
extract_rootfs)
indicator="💖"
;;
tool)
indicator="🔨"
;;
compile)
indicator="🐴"
;;
write_device)
indicator="💾"
;;
create_rootfs_archive | decompress | compress_kernel_sources)
indicator="🤐"
;;
esac
echo -n -e "${normal_color}${left_marker}${padding}${indicator}${padding}${normal_color}${right_marker}"
return 0
}
# Cleanup for logging.
function trap_handler_cleanup_logging() {
[[ ! -d "${LOGDIR}" ]] && return 0
# Just delete LOGDIR if in CONFIG_DEFS_ONLY mode.
if [[ "${CONFIG_DEFS_ONLY}" == "yes" ]]; then
display_alert "Discarding logs" "CONFIG_DEFS_ONLY=${CONFIG_DEFS_ONLY}" "debug"
rm -rf --one-file-system "${LOGDIR}"
return 0
fi
local target_path="${DEST}/logs"
mkdir -p "${target_path}"
local target_file="${target_path}/armbian-logs-${ARMBIAN_BUILD_UUID}.html"
# Before writing new logfile, compress and move existing ones to archive folder. Unless running under CI.
if [[ "${CI}" != "true" ]]; then
declare -a existing_log_files_array
mapfile -t existing_log_files_array < <(find "${target_path}" -maxdepth 1 -type f -name "armbian-logs-*.html")
declare one_old_logfile old_logfile_fn target_archive_path="${target_path}"/archive
for one_old_logfile in "${existing_log_files_array[@]}"; do
old_logfile_fn="$(basename "${one_old_logfile}")"
display_alert "Archiving old logfile" "${old_logfile_fn}" "debug"
mkdir -p "${target_archive_path}"
# shellcheck disable=SC2002 # my cat is not useless. a bit whiny. not useless.
zstdmt --quiet "${one_old_logfile}" -o "${target_archive_path}/${old_logfile_fn}.zst"
rm -f "${one_old_logfile}"
done
fi
display_alert "Preparing HTML log from" "${LOGDIR}" "debug"
cat <<- HTML_HEADER > "${target_file}"
<html>
<head>
<title>Armbian logs for ${ARMBIAN_BUILD_UUID}</title>
<style>
html, html pre { background-color: black !important; color: white !important; font-family: JetBrains Mono, monospace, cursive !important; }
hr { border: 0; border-bottom: 1px dashed silver; }
</style>
</head>
<body>
<h2>Armbian build at $(LC_ALL=C LANG=C date) on $(hostname || true)</h2>
<h2>${ARMBIAN_ORIGINAL_ARGV[@]@Q}</h2>
<hr/>
$(git --git-dir="${SRC}/.git" log -1 --color --format=short --decorate | ansi2html --no-wrap --no-header)
<hr/>
$(git -c color.status=always --work-tree="${SRC}" --git-dir="${SRC}/.git" status | ansi2html --no-wrap --no-header)
<hr/>
$(git --work-tree="${SRC}" --git-dir="${SRC}/.git" diff -u --color | ansi2html --no-wrap --no-header)
<hr/>
HTML_HEADER
# Find and sort the files there, store in array one per logfile
declare -a logfiles_array
mapfile -t logfiles_array < <(find "${LOGDIR}" -type f | LC_ALL=C sort -h)
for logfile_full in "${logfiles_array[@]}"; do
local logfile_base="$(basename "${logfile_full}")"
if [[ -f /usr/bin/ccze ]] && [[ -f /usr/bin/ansi2html ]]; then
cat <<- HTML_ONE_LOGFILE_WITH_CCZE >> "${target_file}"
<h3>${logfile_base}</h3>
<div style="padding: 1em">
$(ccze -o nolookups --raw-ansi < "${logfile_full}" | ansi2html --no-wrap --no-header)
</div>
<hr/>
HTML_ONE_LOGFILE_WITH_CCZE
else
cat <<- HTML_ONE_LOGFILE_NO_CCZE >> "${target_file}"
<h3>${logfile_base}</h3>
<pre>$(cat "${logfile_full}")</pre>
HTML_ONE_LOGFILE_NO_CCZE
fi
done
cat <<- HTML_FOOTER >> "${target_file}"
</body></html>
HTML_FOOTER
rm -rf --one-file-system "${LOGDIR}"
display_alert "Build log file" "${target_file}"
}


@@ -1,7 +1,229 @@
#!/usr/bin/env bash

# shortcut
function chroot_sdcard_apt_get_install() {
chroot_sdcard_apt_get --no-install-recommends install "$@"
}
function chroot_sdcard_apt_get_install_download_only() {
chroot_sdcard_apt_get --no-install-recommends --download-only install "$@"
}
function chroot_sdcard_apt_get() {
acng_check_status_or_restart # make sure apt-cacher-ng is running OK.
local -a apt_params=("-${APT_OPTS:-y}")
[[ $NO_APT_CACHER != yes ]] && apt_params+=(
-o "Acquire::http::Proxy=\"http://${APT_PROXY_ADDR:-localhost:3142}\""
-o "Acquire::http::Proxy::localhost=\"DIRECT\""
)
apt_params+=(-o "Dpkg::Use-Pty=0") # Please be quiet
# Allow for clean-environment apt-get
local -a prelude_clean_env=()
if [[ "${use_clean_environment:-no}" == "yes" ]]; then
display_alert "Running with clean environment" "$*" "debug"
prelude_clean_env=("env" "-i")
fi
chroot_sdcard "${prelude_clean_env[@]}" DEBIAN_FRONTEND=noninteractive apt-get "${apt_params[@]}" "$@"
}
# please, please, unify around this function.
function chroot_sdcard() {
TMPDIR="" run_host_command_logged_raw chroot "${SDCARD}" /bin/bash -e -o pipefail -c "$*"
}
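The `"$*"`-into-`bash -c` pattern above is what lets callers pass pipes and redirections as plain arguments. A minimal, chroot-free sketch of that behavior (the `run_via_single_string` helper is hypothetical, for illustration only):

```shell
# All arguments are joined into ONE string by "$*" and re-parsed by the inner bash,
# so shell syntax like '|' survives the trip. No chroot involved in this sketch.
run_via_single_string() {
	/bin/bash -e -o pipefail -c "$*"
}

run_via_single_string echo hello '|' tr 'a-z' 'A-Z' # the quoted pipe is honored: prints HELLO
```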
# please, please, unify around this function.
function chroot_mount() {
TMPDIR="" run_host_command_logged_raw chroot "${MOUNT}" /bin/bash -e -o pipefail -c "$*"
}
# This should be used if you need to capture the stdout produced by the command. It is NOT logged, and NOT run thru bash, and NOT quoted.
function chroot_sdcard_with_stdout() {
TMPDIR="" chroot "${SDCARD}" "$@"
}
function chroot_custom_long_running() {
local target=$1
shift
# @TODO: disabled, the pipe causes the left-hand side to subshell and chaos ensues.
# local _exit_code=1
# if [[ "${SHOW_LOG}" == "yes" ]] || [[ "${CI}" == "true" ]]; then
# TMPDIR="" run_host_command_logged_raw chroot "${target}" /bin/bash -e -o pipefail -c "$*"
# _exit_code=$?
# else
# TMPDIR="" run_host_command_logged_raw chroot "${target}" /bin/bash -e -o pipefail -c "$*" | pv -N "$(logging_echo_prefix_for_pv "${INDICATOR:-compile}")" --progress --timer --line-mode --force --cursor --delay-start 0 -i "0.5"
# _exit_code=$?
# fi
# return $_exit_code
TMPDIR="" run_host_command_logged_raw chroot "${target}" /bin/bash -e -o pipefail -c "$*"
}
function chroot_custom() {
local target=$1
shift
TMPDIR="" run_host_command_logged_raw chroot "${target}" /bin/bash -e -o pipefail -c "$*"
}
# for deb building.
function fakeroot_dpkg_deb_build() {
display_alert "Building .deb package" "$(basename "${3:-${2:-${1}}}" || true)" "debug"
run_host_command_logged_raw fakeroot dpkg-deb -b "-Z${DEB_COMPRESS}" "$@"
}
# for long-running, host-side expanded bash invocations.
# the user gets a pv-based spinner based on the number of lines that flows to stdout (log messages).
# the raw version already redirects stderr to stdout, and we'll be running under do_with_logging,
# so: _the stdout must flow_!!!
function run_host_command_logged_long_running() {
# @TODO: disabled. The Pipe used for "pv" causes the left-hand side to run in a subshell.
#local _exit_code=1
#if [[ "${SHOW_LOG}" == "yes" ]] || [[ "${CI}" == "true" ]]; then
# run_host_command_logged_raw /bin/bash -e -o pipefail -c "$*"
# _exit_code=$?
#else
# run_host_command_logged_raw /bin/bash -e -o pipefail -c "$*" | pv -N "$(logging_echo_prefix_for_pv "${INDICATOR:-compile}") " --progress --timer --line-mode --force --cursor --delay-start 0 -i "2"
# _exit_code=$?
#fi
#return $_exit_code
# Run simple and exit with its code. Sorry.
run_host_command_logged_raw /bin/bash -e -o pipefail -c "$*"
}
# For installing packages host-side. Not chroot!
function host_apt_get_install() {
host_apt_get --no-install-recommends install "$@"
}
# For running apt-get stuff host-side. Not chroot!
function host_apt_get() {
local -a apt_params=("-${APT_OPTS:-y}")
apt_params+=(-o "Dpkg::Use-Pty=0") # Please be quiet
run_host_command_logged DEBIAN_FRONTEND=noninteractive apt-get "${apt_params[@]}" "$@"
}
# For host-side invocations of binaries we _know_ are x86-only.
# Determine if we're building on non-amd64, and if so, which qemu binary to use.
function run_host_x86_binary_logged() {
local -a qemu_invocation target_bin_arch
target_bin_arch="unknown - file util missing"
if [[ -f /usr/bin/file ]]; then
target_bin_arch="$(file -b "$1" | cut -d "," -f 1,2 | xargs echo -n)" # obtain the ELF name from the binary using 'file'
fi
qemu_invocation=("$@") # Default to calling directly, without qemu.
if [[ "$(uname -m)" != "x86_64" ]]; then # If we're NOT on x86...
if [[ -f /usr/bin/qemu-x86_64-static ]]; then
display_alert "Using qemu-x86_64-static for running on $(uname -m)" "$1 (${target_bin_arch})" "debug"
qemu_invocation=("/usr/bin/qemu-x86_64-static" "-L" "/usr/x86_64-linux-gnu" "$@")
elif [[ -f /usr/bin/qemu-x86_64 ]]; then
display_alert "Using qemu-x86_64 (non-static) for running on $(uname -m)" "$1 (${target_bin_arch})" "debug"
qemu_invocation=("/usr/bin/qemu-x86_64" "-L" "/usr/x86_64-linux-gnu" "$@")
else
exit_with_error "Can't find appropriate qemu binary for running '$1' on $(uname -m), missing packages?"
fi
else
display_alert "Not using qemu for running x86 binary on $(uname -m)" "$1 (${target_bin_arch})" "debug"
fi
run_host_command_logged "${qemu_invocation[@]}" # Exit with this result code
}
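The function above builds the final command as an array and conditionally prepends a wrapper binary. A hedged, self-contained sketch of that pattern (using `env` as a stand-in for the qemu wrapper so it runs on any host):

```shell
# Sketch of runtime wrapper selection: build the invocation as an array, optionally
# prepend a wrapper, then expand it exactly once. 'env' replaces qemu-x86_64-static
# here purely so the sketch is runnable everywhere.
run_maybe_wrapped() {
	local -a invocation=("$@") # default: run directly
	if [[ "$(uname -m)" != "x86_64" ]]; then
		invocation=("env" "$@") # stand-in for ("/usr/bin/qemu-x86_64-static" "-L" ... "$@")
	fi
	"${invocation[@]}"
}

run_maybe_wrapped uname -m # prints the host arch either way
```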
# run_host_command_logged is the very basic, should be used for everything, but, please use helpers above, this is very low-level.
function run_host_command_logged() {
run_host_command_logged_raw /bin/bash -e -o pipefail -c "$*"
}
# for interactive, dialog-like host-side invocations. no redirections performed, but same bash usage and expansion, for consistency.
function run_host_command_dialog() {
/bin/bash -e -o pipefail -c "$*"
}
# do NOT use directly, it does NOT expand the way it should (through bash)
function run_host_command_logged_raw() {
# Log the command to the current logfile, so it has context of what was run.
display_alert "Command debug" "$*" "command" # A special 'command' level.
# In this case I wanna KNOW exactly what failed, thus disable errexit, then re-enable immediately after running.
set +e
local exit_code=666
local seconds_start=${SECONDS} # Bash has a builtin SECONDS that is seconds since start of script
"$@" 2>&1 # redirect stderr to stdout. $* is NOT $@!
exit_code=$?
set -e
if [[ ${exit_code} != 0 ]]; then
if [[ -f "${CURRENT_LOGFILE}" ]]; then
echo "-->--> command failed with error code ${exit_code} after $((SECONDS - seconds_start)) seconds" >> "${CURRENT_LOGFILE}"
fi
# This is very specific; remove CURRENT_LOGFILE's value when calling display_alert here otherwise logged twice.
CURRENT_LOGFILE="" display_alert "cmd exited with code ${exit_code}" "$*" "wrn"
CURRENT_LOGFILE="" display_alert "stacktrace for failed command" "$(show_caller_full)" "wrn"
# Obtain extra info about error, eg, log files produced, extra messages set by caller, etc.
logging_enrich_run_command_error_info
elif [[ -f "${CURRENT_LOGFILE}" ]]; then
echo "-->--> command run successfully after $((SECONDS - seconds_start)) seconds" >> "${CURRENT_LOGFILE}"
fi
logging_clear_run_command_error_info # clear the error info vars, always, otherwise they'll leak into the next invocation.
return ${exit_code} # exiting with the same error code as the original error
}
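The errexit bracketing above deserves a standalone illustration: under `set -e` a failing command aborts the script, so it is wrapped in `set +e` / `set -e` and its exit code captured in between. A minimal sketch (the `run_and_report` name is invented for this demo):

```shell
# Self-contained demo of the errexit toggle: disable 'set -e' around the command so the
# failure can be captured and reported instead of killing the script, then restore it.
set -e
run_and_report() {
	set +e
	"$@"
	local exit_code=$?
	set -e
	echo "exit code: ${exit_code}"
	return "${exit_code}"
}

run_and_report true          # prints: exit code: 0
run_and_report false || true # prints: exit code: 1, and the script keeps going
```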
function logging_clear_run_command_error_info() {
# Unset those globals; they're only valid for the first invocation of a runner helper function after they're set.
unset if_error_detail_message
unset if_error_find_files_sdcard # remember, this is global.
}
function logging_enrich_run_command_error_info() {
declare -a found_files=()
for path in "${if_error_find_files_sdcard[@]}"; do
declare -a sdcard_files
# shellcheck disable=SC2086 # I wanna expand, thank you...
mapfile -t sdcard_files < <(find ${SDCARD}/${path} -type f)
display_alert "Found if_error_find_files_sdcard files" "${sdcard_files[@]}" "debug"
found_files+=("${sdcard_files[@]}") # add to result
done
display_alert "Error-related files found" "${found_files[*]}" "debug"
for found_file in "${found_files[@]}"; do
# Log to asset, so it's available in the HTML log
LOG_ASSET="chroot_error_context__$(basename "${found_file}")" do_with_log_asset cat "${found_file}"
display_alert "File contents for error context" "${found_file}" "err"
# shellcheck disable=SC2002 # cat is not useless, ccze _only_ takes stdin
cat "${found_file}" | ccze -A 1>&2 # to stderr
# @TODO: 3x repeated ccze invocation, lets refactor it later
done
### if_error_detail_message, array: messages to display if the command failed.
if [[ -n ${if_error_detail_message} ]]; then
display_alert "Error context msg" "${if_error_detail_message}" "err"
fi
}
# @TODO: logging: used by desktop.sh exclusively. let's unify?
run_on_sdcard() {
chroot_sdcard "${@}"
}
# Auto retries the number of times passed on first argument to run all the other arguments.
function do_with_retries() {
local retries="${1}"
shift
local counter=0
while [[ $counter -lt $retries ]]; do
counter=$((counter + 1))
"$@" && return 0 # execute and return 0 if success; if not, let it loop;
display_alert "Command failed, retrying in 5s" "$*" "warn"
sleep 5
done
display_alert "Command failed ${counter} times, giving up" "$*" "warn"
return 1
}
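The retry loop above can be exercised outside the build environment. A self-contained sketch, with `display_alert` stubbed and the 5-second sleep omitted so it runs instantly (all names here are for the demo only):

```shell
# Stub the Armbian logger so this sketch is standalone.
display_alert() { echo "ALERT: $1 :: $2 :: $3"; }

# Same retry logic as do_with_retries, minus the sleep between attempts.
function do_with_retries_demo() {
	local retries="${1}"
	shift
	local counter=0
	while [[ $counter -lt $retries ]]; do
		counter=$((counter + 1))
		"$@" && return 0 # execute and return 0 if success; if not, let it loop
		display_alert "Command failed, retrying" "$*" "warn"
	done
	display_alert "Command failed ${counter} times, giving up" "$*" "warn"
	return 1
}

# A command that fails twice, then succeeds on the third attempt.
attempts_file="$(mktemp)"
echo 0 > "${attempts_file}"
flaky() {
	local n
	n="$(cat "${attempts_file}")"
	n=$((n + 1))
	echo "${n}" > "${attempts_file}"
	[[ ${n} -ge 3 ]]
}

do_with_retries_demo 5 flaky && echo "succeeded after $(cat "${attempts_file}") attempts"
rm -f "${attempts_file}"
```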


@@ -0,0 +1,47 @@
# Helper function, to get clean "stack traces" that do not include the hook/extension infrastructure code.
function get_extension_hook_stracktrace() {
local sources_str="$1" # Give this ${BASH_SOURCE[*]} - expanded
local lines_str="$2" # And this ${BASH_LINENO[*]} - expanded
local sources lines index final_stack=""
IFS=' ' read -r -a sources <<< "${sources_str}"
IFS=' ' read -r -a lines <<< "${lines_str}"
for index in "${!sources[@]}"; do
local source="${sources[index]}" line="${lines[((index - 1))]}"
# skip extension infrastructure sources, these only pollute the trace and add no insight to users
[[ ${source} == */extension_function_definition.sh ]] && continue
[[ ${source} == *lib/extensions.sh ]] && continue
[[ ${source} == *lib/functions/logging.sh ]] && continue
[[ ${source} == */compile.sh ]] && continue
[[ ${line} -lt 1 ]] && continue
# relativize the source, otherwise too long to display
source="${source#"${SRC}/"}"
# remove 'lib/'. hope this is not too confusing.
source="${source#"lib/functions/"}"
source="${source#"lib/"}"
# add to the list
# shellcheck disable=SC2015 # i know. thanks. I won't write an if here
arrow="$([[ "$final_stack" != "" ]] && echo "-> " || true)"
final_stack="${source}:${line} ${arrow} ${final_stack} "
done
# output the result, no newline
# shellcheck disable=SC2086 # I wanna suppress double spacing, thanks
echo -n $final_stack
}
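The path-shortening above relies on bash's `${var#prefix}` expansion. A quick demo of those exact strippings, with a made-up path for illustration:

```shell
# Demo of the '#' prefix-stripping expansions used to relativize stack-trace paths.
SRC="/home/user/armbian-build" # hypothetical checkout location
source_file="${SRC}/lib/functions/main/default-build.sh"
source_file="${source_file#"${SRC}/"}"        # -> lib/functions/main/default-build.sh
source_file="${source_file#"lib/functions/"}" # -> main/default-build.sh
source_file="${source_file#"lib/"}"           # no match anymore; unchanged
echo "${source_file}" # prints: main/default-build.sh
```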
function show_caller_full() {
{
local i=1 # skip the first frame
local line_no
local function_name
local file_name
local padded_function_name
local short_file_name
while caller $i; do
((i++))
done | while read -r line_no function_name file_name; do
padded_function_name="$(printf "%30s" "$function_name()")"
short_file_name="${file_name/"${SRC}/"/"./"}"
echo -e "$padded_function_name --> $short_file_name:$line_no"
done
} || true # always success
}


@@ -1,35 +1,129 @@
#!/usr/bin/env bash

# Initialize and prepare the trap managers, one for each of the ERR, INT, TERM and EXIT traps.
# Bash goes insane regarding line numbers and other stuff if we try to overwrite the traps.
# This also implements the custom "cleanup" handlers, which always run at the end of the build, or when exiting prematurely for any reason.
function traps_init() {
# shellcheck disable=SC2034 # Array of cleanup handlers.
declare -a trap_manager_cleanup_handlers=()
# shellcheck disable=SC2034 # Global to avoid doubly reporting ERR/EXIT pairs.
declare -i trap_manager_error_handled=0
trap 'main_trap_handler "ERR" "$?"' ERR
trap 'main_trap_handler "EXIT" "$?"' EXIT
trap 'main_trap_handler "INT" "$?"' INT
trap 'main_trap_handler "TERM" "$?"' TERM
}
# This is setup early in compile.sh as a trap handler for ERR, EXIT and INT signals.
# There are arrays trap_manager_error_handlers=() trap_manager_exit_handlers=() trap_manager_int_handlers=()
# that will receive the actual handlers.
# First param is the type of trap, the second is the value of "$?"
# In order of occurrence.
# 1) Ctrl-C causes INT [stack unreliable], then ERR, then EXIT with trap_exit_code > 0
# 2) Stuff failing causes ERR [stack OK], then EXIT with trap_exit_code > 0
# 3) exit_with_error causes EXIT [stack OK, with extra frame] directly with trap_exit_code == 43
# 4) EXIT can also be called directly [stack unreliable], with trap_exit_code == 0 if build successful.
# So the EXIT trap will do:
# - show stack, if not previously shown (trap_manager_error_handled==0), and if trap_exit_code > 0
# - allow for debug shell, if trap_exit_code > 0
# - call all the cleanup functions (always)
function main_trap_handler() {
local trap_type="${1}"
local trap_exit_code="${2}"
local stack_caller short_stack
stack_caller="$(show_caller_full)"
short_stack="${BASH_SOURCE[1]}:${BASH_LINENO[0]}"
display_alert "main_trap_handler" "${trap_type} and ${trap_exit_code} trap_manager_error_handled:${trap_manager_error_handled}" "trap"
case "${trap_type}" in
TERM | INT)
display_alert "Build interrupted" "Build interrupted by SIG${trap_type}" "warn"
trap_manager_error_handled=1
return # Nothing else to do here. Let the ERR trap show the stack, and the EXIT trap do cleanups.
;;
ERR)
logging_error_show_log
display_alert "Error occurred" "code ${trap_exit_code} at ${short_stack}\n${stack_caller}\n" "err"
trap_manager_error_handled=1
return # Nothing else to do here, let the EXIT trap do the cleanups.
;;
EXIT)
if [[ ${trap_manager_error_handled} -lt 1 ]] && [[ ${trap_exit_code} -gt 0 ]]; then
logging_error_show_log
display_alert "Exit with error detected" "${trap_exit_code} at ${short_stack} -\n${stack_caller}\n" "err"
trap_manager_error_handled=1
fi
if [[ ${trap_exit_code} -gt 0 ]] && [[ "${ERROR_DEBUG_SHELL}" == "yes" ]]; then
export ERROR_DEBUG_SHELL=no # don't do it twice
display_alert "MOUNT" "${MOUNT}" "debug"
display_alert "SDCARD" "${SDCARD}" "debug"
display_alert "ERROR_DEBUG_SHELL=yes, starting a shell." "ERROR_DEBUG_SHELL; exit to cleanup." "debug"
bash < /dev/tty >&2 || true
fi
# Run the cleanup handlers, always.
run_cleanup_handlers || true
;;
esac
}
# Run the cleanup handlers, if any, and clean the cleanup list.
function run_cleanup_handlers() {
display_alert "run_cleanup_handlers! list:" "${trap_manager_cleanup_handlers[*]}" "cleanup"
if [[ ${#trap_manager_cleanup_handlers[@]} -lt 1 ]]; then
return 0 # No handlers set, just return.
else
display_alert "Cleaning up" "please wait for cleanups to finish" "debug"
fi
# Loop over the handlers, execute one by one. Ignore errors.
local one_cleanup_handler
for one_cleanup_handler in "${trap_manager_cleanup_handlers[@]}"; do
display_alert "Running cleanup handler" "${one_cleanup_handler}" "debug"
"${one_cleanup_handler}" || true
done
# Clear the cleanup handler list, so they don't accidentally run again.
trap_manager_cleanup_handlers=()
}
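The registry pattern above (append function names to an array, run them once in order ignoring errors, then clear the list) can be sketched standalone. Names below are demo-only and `display_alert` is intentionally left out:

```shell
# Standalone sketch of the cleanup-handler registry: handlers are function NAMES
# appended to an array, executed once in registration order (errors ignored),
# then the list is cleared so they cannot accidentally run twice.
declare -a cleanup_handlers=()
add_handler() { cleanup_handlers+=("$1"); }
run_handlers() {
	local one_handler
	for one_handler in "${cleanup_handlers[@]}"; do
		"${one_handler}" || true
	done
	cleanup_handlers=() # clear, so a second invocation is a no-op
}

cleanup_tmpdir() { echo "cleaning tmpdir"; }
cleanup_mounts() { echo "cleaning mounts"; }
add_handler cleanup_tmpdir
add_handler cleanup_mounts
run_handlers # prints both messages, in order
run_handlers # prints nothing: the list was cleared
```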
# Adds a callback for trap types; first argument is the function name; extra params are the types to add for.
function add_cleanup_handler() {
local callback="$1"
display_alert "Add callback as cleanup handler" "${callback}" "cleanup"
trap_manager_cleanup_handlers+=("$callback")
}
function execute_and_remove_cleanup_handler() {
local callback="$1"
display_alert "Execute and remove cleanup handler" "${callback}" "cleanup"
# @TODO implement!
}
function remove_all_trap_handlers() {
display_alert "Will remove ALL trap handlers, for a clean exit..." "" "cleanup"
}
# exit_with_error <message> <highlight>
# a way to terminate build process with verbose error message
function exit_with_error() {
# Log the error and exit.
# Everything else will be done by shared trap handling, above.
local _file="${BASH_SOURCE[1]}"
local _function=${FUNCNAME[1]}
local _line="${BASH_LINENO[0]}"
display_alert "error: ${1}" "${2} in ${_function}() at ${_file}:${_line}" "err"
# @TODO: move this into trap handler
# @TODO: integrate both overlayfs and the FD locking with cleanup logic
display_alert "Build terminating... wait for cleanups..." "" "err"
overlayfs_wrapper "cleanup"
# unlock loop device access in case of starvation # @TODO: hmm, say that again?
exec {FD}> /var/lock/armbian-debootstrap-losetup
flock -u "${FD}"
exit 43
}


@@ -1,68 +1,34 @@
#!/usr/bin/env bash

DISTRIBUTIONS_DESC_DIR="config/distributions"

function prepare_and_config_main_build_single() {
# default umask for root is 022 so parent directories won't be group writeable without this
# this is used instead of making the chmod in prepare_host() recursive
umask 002
interactive_config_prepare_terminal
# Warnings mitigation
[[ -z $LANGUAGE ]] && export LANGUAGE="en_US:en" # set to english if not set
[[ -z $CONSOLE_CHAR ]] && export CONSOLE_CHAR="UTF-8" # set console to UTF-8 if not set
export SHOW_WARNING=yes # If you try something that requires EXPERT=yes.
display_alert "Starting single build process" "${BOARD}" "info"
# @TODO: rpardini: ccache belongs in compilation, not config. I think.
if [[ $USE_CCACHE != no ]]; then
CCACHE=ccache
export PATH="/usr/lib/ccache:$PATH"
# private ccache directory to avoid permission issues when using build script with "sudo"
# see https://ccache.samba.org/manual.html#_sharing_a_cache for alternative solution
[[ $PRIVATE_CCACHE == yes ]] && export CCACHE_DIR=$SRC/cache/ccache
# Done elsewhere in a-n # # Check if /tmp is mounted as tmpfs make a temporary ccache folder there for faster operation.
# Done elsewhere in a-n # if [ "$(findmnt --noheadings --output FSTYPE --target "/tmp" --uniq)" == "tmpfs" ]; then
# Done elsewhere in a-n # export CCACHE_TEMPDIR="/tmp/ccache-tmp"
# Done elsewhere in a-n # fi
else
CCACHE=""
fi
# if KERNEL_ONLY, KERNEL_CONFIGURE, BOARD, BRANCH or RELEASE are not set, display selection menu # if KERNEL_ONLY, KERNEL_CONFIGURE, BOARD, BRANCH or RELEASE are not set, display selection menu
@@ -70,32 +36,38 @@ function prepare_and_config_main_build_single() {
backward_compatibility_build_only
interactive_config_ask_kernel
[[ -z $KERNEL_ONLY ]] && exit_with_error "No option selected: KERNEL_ONLY"
[[ -z $KERNEL_CONFIGURE ]] && exit_with_error "No option selected: KERNEL_CONFIGURE"
interactive_config_ask_board_list # this uses get_list_of_all_buildable_boards too
[[ -z $BOARD ]] && exit_with_error "No board selected: BOARD"
declare -a arr_all_board_names=() # arrays
declare -A dict_all_board_types=() dict_all_board_source_files=() # dictionaries
get_list_of_all_buildable_boards arr_all_board_names "" dict_all_board_types dict_all_board_source_files "" # invoke
BOARD_TYPE="${dict_all_board_types["${BOARD}"]}"
BOARD_SOURCE_FILES="${dict_all_board_source_files["${BOARD}"]}"
for BOARD_SOURCE_FILE in ${BOARD_SOURCE_FILES}; do # No quotes, so expand the space-delimited list
display_alert "Sourcing board configuration" "${BOARD_SOURCE_FILE}" "info"
# shellcheck source=/dev/null
source "${BOARD_SOURCE_FILE}"
done
LINUXFAMILY="${BOARDFAMILY}" # @TODO: wtf? why? this is (100%?) rewritten by family config!
# this sourced the board config. do_main_configuration will source the family file.
[[ -z $KERNEL_TARGET ]] && exit_with_error "Board configuration does not define valid kernel config"
interactive_config_ask_branch
[[ -z $BRANCH ]] && exit_with_error "No kernel branch selected: BRANCH"
[[ $KERNEL_TARGET != *$BRANCH* ]] && display_alert "Kernel branch not defined for this board" "$BRANCH for ${BOARD}" "warn"
build_task_is_enabled "bootstrap" && {
interactive_config_ask_release
[[ -z $RELEASE && ${KERNEL_ONLY} != yes ]] && exit_with_error "No release selected: RELEASE"
interactive_config_ask_desktop_build
@@ -118,24 +90,21 @@ function prepare_and_config_main_build_single() {
[[ ${KERNEL_CONFIGURE} == prebuilt ]] && [[ -z ${REPOSITORY_INSTALL} ]] &&
REPOSITORY_INSTALL="u-boot,kernel,bsp,armbian-zsh,armbian-config,armbian-bsp-cli,armbian-firmware${BUILD_DESKTOP:+,armbian-desktop,armbian-bsp-desktop}"
do_main_configuration # This initializes the extension manager among a lot of other things, and calls the extension_prepare_config() hook
# @TODO: this does not belong in configuration. it's a compilation thing. move there
# optimize build time with 100% CPU usage
CPUS=$(grep -c 'processor' /proc/cpuinfo)
if [[ $USEALLCORES != no ]]; then
CTHREADS="-j$((CPUS + CPUS / 2))"
else
CTHREADS="-j1"
fi
call_extension_method "post_determine_cthreads" "config_post_determine_cthreads" <<- 'POST_DETERMINE_CTHREADS'
*give config a chance to modify CTHREADS programmatically. A build server may work better with hyperthreads-1 for example.*
Called early, before any compilation work starts.
POST_DETERMINE_CTHREADS
if [[ "$BETA" == "yes" ]]; then
IMAGE_TYPE=nightly
@@ -145,25 +114,73 @@ POST_DETERMINE_CTHREADS
IMAGE_TYPE=user-built
fi
export BOOTSOURCEDIR="${BOOTDIR}/$(branch2dir "${BOOTBRANCH}")"
[[ -n $ATFSOURCE ]] && export ATFSOURCEDIR="${ATFDIR}/$(branch2dir "${ATFBRANCH}")"
export BSP_CLI_PACKAGE_NAME="armbian-bsp-cli-${BOARD}${EXTRA_BSP_NAME}"
export BSP_CLI_PACKAGE_FULLNAME="${BSP_CLI_PACKAGE_NAME}_${REVISION}_${ARCH}"
export BSP_DESKTOP_PACKAGE_NAME="armbian-bsp-desktop-${BOARD}${EXTRA_BSP_NAME}"
export BSP_DESKTOP_PACKAGE_FULLNAME="${BSP_DESKTOP_PACKAGE_NAME}_${REVISION}_${ARCH}"
export CHOSEN_UBOOT=linux-u-boot-${BRANCH}-${BOARD}
export CHOSEN_KERNEL=linux-image-${BRANCH}-${LINUXFAMILY}
export CHOSEN_ROOTFS=${BSP_CLI_PACKAGE_NAME}
export CHOSEN_DESKTOP=armbian-${RELEASE}-desktop-${DESKTOP_ENVIRONMENT}
export CHOSEN_KSRC=linux-source-${BRANCH}-${LINUXFAMILY}
# So, for full cached kernel rebuilds:
# We wanna be able to rebuild kernels very fast, so it only makes sense to use a dir for each built kernel.
# That is the "default" layout; there will be as many source dirs as there are built kernel debs.
# But, it really makes much more sense if the major.minor (such as 5.10, 5.15, or 4.4) of the kernel has its own
# tree. So in the end:
# <arch>-<major.minor>[-<family>]
# So we gotta explicitly know the major.minor to be able to do that scheme.
# If we don't know, we could use BRANCH as a reference, but that changes over time, and leads to wastage.
if [[ -n "${KERNELSOURCE}" ]]; then
export ARMBIAN_WILL_BUILD_KERNEL="${CHOSEN_KERNEL}-${ARCH}"
if [[ "x${KERNEL_MAJOR_MINOR}x" == "xx" ]]; then
exit_with_error "BAD config, missing" "KERNEL_MAJOR_MINOR" "err"
fi
export KERNEL_HAS_WORKING_HEADERS="no" # assume the worst, and all surprises will be happy ones
# Parse/validate the major, bail if no match
if linux-version compare "${KERNEL_MAJOR_MINOR}" ge "5.4"; then # We support 5.x from 5.4
export KERNEL_HAS_WORKING_HEADERS="yes" # We can build working headers for 5.x even when cross compiling.
export KERNEL_MAJOR=5
export KERNEL_MAJOR_SHALLOW_TAG="v${KERNEL_MAJOR_MINOR}-rc1"
elif linux-version compare "${KERNEL_MAJOR_MINOR}" ge "4.4" && linux-version compare "${KERNEL_MAJOR_MINOR}" lt "5.0"; then
export KERNEL_MAJOR=4 # We support 4.x from 4.4
export KERNEL_MAJOR_SHALLOW_TAG="v${KERNEL_MAJOR_MINOR}-rc1"
else
# If you think you can patch packaging to support this, you're probably right. Is it _worth_ it, though?
exit_with_error "Kernel series unsupported" "'${KERNEL_MAJOR_MINOR}' is unsupported, or bad config"
fi
export LINUXSOURCEDIR="kernel/${ARCH}__${KERNEL_MAJOR_MINOR}__${LINUXFAMILY}"
else
export KERNEL_HAS_WORKING_HEADERS="yes" # I assume non-Armbian kernels have working headers, eg: Debian/Ubuntu generic do.
export ARMBIAN_WILL_BUILD_KERNEL=no
fi
if [[ -n "${BOOTCONFIG}" ]] && [[ "${BOOTCONFIG}" != "none" ]]; then
export ARMBIAN_WILL_BUILD_UBOOT=yes
else
export ARMBIAN_WILL_BUILD_UBOOT=no
fi
display_alert "Extensions: finish configuration" "extension_finish_config" "debug"
call_extension_method "extension_finish_config" <<- 'EXTENSION_FINISH_CONFIG'
*allow extensions a last chance at configuration just before it is done*
After kernel versions are set, package names determined, etc.
This runs *late*, and is the final step before finishing configuration.
Don't change anything not coming from other variables or meant to be configured by the user.
EXTENSION_FINISH_CONFIG
display_alert "Done with prepare_and_config_main_build_single" "${BOARD}.${BOARD_TYPE}" "info"
}
# cli-bsp also uses this
function set_distribution_status() {
local distro_support_desc_filepath="${SRC}/${DISTRIBUTIONS_DESC_DIR}/${RELEASE}/support"
if [[ ! -f "${distro_support_desc_filepath}" ]]; then
exit_with_error "Distribution ${distribution_name} does not exist"
else
@@ -172,8 +189,10 @@ function set_distribution_status() {
[[ "${DISTRIBUTION_STATUS}" != "supported" ]] && [[ "${EXPERT}" != "yes" ]] && exit_with_error "Armbian ${RELEASE} is unsupported and, therefore, only available to experts (EXPERT=yes)"
return 0 # explicit return, since the short-circuit conditional above sets the exit status
}
# Some utility functions
branch2dir() {
[[ "${1}" == "head" ]] && echo "HEAD" || echo "${1##*:}"
}
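`branch2dir` relies on the `${1##*:}` parameter expansion, which strips everything up to and including the last colon. Exercised with illustrative values:

```shell
#!/usr/bin/env bash
# Same logic as branch2dir above: "head" maps to HEAD; otherwise keep what
# follows the last ':' (e.g. "branch:v2022.07" -> "v2022.07").
branch2dir() {
	[[ "${1}" == "head" ]] && echo "HEAD" || echo "${1##*:}"
}

branch2dir "head"            # HEAD
branch2dir "branch:v2022.07" # v2022.07
branch2dir "tag:v5.15.80"    # v5.15.80
```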


@@ -0,0 +1,212 @@
# This does NOT run under the logging manager. We should invoke the do_with_logging wrapper for
# strategic parts of this. Attention: rootfs does its own logging, so just let that be.
function main_default_build_single() {
# Starting work. Export TMPDIR, which should be picked up by all `mktemp` invocations.
# Runner functions in logging/runners.sh will explicitly unset TMPDIR before invoking chroot.
# Invoking chroot directly will fail in subtle ways, so, please use the runner.sh functions.
display_alert "Starting single build, exporting TMPDIR" "${WORKDIR}" "debug"
mkdir -p "${WORKDIR}"
add_cleanup_handler trap_handler_cleanup_workdir
export TMPDIR="${WORKDIR}"
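Exporting `TMPDIR` works because `mktemp` consults it before falling back to `/tmp`, so every temp file created downstream lands inside the build's work dir and is swept away by one recursive removal. A quick self-contained demonstration:

```shell
#!/usr/bin/env bash
# Create a private work dir and point TMPDIR at it, as the build does with WORKDIR.
workdir="$(mktemp -d)"
export TMPDIR="${workdir}"

# Any further mktemp invocation now lands inside the work dir...
scratch="$(mktemp)"
case "${scratch}" in
	"${workdir}"/*) echo "mktemp honored TMPDIR" ;;
	*) echo "unexpected location: ${scratch}" ;;
esac

# ...and a single recursive removal (the cleanup handler's job) catches everything.
unset TMPDIR
rm -rf "${workdir}"
```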
start=$(date +%s)
### Write config summary
LOG_SECTION="config_summary" do_with_logging write_config_summary_output_file
# Check and install dependencies, directory structure and settings
LOG_SECTION="prepare_host" do_with_logging prepare_host
if [[ "${JUST_INIT}" == "yes" ]]; then
exit 0
fi
if [[ $CLEAN_LEVEL == *sources* ]]; then
cleaning "sources"
fi
# Too many things being done. Allow doing only one thing. For core development, mostly.
# Also because "KERNEL_ONLY=yes" should really be spelled "PACKAGES_ONLY=yes"
local do_build_uboot="yes" do_build_kernel="yes" exit_after_kernel_build="no" exit_after_uboot_build="no" do_host_tools="yes"
if [[ "${JUST_UBOOT}" == "yes" && "${JUST_KERNEL}" == "yes" ]]; then
exit_with_error "User of build system" "can't make up their mind about JUST_KERNEL or JUST_UBOOT"
elif [[ "${JUST_UBOOT}" == "yes" ]]; then
display_alert "JUST_UBOOT set to yes" "Building only u-boot and exiting after that" "debug"
do_build_uboot="yes"
do_host_tools="yes" # rkbin, fips, etc.
exit_after_uboot_build="yes"
elif [[ "${JUST_KERNEL}" == "yes" ]]; then
display_alert "JUST_KERNEL set to yes" "Building only kernel and exiting after that" "debug"
do_build_uboot="no"
exit_after_kernel_build="yes"
do_host_tools="no"
fi
# ignoring updates helps when building all images - for internal purposes
if [[ $IGNORE_UPDATES != yes ]]; then
# Fetch and build the host tools (via extensions)
if [[ "${do_host_tools}" == "yes" ]]; then
LOG_SECTION="fetch_and_build_host_tools" do_with_logging fetch_and_build_host_tools
fi
for cleaning_fragment in $(tr ',' ' ' <<< "${CLEAN_LEVEL}"); do
if [[ $cleaning_fragment != sources ]] && [[ $cleaning_fragment != make* ]]; then
LOG_SECTION="cleaning_${cleaning_fragment}" do_with_logging general_cleaning "${cleaning_fragment}"
fi
done
fi
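The `CLEAN_LEVEL` loop above splits a comma-separated list with `tr ',' ' '` and then filters entries with glob patterns in `[[ ]]`. The same technique in isolation, with sample values:

```shell
#!/usr/bin/env bash
# Same splitting technique as the CLEAN_LEVEL loop above: turn the
# comma-separated list into words, then skip 'sources' and 'make*' entries.
CLEAN_LEVEL="make,debs,sources,cache"
kept=()
for cleaning_fragment in $(tr ',' ' ' <<< "${CLEAN_LEVEL}"); do
	if [[ $cleaning_fragment != sources ]] && [[ $cleaning_fragment != make* ]]; then
		kept+=("${cleaning_fragment}")
	fi
done
echo "${kept[@]}" # debs cache
```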
if [[ "${do_build_uboot}" == "yes" ]]; then
# Don't build u-boot at all if the BOOTCONFIG is 'none'.
if [[ "${BOOTCONFIG}" != "none" ]]; then
# @TODO: refactor this. we use it very often
# Compile u-boot if packed .deb does not exist or use the one from repository
if [[ ! -f "${DEB_STORAGE}"/${CHOSEN_UBOOT}_${REVISION}_${ARCH}.deb ]]; then
if [[ -n "${ATFSOURCE}" && "${ATFSOURCE}" != "none" && "${REPOSITORY_INSTALL}" != *u-boot* ]]; then
LOG_SECTION="compile_atf" do_with_logging compile_atf
fi
# @TODO: refactor this construct. we use it too many times.
if [[ "${REPOSITORY_INSTALL}" != *u-boot* ]]; then
LOG_SECTION="compile_uboot" do_with_logging compile_uboot
fi
fi
fi
if [[ "${exit_after_uboot_build}" == "yes" ]]; then
display_alert "Exiting after u-boot build" "JUST_UBOOT=yes" "info"
exit 0
fi
fi
# Compile kernel if packed .deb does not exist or use the one from repository
if [[ "${do_build_kernel}" == "yes" ]]; then
if [[ ! -f ${DEB_STORAGE}/${CHOSEN_KERNEL}_${REVISION}_${ARCH}.deb ]]; then
export KDEB_CHANGELOG_DIST=$RELEASE
if [[ -n $KERNELSOURCE ]] && [[ "${REPOSITORY_INSTALL}" != *kernel* ]]; then
compile_kernel # This handles its own logging sections.
fi
fi
if [[ "${exit_after_kernel_build}" == "yes" ]]; then
display_alert "Only building kernel and exiting" "JUST_KERNEL=yes" "debug"
exit 0
fi
fi
# Compile armbian-config if packed .deb does not exist or use the one from repository
if [[ ! -f ${DEB_STORAGE}/armbian-config_${REVISION}_all.deb ]]; then
if [[ "${REPOSITORY_INSTALL}" != *armbian-config* ]]; then
LOG_SECTION="compile_armbian-config" do_with_logging compile_armbian-config
fi
fi
# Compile armbian-zsh if packed .deb does not exist or use the one from repository
if [[ ! -f ${DEB_STORAGE}/armbian-zsh_${REVISION}_all.deb ]]; then
if [[ "${REPOSITORY_INSTALL}" != *armbian-zsh* ]]; then
LOG_SECTION="compile_armbian-zsh" do_with_logging compile_armbian-zsh
fi
fi
# Compile plymouth-theme-armbian if packed .deb does not exist or use the one from repository
if [[ ! -f ${DEB_STORAGE}/plymouth-theme-armbian_${REVISION}_all.deb ]]; then
if [[ "${REPOSITORY_INSTALL}" != *plymouth-theme-armbian* ]]; then
compile_plymouth-theme-armbian
fi
fi
# Compile armbian-firmware if packed .deb does not exist or use the one from repository
if ! ls "${DEB_STORAGE}/armbian-firmware_${REVISION}_all.deb" 1> /dev/null 2>&1 || ! ls "${DEB_STORAGE}/armbian-firmware-full_${REVISION}_all.deb" 1> /dev/null 2>&1; then
if [[ "${REPOSITORY_INSTALL}" != *armbian-firmware* ]]; then
if [[ "${INSTALL_ARMBIAN_FIRMWARE:-yes}" == "yes" ]]; then # Build firmware by default.
# Build the light version of firmware package
FULL="" REPLACE="-full" LOG_SECTION="compile_firmware" do_with_logging compile_firmware
# Build the full version of firmware package
FULL="-full" REPLACE="" LOG_SECTION="compile_firmware_full" do_with_logging compile_firmware
fi
fi
fi
overlayfs_wrapper "cleanup"
# create board support package
if [[ -n "${RELEASE}" && ! -f "${DEB_STORAGE}/${BSP_CLI_PACKAGE_FULLNAME}.deb" && "${REPOSITORY_INSTALL}" != *armbian-bsp-cli* ]]; then
LOG_SECTION="create_board_package" do_with_logging create_board_package
fi
# create desktop package
if [[ -n "${RELEASE}" && "${DESKTOP_ENVIRONMENT}" && ! -f "${DEB_STORAGE}/$RELEASE/${CHOSEN_DESKTOP}_${REVISION}_all.deb" && "${REPOSITORY_INSTALL}" != *armbian-desktop* ]]; then
LOG_SECTION="create_desktop_package" do_with_logging create_desktop_package
fi
if [[ -n "${RELEASE}" && "${DESKTOP_ENVIRONMENT}" && ! -f "${DEB_STORAGE}/${RELEASE}/${BSP_DESKTOP_PACKAGE_FULLNAME}.deb" && "${REPOSITORY_INSTALL}" != *armbian-bsp-desktop* ]]; then
LOG_SECTION="create_bsp_desktop_package" do_with_logging create_bsp_desktop_package
fi
# skip image creation if exists. useful for CI when making a lot of images
if [ "$IMAGE_PRESENT" == yes ] && ls "${FINALDEST}/${VENDOR}_${REVISION}_${BOARD^}_${RELEASE}_${BRANCH}_${VER/-$LINUXFAMILY/}${DESKTOP_ENVIRONMENT:+_$DESKTOP_ENVIRONMENT}"*.xz 1> /dev/null 2>&1; then
display_alert "Skipping image creation" "image already made - IMAGE_PRESENT is set" "wrn"
exit
fi
# build additional packages
if [[ $EXTERNAL_NEW == compile ]]; then
LOG_SECTION="chroot_build_packages" do_with_logging chroot_build_packages
fi
# end of kernel-only, so display what was built.
if [[ $KERNEL_ONLY != yes ]]; then
display_alert "Kernel build done" "@host" "target-reached"
display_alert "Target directory" "${DEB_STORAGE}/" "info"
display_alert "File name" "${CHOSEN_KERNEL}_${REVISION}_${ARCH}.deb" "info"
fi
# build rootfs, if not only kernel.
if [[ $KERNEL_ONLY != yes ]]; then
display_alert "Building image" "${BOARD}" "target-started"
[[ $BSP_BUILD != yes ]] && build_rootfs_and_image # old debootstrap-ng. !!!LOGGING!!! handled inside, there are many sub-parts.
display_alert "Done building image" "${BOARD}" "target-reached"
fi
call_extension_method "run_after_build" <<- 'RUN_AFTER_BUILD'
*hook for function to run after build, i.e. to change owner of `$SRC`*
Really one of the last hooks ever called. The build has ended. Congratulations.
- *NOTE:* this will run only if there were no errors during build process.
RUN_AFTER_BUILD
end=$(date +%s)
runtime=$(((end - start) / 60))
display_alert "Runtime" "$runtime min" "info"
[ "$(systemd-detect-virt)" == 'docker' ] && BUILD_CONFIG='docker'
# Make it easy to repeat build by displaying build options used. Prepare array.
local -a repeat_args=("./compile.sh" "${BUILD_CONFIG}" " BRANCH=${BRANCH}")
[[ -n ${RELEASE} ]] && repeat_args+=("RELEASE=${RELEASE}")
[[ -n ${BUILD_MINIMAL} ]] && repeat_args+=("BUILD_MINIMAL=${BUILD_MINIMAL}")
[[ -n ${BUILD_DESKTOP} ]] && repeat_args+=("BUILD_DESKTOP=${BUILD_DESKTOP}")
[[ -n ${KERNEL_ONLY} ]] && repeat_args+=("KERNEL_ONLY=${KERNEL_ONLY}")
[[ -n ${KERNEL_CONFIGURE} ]] && repeat_args+=("KERNEL_CONFIGURE=${KERNEL_CONFIGURE}")
[[ -n ${DESKTOP_ENVIRONMENT} ]] && repeat_args+=("DESKTOP_ENVIRONMENT=${DESKTOP_ENVIRONMENT}")
[[ -n ${DESKTOP_ENVIRONMENT_CONFIG_NAME} ]] && repeat_args+=("DESKTOP_ENVIRONMENT_CONFIG_NAME=${DESKTOP_ENVIRONMENT_CONFIG_NAME}")
[[ -n ${DESKTOP_APPGROUPS_SELECTED} ]] && repeat_args+=("DESKTOP_APPGROUPS_SELECTED=\"${DESKTOP_APPGROUPS_SELECTED}\"")
[[ -n ${DESKTOP_APT_FLAGS_SELECTED} ]] && repeat_args+=("DESKTOP_APT_FLAGS_SELECTED=\"${DESKTOP_APT_FLAGS_SELECTED}\"")
[[ -n ${COMPRESS_OUTPUTIMAGE} ]] && repeat_args+=("COMPRESS_OUTPUTIMAGE=${COMPRESS_OUTPUTIMAGE}")
display_alert "Repeat Build Options" "${repeat_args[*]}" "ext" # * = expand array, space delimited, single-word.
}
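The `repeat_args` construction above uses a bash array with conditional appends, so each option stays a single word and quoting survives into the final display. The same pattern reduced to essentials (variable values here are illustrative):

```shell
#!/usr/bin/env bash
# Build a repeat-command line from optional variables, as repeat_args does above.
RELEASE="jammy"
BUILD_DESKTOP="" # empty: should not appear in the output
DESKTOP_APPGROUPS_SELECTED="browsers chat"

declare -a repeat_args=("./compile.sh" "BRANCH=current")
[[ -n ${RELEASE} ]] && repeat_args+=("RELEASE=${RELEASE}")
[[ -n ${BUILD_DESKTOP} ]] && repeat_args+=("BUILD_DESKTOP=${BUILD_DESKTOP}")
[[ -n ${DESKTOP_APPGROUPS_SELECTED} ]] && repeat_args+=("DESKTOP_APPGROUPS_SELECTED=\"${DESKTOP_APPGROUPS_SELECTED}\"")

echo "${repeat_args[*]}" # ./compile.sh BRANCH=current RELEASE=jammy DESKTOP_APPGROUPS_SELECTED="browsers chat"
```

`"${repeat_args[*]}"` joins the array space-delimited for display, which is exactly what the `display_alert` comment above means by "expand array, space delimited, single-word".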
function trap_handler_cleanup_workdir() {
display_alert "Cleanup WORKDIR: $WORKDIR" "trap_handler_cleanup_workdir" "cleanup"
unset TMPDIR
if [[ -d "${WORKDIR}" ]]; then
if [[ "${PRESERVE_WORKDIR}" != "yes" ]]; then
display_alert "Cleaning up WORKDIR" "$(du -h -s "$WORKDIR")" "debug"
rm -rf "${WORKDIR}"
else
display_alert "Preserving WORKDIR due to PRESERVE_WORKDIR=yes" "$(du -h -s "$WORKDIR")" "warn"
fi
fi
}
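`add_cleanup_handler` is the build system's own registry; the underlying mechanism is bash's `trap ... EXIT`. A minimal sketch of the idea, with hypothetical helper names (the real implementation is more elaborate):

```shell
#!/usr/bin/env bash
# Minimal sketch of a cleanup-handler registry on top of `trap ... EXIT`.
# Hypothetical names; the build system's add_cleanup_handler does more.
declare -a _cleanup_handlers=()

add_cleanup_handler() {
	_cleanup_handlers+=("$1")
	trap run_cleanup_handlers EXIT
}

run_cleanup_handlers() {
	local handler
	for handler in "${_cleanup_handlers[@]}"; do
		"${handler}" || true # one failing handler must not block the rest
	done
	_cleanup_handlers=()
}

cleanup_workdir_example() { rm -rf "${workdir}"; }

workdir="$(mktemp -d)"
add_cleanup_handler cleanup_workdir_example
# ... work happens here; the handler fires on any exit path.
```

Registering a named function (rather than inlining `rm -rf` in the trap) is what lets the build run a handler early and remove it, as `execute_and_remove_cleanup_handler` does below.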


@@ -1,122 +1,137 @@
#!/usr/bin/env bash

function build_rootfs_and_image() {
display_alert "Checking for rootfs cache" "$(echo "${BRANCH} ${BOARD} ${RELEASE} ${DESKTOP_APPGROUPS_SELECTED} ${DESKTOP_ENVIRONMENT} ${BUILD_MINIMAL}" | tr -s " ")" "info"
[[ $ROOTFS_TYPE != ext4 ]] && display_alert "Assuming ${BOARD} ${BRANCH} kernel supports ${ROOTFS_TYPE}" "" "wrn"
# add handler to cleanup when done or if something fails or is interrupted.
add_cleanup_handler trap_handler_cleanup_rootfs_and_image
# stage: clean and create directories
rm -rf "${SDCARD}" "${MOUNT}"
mkdir -p "${SDCARD}" "${MOUNT}" "${DEST}/images" "${SRC}/cache/rootfs"
# bind mount rootfs if defined
if [[ -d "${ARMBIAN_CACHE_ROOTFS_PATH}" ]]; then
mountpoint -q "${SRC}"/cache/rootfs && umount "${SRC}"/cache/rootfs # was umounting toolchain here; fixed to match the mountpoint being checked
mount --bind "${ARMBIAN_CACHE_ROOTFS_PATH}" "${SRC}/cache/rootfs"
fi
# stage: verify tmpfs configuration and mount
# CLI needs ~2GiB, desktop ~5GiB
# vs 60% of available RAM (free + buffers + magic)
local available_physical_memory_mib=$(($(awk '/MemAvailable/ {print $2}' /proc/meminfo) * 6 / 1024 / 10)) # MiB
local tmpfs_estimated_size=2000 # MiB
[[ $BUILD_DESKTOP == yes ]] && tmpfs_estimated_size=5000 # MiB
local use_tmpfs=no # by default
if [[ ${FORCE_USE_RAMDISK} == no ]]; then # do not use, even if it fits
:
elif [[ ${FORCE_USE_RAMDISK} == yes || ${available_physical_memory_mib} -gt ${tmpfs_estimated_size} ]]; then # use, either force or fits
use_tmpfs=yes
display_alert "Using tmpfs for rootfs" "RAM available: ${available_physical_memory_mib}MiB > ${tmpfs_estimated_size}MiB estimated" "debug"
else
display_alert "Not using tmpfs for rootfs" "RAM available: ${available_physical_memory_mib}MiB < ${tmpfs_estimated_size}MiB estimated" "debug"
fi
if [[ $use_tmpfs == yes ]]; then
mount -t tmpfs tmpfs "${SDCARD}" # do not specify size; we've calculated above that it should fit, and Linux will try its best if it doesn't.
fi
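The 60%-of-`MemAvailable` heuristic above can be factored into a pure function that takes a `/proc/meminfo` snapshot as text, which makes it testable without touching the host. A sketch; the build itself uses the inline arithmetic:

```shell
#!/usr/bin/env bash
# Decide tmpfs use from a /proc/meminfo snapshot (values in KiB): take 60% of
# MemAvailable, convert to MiB, compare against the estimated rootfs size.
should_use_tmpfs() {
	local meminfo="$1" estimated_mib="$2"
	local available_mib
	available_mib=$(($(awk '/MemAvailable/ {print $2}' <<< "${meminfo}") * 6 / 1024 / 10))
	[[ ${available_mib} -gt ${estimated_mib} ]]
}

sample_meminfo="MemTotal: 16000000 kB
MemAvailable: 8192000 kB"
# 8192000 KiB * 6 / 1024 / 10 = 4800 MiB available for tmpfs
should_use_tmpfs "${sample_meminfo}" 2000 && echo "CLI build fits in tmpfs"
should_use_tmpfs "${sample_meminfo}" 5000 || echo "desktop build would not"
```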
# stage: prepare basic rootfs: unpack cache or create from scratch
LOG_SECTION="get_or_create_rootfs_cache_chroot_sdcard" do_with_logging get_or_create_rootfs_cache_chroot_sdcard
call_extension_method "pre_install_distribution_specific" "config_pre_install_distribution_specific" <<- 'PRE_INSTALL_DISTRIBUTION_SPECIFIC'
*give config a chance to act before install_distribution_specific*
Called after `create_rootfs_cache` (_prepare basic rootfs: unpack cache or create from scratch_) but before `install_distribution_specific` (_install distribution and board specific applications_).
PRE_INSTALL_DISTRIBUTION_SPECIFIC
# stage: install kernel and u-boot packages
# install distribution and board specific applications
LOG_SECTION="install_distribution_specific_${RELEASE}" do_with_logging install_distribution_specific
LOG_SECTION="install_distribution_agnostic" do_with_logging install_distribution_agnostic
# install locally built packages # @TODO: armbian-nextify this eventually
[[ $EXTERNAL_NEW == compile ]] && LOG_SECTION="packages_local" do_with_logging chroot_installpackages_local
# install from apt.armbian.com # @TODO: armbian-nextify this eventually
[[ $EXTERNAL_NEW == prebuilt ]] && LOG_SECTION="packages_prebuilt" do_with_logging chroot_installpackages "yes"
# stage: user customization script
# NOTE: installing too many packages may fill tmpfs mount
LOG_SECTION="customize_image" do_with_logging customize_image
# remove packages that are no longer needed. rootfs cache + uninstall might have leftovers.
LOG_SECTION="apt_purge_unneeded_packages" do_with_logging apt_purge_unneeded_packages
# for reference, debugging / sanity checking
LOG_SECTION="list_installed_packages" do_with_logging list_installed_packages
# clean up / prepare for making the image
umount_chroot "$SDCARD"
LOG_SECTION="post_debootstrap_tweaks" do_with_logging post_debootstrap_tweaks
# ------------------------------------ UP HERE IT'S 'rootfs' stuff -------------------------------
# ---------------------------------- DOWN HERE IT'S 'image' stuff -------------------------------
if [[ $ROOTFS_TYPE == fel ]]; then
FEL_ROOTFS=$SDCARD/
display_alert "Starting FEL boot" "$BOARD" "info"
start_fel_boot
else
LOG_SECTION="prepare_partitions" do_with_logging prepare_partitions
LOG_SECTION="create_image_from_sdcard_rootfs" do_with_logging create_image_from_sdcard_rootfs
fi
# Completely and recursively unmount the directory. This will remove the tmpfs mount too
umount_chroot_recursive "${SDCARD}"
# Remove the dir
[[ -d "${SDCARD}" ]] && rm -rf --one-file-system "${SDCARD}"
# Run the cleanup handler. @TODO: this already does the above, so can be simpler.
execute_and_remove_cleanup_handler trap_handler_cleanup_rootfs_and_image
return 0
}
function list_installed_packages() {
display_alert "Recording list of installed packages" "asset log" "debug"
LOG_ASSET="installed_packages.txt" do_with_log_asset chroot_sdcard dpkg --get-selections "| grep -v deinstall | awk '{print \$1}' | cut -f1 -d':'"
}
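The pipeline inside `list_installed_packages` (drop `deinstall` entries, keep the package column, strip the `:arch` suffix) can be exercised against canned `dpkg --get-selections`-style output:

```shell
#!/usr/bin/env bash
# Same filter as list_installed_packages above, applied to sample output.
# (dpkg separates columns with tabs; awk treats any whitespace the same.)
sample_selections='bash install
libc6:arm64 install
old-kernel deinstall'

packages="$(grep -v deinstall <<< "${sample_selections}" | awk '{print $1}' | cut -f1 -d':')"
echo "${packages}" # bash and libc6, one per line
```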
function trap_handler_cleanup_rootfs_and_image() {
display_alert "Cleanup for rootfs and image" "trap_handler_cleanup_rootfs_and_image" "cleanup"
cd "${SRC}" || echo "Failed to cwd to ${SRC}" # Move pwd away, so unmounts work
# those will loop until they're unmounted.
umount_chroot_recursive "${SDCARD}" || true
umount_chroot_recursive "${MOUNT}" || true
# unmount tmpfs mounted on SDCARD if it exists.
mountpoint -q "${SDCARD}" && umount "${SDCARD}"
mountpoint -q "${SRC}"/cache/toolchain && umount -l "${SRC}"/cache/toolchain >&2 # @TODO: why does Igor use lazy umounts? nfs?
mountpoint -q "${SRC}"/cache/rootfs && umount -l "${SRC}"/cache/rootfs >&2
[[ $CRYPTROOT_ENABLE == yes ]] && cryptsetup luksClose "${ROOT_MAPPER}" >&2
if [[ "${PRESERVE_SDCARD_MOUNT}" == "yes" ]]; then
display_alert "Preserving SD card mount" "trap_handler_cleanup_rootfs_and_image" "warn"
return 0
fi
# shellcheck disable=SC2153 # global var.
if [[ -b "${LOOP}" ]]; then
display_alert "Freeing loop" "trap_handler_cleanup_rootfs_and_image ${LOOP}" "wrn"
losetup -d "${LOOP}" >&2 || true
fi
[[ -d "${SDCARD}" ]] && rm -rf --one-file-system "${SDCARD}"
[[ -d "${MOUNT}" ]] && rm -rf --one-file-system "${MOUNT}"
return 0 # short-circuit above, so exit clean here
}


@@ -1,14 +1,24 @@
#!/usr/bin/env bash
apt_purge_unneeded_packages() {
# remove packages that are no longer needed. rootfs cache + uninstall might have leftovers.
display_alert "No longer needed packages" "purge" "info"
chroot_sdcard_apt_get autoremove
}
install_deb_chroot() {
local package=$1
local variant=$2
local transfer=$3
local name
local desc
if [[ ${variant} != remote ]]; then
# @TODO: this can be sped up significantly by mounting debs readonly directly in chroot /root/debs and installing from there
# also won't require cleanup later
name="/root/"$(basename "${package}")
[[ ! -f "${SDCARD}${name}" ]] && run_host_command_logged cp -pv "${package}" "${SDCARD}${name}"
desc=""
else
name=$1
@@ -16,10 +26,14 @@ install_deb_chroot() {
fi
display_alert "Installing${desc}" "${name/\/root\//}"
# install in chroot via apt-get, not dpkg, so dependencies are also installed from repo if needed.
export if_error_detail_message="Installation of $name failed ${BOARD} ${RELEASE} ${BUILD_DESKTOP} ${LINUXFAMILY}"
chroot_sdcard_apt_get --no-install-recommends install "${name}"
# @TODO: mysterious. store installed/downloaded packages in deb storage. only used for u-boot deb. why?
[[ ${variant} == remote && ${transfer} == yes ]] && run_host_command_logged rsync -r "${SDCARD}"/var/cache/apt/archives/*.deb "${DEB_STORAGE}"/
# IMPORTANT! Do not use short-circuit above as last statement in a function, since it determines the result of the function.
return 0
}
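The staging step above (copy the local `.deb` under the chroot's `/root` unless it is already there, then refer to it by its in-chroot path) can be sketched standalone. `stage_package_for_chroot` is a hypothetical name for illustration; real installs then go through `chroot_sdcard_apt_get`:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the staging step in install_deb_chroot: place a local
# .deb under the chroot's /root, skipping the copy if it already exists there.
stage_package_for_chroot() {
	local package="$1" sdcard="$2"
	local name
	name="/root/$(basename "${package}")"
	mkdir -p "${sdcard}/root"
	if [[ ! -f "${sdcard}${name}" ]]; then
		cp -p "${package}" "${sdcard}${name}"
	fi
	echo "${name}" # path as seen from inside the chroot
}

SDCARD="$(mktemp -d)"            # stand-in for the real chroot dir
deb="$(mktemp --suffix=.deb)"    # stand-in for a built package
inner_path="$(stage_package_for_chroot "${deb}" "${SDCARD}")"
echo "staged at ${SDCARD}${inner_path}"
```

Installing via `apt-get` against that path (instead of `dpkg -i`) is what lets dependencies resolve from the repo, as the comment in the function notes.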


@@ -1,86 +1,59 @@
#!/usr/bin/env bash

add_apt_sources() {
local potential_paths=""
local sub_dirs_to_check=". "
if [[ ! -z "${SELECTED_CONFIGURATION+x}" ]]; then
sub_dirs_to_check+="config_${SELECTED_CONFIGURATION}"
fi
# @TODO: rpardini: The logic here is meant to be evolved over time. Originally, all of this only ran when BUILD_DESKTOP=yes.
# Igor had bumped it to run on all builds, but that adds external sources to cli and minimal.
# Here I'm tuning it down to 1/4th of the original, eg: no nala on my cli builds, thanks.
[[ "${BUILD_DESKTOP}" != "yes" ]] && get_all_potential_paths "${DEBOOTSTRAP_SEARCH_RELATIVE_DIRS}" "${sub_dirs_to_check}" "sources/apt"
[[ "${BUILD_DESKTOP}" == "yes" ]] && get_all_potential_paths "${CLI_SEARCH_RELATIVE_DIRS}" "${sub_dirs_to_check}" "sources/apt"
[[ "${BUILD_DESKTOP}" == "yes" ]] && get_all_potential_paths "${DESKTOP_ENVIRONMENTS_SEARCH_RELATIVE_DIRS}" "." "sources/apt"
[[ "${BUILD_DESKTOP}" == "yes" ]] && get_all_potential_paths "${DESKTOP_APPGROUPS_SEARCH_RELATIVE_DIRS}" "${DESKTOP_APPGROUPS_SELECTED}" "sources/apt"
display_alert "Adding additional apt sources" "add_apt_sources()" "debug"
for apt_sources_dirpath in ${potential_paths}; do
if [[ -d "${apt_sources_dirpath}" ]]; then
for apt_source_filepath in "${apt_sources_dirpath}/"*.source; do
apt_source_filepath=$(echo "${apt_source_filepath}" | sed -re 's/(^.*[^/])\.[^./]*$/\1/')
local new_apt_source
local apt_source_gpg_filepath
local apt_source_gpg_filename
local apt_source_filename
new_apt_source="$(cat "${apt_source_filepath}.source")"
apt_source_gpg_filepath="${apt_source_filepath}.gpg"
apt_source_gpg_filename="$(basename "${apt_source_gpg_filepath}")"
apt_source_filename="$(basename "${apt_source_filepath}").list"
display_alert "Adding APT Source" "${new_apt_source}" "info"
# @TODO: rpardini, why do PPAs get apt-key and others get keyrings GPG?
if [[ "${new_apt_source}" == ppa* ]]; then
# @TODO: needs software-properties-common installed.
chroot_sdcard add-apt-repository -y -n "${new_apt_source}" # -y -> Assume yes, -n -> no apt-get update
if [[ -f "${apt_source_gpg_filepath}" ]]; then
display_alert "Adding GPG Key" "via apt-key add (deprecated): ${apt_source_gpg_filename}"
run_host_command_logged cp -pv "${apt_source_gpg_filepath}" "${SDCARD}/tmp/${apt_source_gpg_filename}"
chroot_sdcard apt-key add "/tmp/${apt_source_gpg_filename}"
fi
else
# installation without software-common-properties, sources.list + key.gpg
echo "${new_apt_source}" > "${SDCARD}/etc/apt/sources.list.d/${apt_source_filename}"
if [[ -f "${apt_source_gpg_filepath}" ]]; then
display_alert "Adding GPG Key" "via keyrings: ${apt_source_gpg_filename}"
mkdir -p "${SDCARD}"/usr/share/keyrings/
run_host_command_logged cp -pv "${apt_source_gpg_filepath}" "${SDCARD}"/usr/share/keyrings/
fi
fi
done
fi
done
}
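The `sed` expression above just drops the final extension (`foo.source` becomes `foo`), so each `*.source` file can be paired with its `foo.gpg` and `foo.list` siblings. For paths like these, bash's `${path%.*}` expansion does the same thing, which makes the intent easy to verify:

```shell
#!/usr/bin/env bash
# The extension-stripping sed from add_apt_sources, next to the equivalent
# parameter expansion, for comparison.
strip_ext_sed() { echo "$1" | sed -re 's/(^.*[^/])\.[^./]*$/\1/'; }

for p in "sources/apt/armbian.source" "config/my.list.source"; do
	echo "$(strip_ext_sed "$p") vs ${p%.*}"
done
```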


@@ -14,7 +14,8 @@ function boot_logo() {
THROBBER_HEIGHT=$(identify $THROBBER | head -1 | cut -d " " -f 3 | cut -d x -f 2)
convert -alpha remove -background "#000000" $LOGO "${SDCARD}"/tmp/logo.rgb
convert -alpha remove -background "#000000" $THROBBER "${SDCARD}"/tmp/throbber%02d.rgb
run_host_x86_binary_logged "${SRC}/packages/blobs/splash/bootsplash-packer" \
--bg_red 0x00 \
--bg_green 0x00 \
--bg_blue 0x00 \
@@ -106,14 +107,18 @@ function boot_logo() {
--blob "${SDCARD}"/tmp/throbber72.rgb \
--blob "${SDCARD}"/tmp/throbber73.rgb \
--blob "${SDCARD}"/tmp/throbber74.rgb \
"${SDCARD}"/lib/firmware/bootsplash.armbian \
"| grep --line-buffered -v -e 'File header' -e 'Picture header' -e 'Blob header' -e 'length:' -e 'type:' -e 'picture_id:' -e 'bg_' -e 'num_' -e '^$'"
if [[ $BOOT_LOGO == yes || $BOOT_LOGO == desktop && $BUILD_DESKTOP == yes ]]; then
[[ -f "${SDCARD}"/boot/armbianEnv.txt ]] && grep -q '^bootlogo' "${SDCARD}"/boot/armbianEnv.txt &&
sed -i 's/^bootlogo.*/bootlogo=true/' "${SDCARD}"/boot/armbianEnv.txt || echo 'bootlogo=true' >> "${SDCARD}"/boot/armbianEnv.txt
[[ -f "${SDCARD}"/boot/boot.ini ]] && sed -i 's/^setenv bootlogo.*/setenv bootlogo "true"/' "${SDCARD}"/boot/boot.ini
# enable additional services. @TODO: rpardini: really wonder where do these come from?
chroot_sdcard "systemctl --no-reload enable bootsplash-ask-password-console.path || true"
chroot_sdcard "systemctl --no-reload enable bootsplash-hide-when-booted.service || true"
chroot_sdcard "systemctl --no-reload enable bootsplash-show-on-shutdown.service || true"
fi
# enable additional services
chroot "${SDCARD}" /bin/bash -c "systemctl --no-reload enable bootsplash-ask-password-console.path >/dev/null 2>&1"
chroot "${SDCARD}" /bin/bash -c "systemctl --no-reload enable bootsplash-hide-when-booted.service >/dev/null 2>&1"
chroot "${SDCARD}" /bin/bash -c "systemctl --no-reload enable bootsplash-show-on-shutdown.service >/dev/null 2>&1"
return 0
}
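The armbianEnv.txt handling above uses an "update-or-append" idiom: rewrite the `bootlogo` line in place if one exists, otherwise append it. A standalone sketch of that idiom, using a throwaway temp file rather than a real armbianEnv.txt:

```shell
#!/usr/bin/env bash
# Sketch of the update-or-append idiom used above for armbianEnv.txt.
f=$(mktemp)
printf 'verbosity=1\nbootlogo=false\n' > "$f"
# if a bootlogo line exists, rewrite it in place; otherwise append one
grep -q '^bootlogo' "$f" && sed -i 's/^bootlogo.*/bootlogo=true/' "$f" || echo 'bootlogo=true' >> "$f"
result=$(grep '^bootlogo' "$f")
echo "$result" # → bootlogo=true
rm "$f"
```

Note the `A && B || C` chain is not a true if/else: if `sed` itself failed, the append branch would also run. That is acceptable here because both branches converge on the same end state.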


@@ -1,42 +1,16 @@
#!/usr/bin/env bash
# get_package_list_hash
#
# returns md5 hash for current package list and rootfs cache version
get_package_list_hash() {
local package_arr exclude_arr
local list_content
read -ra package_arr <<< "${DEBOOTSTRAP_LIST} ${PACKAGE_LIST}"
read -ra exclude_arr <<< "${PACKAGE_LIST_EXCLUDE}"
(
printf "%s\n" "${package_arr[@]}"
printf -- "-%s\n" "${exclude_arr[@]}"
) | sort -u | md5sum | cut -d' ' -f 1
}
# this gets from cache or produces a new rootfs, and leaves a mounted chroot "$SDCARD" at the end.
get_or_create_rootfs_cache_chroot_sdcard() {
# @TODO: this was moved from configuration to this stage, that way configuration can be offline
# if variable not provided, check which is the current version in the cache storage on GitHub.
if [[ -z "${ROOTFSCACHE_VERSION}" ]]; then
display_alert "ROOTFSCACHE_VERSION not set, getting remotely" "Github API and armbian/mirror " "debug"
ROOTFSCACHE_VERSION=$(curl https://api.github.com/repos/armbian/cache/releases/latest -s --fail | jq .tag_name -r || true)
# anonymous API access is very limited, which is why we need a fallback
ROOTFSCACHE_VERSION=${ROOTFSCACHE_VERSION:-$(curl -L --silent https://cache.armbian.com/rootfs/latest --fail)}
fi
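The `ROOTFSCACHE_VERSION` resolution above relies on the `${var:-default}` fallback idiom: try the GitHub API first, and only if that left the variable empty, fall back to the mirror. Shown in isolation, with the failing remote lookup simulated by a hypothetical stand-in function:

```shell
#!/usr/bin/env bash
# The "remote value with a fallback" idiom used above, in isolation.
primary_lookup() { return 1; }   # stand-in for the failing GitHub API call
VALUE=$(primary_lookup || true)  # lookup failed, so VALUE ends up empty
VALUE=${VALUE:-fallback-version} # ${var:-default} keeps VALUE if non-empty
echo "$VALUE" # → fallback-version
```

The `|| true` matters when `set -e` (or `bash -e`) is in effect: without it, the failed primary lookup would abort the script before the fallback ever runs.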
# get_rootfs_cache_list <cache_type> <packages_hash>
#
# returns a list of versions of all available caches, from remote and local.
get_rootfs_cache_list() {
local cache_type=$1
local packages_hash=$2
{
curl --silent --fail -L "https://api.github.com/repos/armbian/cache/releases?per_page=3" | jq -r '.[].tag_name' \
|| curl --silent --fail -L https://cache.armbian.com/rootfs/list
find ${SRC}/cache/rootfs/ -mtime -7 -name "${ARCH}-${RELEASE}-${cache_type}-${packages_hash}-*.tar.zst" |
sed -e 's#^.*/##' |
sed -e 's#\..*$##' |
awk -F'-' '{print $5}'
} | sort | uniq
}
# create_rootfs_cache
#
# unpacks cached rootfs for $RELEASE or creates one
#
create_rootfs_cache() {
local packages_hash=$(get_package_list_hash)
local packages_hash=${packages_hash:0:8}
@@ -67,235 +41,36 @@ create_rootfs_cache() {
[[ -f $cache_fname && ! -f ${cache_fname}.aria2 ]] && break
done
##PRESERVE## # check if cache exists and we want to make it
##PRESERVE## if [[ -f ${cache_fname} && "$ROOT_FS_CREATE_ONLY" == "yes" ]]; then
##PRESERVE## display_alert "Checking cache integrity" "$display_name" "info"
##PRESERVE## zstd -tqq ${cache_fname} || {
##PRESERVE## rm $cache_fname
##PRESERVE## exit_with_error "Cache $cache_fname is corrupted and was deleted. Please restart!"
##PRESERVE## }
##PRESERVE## fi
# if aria2 file exists, download didn't succeed
if [[ "$ROOT_FS_CREATE_ONLY" != "yes" && -f $cache_fname && ! -f $cache_fname.aria2 ]]; then
local date_diff=$((($(date +%s) - $(stat -c %Y $cache_fname)) / 86400))
display_alert "Extracting $cache_name" "$date_diff days old" "info"
pv -p -b -r -c -N "[ .... ] $cache_name" "$cache_fname" | zstdmt -dc | tar xp --xattrs -C $SDCARD/
pv -p -b -r -c -N "$(logging_echo_prefix_for_pv "extract_rootfs") $cache_name" "$cache_fname" | zstdmt -dc | tar xp --xattrs -C $SDCARD/
[[ $? -ne 0 ]] && rm $cache_fname && exit_with_error "Cache $cache_fname is corrupted and was deleted. Restart."
rm $SDCARD/etc/resolv.conf
echo "nameserver $NAMESERVER" >> $SDCARD/etc/resolv.conf
create_sources_list "$RELEASE" "$SDCARD/"
else
local ROOT_FS_CREATE_VERSION=${ROOT_FS_CREATE_VERSION:-$(date --utc +"%Y%m%d")}
local cache_name=${ARCH}-${RELEASE}-${cache_type}-${packages_hash}-${ROOT_FS_CREATE_VERSION}.tar.zst
local cache_fname=${SRC}/cache/rootfs/${cache_name}
display_alert "Creating new rootfs cache for" "$RELEASE" "info"
# stage: debootstrap base system
if [[ $NO_APT_CACHER != yes ]]; then
# apt-cacher-ng apt-get proxy parameter
local apt_extra="-o Acquire::http::Proxy=\"http://${APT_PROXY_ADDR:-localhost:3142}\""
local apt_mirror="http://${APT_PROXY_ADDR:-localhost:3142}/$APT_MIRROR"
else
local apt_mirror="http://$APT_MIRROR"
fi
# fancy progress bars
[[ -z $OUTPUT_DIALOG ]] && local apt_extra_progress="--show-progress -o DPKG::Progress-Fancy=1"
create_new_rootfs_cache
# needed for backend to keep current only
echo "$cache_fname" > $cache_fname.current
# Ok so for eval+PIPESTATUS.
# Try this on your bash shell:
# ONEVAR="testing" eval 'bash -c "echo value once $ONEVAR && false && echo value twice $ONEVAR"' '| grep value' '| grep value' ; echo ${PIPESTATUS[*]}
# Notice how PIPESTATUS has only one element, and it is always true, although we failed explicitly with false in the middle of the bash.
# That is because eval itself is considered a single command; no matter how many pipes you put in there, you'll get a single value: the return code of the LAST pipe.
# Let's export the value of the pipe inside eval so we know outside what happened:
# ONEVAR="testing" eval 'bash -e -c "echo value once $ONEVAR && false && echo value twice $ONEVAR"' '| grep value' '| grep value' ';EVALPIPE=(${PIPESTATUS[@]})' ; echo ${EVALPIPE[*]}
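The comment block above can be condensed into a runnable experiment: capture `PIPESTATUS` inside the `eval` string itself, so the inner pipeline's per-command exit codes survive to the outside.

```shell
#!/usr/bin/env bash
# Runnable version of the eval+PIPESTATUS experiment described above.
# The inner bash -c fails (exit 1, because of the false); grep succeeds (0).
eval 'bash -c "echo value && false && echo twice"' '| grep value' ';EVALPIPE=(${PIPESTATUS[@]})'
# EVALPIPE[0] is the bash -c exit code, EVALPIPE[1] is grep's exit code.
echo "captured: ${EVALPIPE[*]}" # → captured: 1 0
```

Had `PIPESTATUS` been read after the `eval` instead of inside it, it would hold a single element: the exit status of `eval` itself, which is just the status of the last pipe stage.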
display_alert "Installing base system" "Stage 1/2" "info"
cd $SDCARD # this will prevent error sh: 0: getcwd() failed
eval 'debootstrap --variant=minbase --include=${DEBOOTSTRAP_LIST// /,} ${PACKAGE_LIST_EXCLUDE:+ --exclude=${PACKAGE_LIST_EXCLUDE// /,}} \
--arch=$ARCH --components=${DEBOOTSTRAP_COMPONENTS} $DEBOOTSTRAP_OPTION --foreign $RELEASE $SDCARD/ $apt_mirror' \
${PROGRESS_LOG_TO_FILE:+' | tee -a $DEST/${LOG_SUBPATH}/debootstrap.log'} \
${OUTPUT_DIALOG:+' | dialog --backtitle "$backtitle" --progressbox "Debootstrap (stage 1/2)..." $TTY_Y $TTY_X'} \
${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} ';EVALPIPE=(${PIPESTATUS[@]})'
[[ ${EVALPIPE[0]} -ne 0 || ! -f $SDCARD/debootstrap/debootstrap ]] && exit_with_error "Debootstrap base system for ${BRANCH} ${BOARD} ${RELEASE} ${DESKTOP_APPGROUPS_SELECTED} ${DESKTOP_ENVIRONMENT} ${BUILD_MINIMAL} first stage failed"
cp /usr/bin/$QEMU_BINARY $SDCARD/usr/bin/
mkdir -p $SDCARD/usr/share/keyrings/
cp /usr/share/keyrings/*-archive-keyring.gpg $SDCARD/usr/share/keyrings/
display_alert "Installing base system" "Stage 2/2" "info"
eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -e -c "/debootstrap/debootstrap --second-stage"' \
${PROGRESS_LOG_TO_FILE:+' | tee -a $DEST/${LOG_SUBPATH}/debootstrap.log'} \
${OUTPUT_DIALOG:+' | dialog --backtitle "$backtitle" --progressbox "Debootstrap (stage 2/2)..." $TTY_Y $TTY_X'} \
${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} ';EVALPIPE=(${PIPESTATUS[@]})'
[[ ${EVALPIPE[0]} -ne 0 || ! -f $SDCARD/bin/bash ]] && exit_with_error "Debootstrap base system for ${BRANCH} ${BOARD} ${RELEASE} ${DESKTOP_APPGROUPS_SELECTED} ${DESKTOP_ENVIRONMENT} ${BUILD_MINIMAL} second stage failed"
mount_chroot "$SDCARD"
display_alert "Diverting" "initctl/start-stop-daemon" "info"
# policy-rc.d script prevents starting or reloading services during image creation
printf '#!/bin/sh\nexit 101' > $SDCARD/usr/sbin/policy-rc.d
LC_ALL=C LANG=C chroot $SDCARD /bin/bash -c "dpkg-divert --quiet --local --rename --add /sbin/initctl" &> /dev/null
LC_ALL=C LANG=C chroot $SDCARD /bin/bash -c "dpkg-divert --quiet --local --rename --add /sbin/start-stop-daemon" &> /dev/null
printf '#!/bin/sh\necho "Warning: Fake start-stop-daemon called, doing nothing"' > $SDCARD/sbin/start-stop-daemon
printf '#!/bin/sh\necho "Warning: Fake initctl called, doing nothing"' > $SDCARD/sbin/initctl
chmod 755 $SDCARD/usr/sbin/policy-rc.d
chmod 755 $SDCARD/sbin/initctl
chmod 755 $SDCARD/sbin/start-stop-daemon
# stage: configure language and locales
display_alert "Generating default locale" "info"
if [[ -f $SDCARD/etc/locale.gen ]]; then
sed -i '/ C.UTF-8/s/^# //g' $SDCARD/etc/locale.gen
sed -i '/en_US.UTF-8/s/^# //g' $SDCARD/etc/locale.gen
fi
eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -c "locale-gen"' ${OUTPUT_VERYSILENT:+' >/dev/null 2>&1'}
eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -c "update-locale --reset LANG=en_US.UTF-8"' \
${OUTPUT_VERYSILENT:+' >/dev/null 2>&1'}
if [[ -f $SDCARD/etc/default/console-setup ]]; then
sed -e 's/CHARMAP=.*/CHARMAP="UTF-8"/' -e 's/FONTSIZE=.*/FONTSIZE="8x16"/' \
-e 's/CODESET=.*/CODESET="guess"/' -i $SDCARD/etc/default/console-setup
eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -c "setupcon --save --force"'
fi
# stage: create apt-get sources list
create_sources_list "$RELEASE" "$SDCARD/"
# add armhf architecture to arm64, unless configured not to do so.
if [[ "a${ARMHF_ARCH}" != "askip" ]]; then
[[ $ARCH == arm64 ]] && eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -c "dpkg --add-architecture armhf"'
fi
# this should fix resolvconf installation failure in some cases
chroot $SDCARD /bin/bash -c 'echo "resolvconf resolvconf/linkify-resolvconf boolean false" | debconf-set-selections'
# TODO change name of the function from "desktop" and move to appropriate location
add_desktop_package_sources
# stage: update packages list
display_alert "Updating package list" "$RELEASE" "info"
eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -e -c "apt-get -q -y $apt_extra update"' \
${PROGRESS_LOG_TO_FILE:+' | tee -a $DEST/${LOG_SUBPATH}/debootstrap.log'} \
${OUTPUT_DIALOG:+' | dialog --backtitle "$backtitle" --progressbox "Updating package lists..." $TTY_Y $TTY_X'} \
${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} ';EVALPIPE=(${PIPESTATUS[@]})'
[[ ${EVALPIPE[0]} -ne 0 ]] && display_alert "Updating package lists" "failed" "wrn"
# stage: upgrade base packages from xxx-updates and xxx-backports repository branches
display_alert "Upgrading base packages" "Armbian" "info"
eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -e -c "DEBIAN_FRONTEND=noninteractive apt-get -y -q \
$apt_extra $apt_extra_progress upgrade"' \
${PROGRESS_LOG_TO_FILE:+' | tee -a $DEST/${LOG_SUBPATH}/debootstrap.log'} \
${OUTPUT_DIALOG:+' | dialog --backtitle "$backtitle" --progressbox "Upgrading base packages..." $TTY_Y $TTY_X'} \
${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} ';EVALPIPE=(${PIPESTATUS[@]})'
[[ ${EVALPIPE[0]} -ne 0 ]] && display_alert "Upgrading base packages" "failed" "wrn"
# stage: install additional packages
display_alert "Installing the main packages for" "Armbian" "info"
eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -e -c "DEBIAN_FRONTEND=noninteractive apt-get -y -q \
$apt_extra $apt_extra_progress --no-install-recommends install $PACKAGE_MAIN_LIST"' \
${PROGRESS_LOG_TO_FILE:+' | tee -a $DEST/${LOG_SUBPATH}/debootstrap.log'} \
${OUTPUT_DIALOG:+' | dialog --backtitle "$backtitle" --progressbox "Installing Armbian main packages..." $TTY_Y $TTY_X'} \
${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} ';EVALPIPE=(${PIPESTATUS[@]})'
[[ ${EVALPIPE[0]} -ne 0 ]] && exit_with_error "Installation of Armbian main packages for ${BRANCH} ${BOARD} ${RELEASE} ${DESKTOP_APPGROUPS_SELECTED} ${DESKTOP_ENVIRONMENT} ${BUILD_MINIMAL} failed"
if [[ $BUILD_DESKTOP == "yes" ]]; then
local apt_desktop_install_flags=""
if [[ ! -z ${DESKTOP_APT_FLAGS_SELECTED+x} ]]; then
for flag in ${DESKTOP_APT_FLAGS_SELECTED}; do
apt_desktop_install_flags+=" --install-${flag}"
done
else
# Myy : Using the previous default option, if the variable isn't defined
# And ONLY if it's not defined !
apt_desktop_install_flags+=" --no-install-recommends"
fi
display_alert "Installing the desktop packages for" "Armbian" "info"
eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -e -c "DEBIAN_FRONTEND=noninteractive apt-get -y -q \
$apt_extra $apt_extra_progress install ${apt_desktop_install_flags} $PACKAGE_LIST_DESKTOP"' \
${PROGRESS_LOG_TO_FILE:+' | tee -a $DEST/${LOG_SUBPATH}/debootstrap.log'} \
${OUTPUT_DIALOG:+' | dialog --backtitle "$backtitle" --progressbox "Installing Armbian desktop packages..." $TTY_Y $TTY_X'} \
${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} ';EVALPIPE=(${PIPESTATUS[@]})'
[[ ${EVALPIPE[0]} -ne 0 ]] && exit_with_error "Installation of Armbian desktop packages for ${BRANCH} ${BOARD} ${RELEASE} ${DESKTOP_APPGROUPS_SELECTED} ${DESKTOP_ENVIRONMENT} ${BUILD_MINIMAL} failed"
fi
# stage: check md5 sum of installed packages. Just in case.
display_alert "Checking MD5 sum of installed packages" "debsums" "info"
chroot $SDCARD /bin/bash -e -c "debsums -s"
[[ $? -ne 0 ]] && exit_with_error "MD5 sums check of installed packages failed"
# Remove packages from packages.uninstall
display_alert "Uninstall packages" "$PACKAGE_LIST_UNINSTALL" "info"
eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -e -c "DEBIAN_FRONTEND=noninteractive apt-get -y -qq \
$apt_extra $apt_extra_progress purge $PACKAGE_LIST_UNINSTALL"' \
${PROGRESS_LOG_TO_FILE:+' >> $DEST/${LOG_SUBPATH}/debootstrap.log'} \
${OUTPUT_DIALOG:+' | dialog --backtitle "$backtitle" --progressbox "Removing packages.uninstall packages..." $TTY_Y $TTY_X'} \
${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} ';EVALPIPE=(${PIPESTATUS[@]})'
[[ ${EVALPIPE[0]} -ne 0 ]] && exit_with_error "Installation of Armbian packages failed"
# stage: purge residual packages
display_alert "Purging residual packages for" "Armbian" "info"
PURGINGPACKAGES=$(chroot $SDCARD /bin/bash -c "dpkg -l | grep \"^rc\" | awk '{print \$2}' | tr \"\n\" \" \"")
eval 'LC_ALL=C LANG=C chroot $SDCARD /bin/bash -e -c "DEBIAN_FRONTEND=noninteractive apt-get -y -q \
$apt_extra $apt_extra_progress remove --purge $PURGINGPACKAGES"' \
${PROGRESS_LOG_TO_FILE:+' | tee -a $DEST/${LOG_SUBPATH}/debootstrap.log'} \
${OUTPUT_DIALOG:+' | dialog --backtitle "$backtitle" --progressbox "Purging residual Armbian packages..." $TTY_Y $TTY_X'} \
${OUTPUT_VERYSILENT:+' >/dev/null 2>/dev/null'} ';EVALPIPE=(${PIPESTATUS[@]})'
[[ ${EVALPIPE[0]} -ne 0 ]] && exit_with_error "Purging of residual Armbian packages failed"
# stage: remove downloaded packages
chroot $SDCARD /bin/bash -c "apt-get -y autoremove; apt-get clean"
# DEBUG: print free space
local freespace=$(LC_ALL=C df -h)
echo -e "$freespace" >> $DEST/${LOG_SUBPATH}/debootstrap.log
display_alert "Free SD cache" "$(echo -e "$freespace" | awk -v mp="${SDCARD}" '$6==mp {print $5}')" "info"
display_alert "Mount point" "$(echo -e "$freespace" | awk -v mp="${MOUNT}" '$6==mp {print $5}')" "info"
# create list of installed packages for debug purposes
chroot $SDCARD /bin/bash -c "dpkg -l | grep ^ii | awk '{ print \$2\",\"\$3 }'" > ${cache_fname}.list 2>&1
# creating xapian index that synaptic runs faster
if [[ $BUILD_DESKTOP == yes ]]; then
display_alert "Recreating Synaptic search index" "Please wait" "info"
chroot $SDCARD /bin/bash -c "[[ -f /usr/sbin/update-apt-xapian-index ]] && /usr/sbin/update-apt-xapian-index -u"
fi
# this is needed for the build process later since resolvconf generated file in /run is not saved
rm $SDCARD/etc/resolv.conf
echo "nameserver $NAMESERVER" >> $SDCARD/etc/resolv.conf
# Remove `machine-id` (https://www.freedesktop.org/software/systemd/man/machine-id.html)
# Note: This will mark machine `firstboot`
echo "uninitialized" > "${SDCARD}/etc/machine-id"
rm "${SDCARD}/var/lib/dbus/machine-id"
# Mask `systemd-firstboot.service` which will prompt locale, timezone and root-password too early.
# `armbian-first-run` will do the same thing later
chroot $SDCARD /bin/bash -c "systemctl mask systemd-firstboot.service >/dev/null 2>&1"
# stage: make rootfs cache archive
display_alert "Ending debootstrap process and preparing cache" "$RELEASE" "info"
sync
# the only reason to unmount here is compression progress display
# based on rootfs size calculation
umount_chroot "$SDCARD"
tar cp --xattrs --directory=$SDCARD/ --exclude='./dev/*' --exclude='./proc/*' --exclude='./run/*' --exclude='./tmp/*' \
--exclude='./sys/*' --exclude='./home/*' --exclude='./root/*' . | pv -p -b -r -s $(du -sb $SDCARD/ | cut -f1) -N "$cache_name" | zstdmt -19 -c > $cache_fname
# sign rootfs cache archive that it can be used for web cache once. Internal purposes
if [[ -n "${GPG_PASS}" && "${SUDO_USER}" ]]; then
[[ -n ${SUDO_USER} ]] && sudo chown -R ${SUDO_USER}:${SUDO_USER} "${DEST}"/images/
echo "${GPG_PASS}" | sudo -H -u ${SUDO_USER} bash -c "gpg --passphrase-fd 0 --armor --detach-sign --pinentry-mode loopback --batch --yes ${cache_fname}" || exit 1
fi
fi
@@ -304,9 +79,229 @@ create_rootfs_cache() {
umount --lazy "$SDCARD"
rm -rf $SDCARD
# remove exit trap
trap - INT TERM EXIT
remove_all_trap_handlers INT TERM EXIT
exit
fi
mount_chroot "$SDCARD"
mount_chroot "${SDCARD}"
}
function create_new_rootfs_cache() {
# this is different between debootstrap and regular apt-get; here we use acng as a prefix to the real repo
local debootstrap_apt_mirror="http://${APT_MIRROR}"
if [[ $NO_APT_CACHER != yes ]]; then
local debootstrap_apt_mirror="http://${APT_PROXY_ADDR:-localhost:3142}/${APT_MIRROR}"
acng_check_status_or_restart
fi
display_alert "Installing base system" "Stage 1/2" "info"
cd "${SDCARD}" || exit_with_error "cray-cray about SDCARD" "${SDCARD}" # this will prevent error sh: 0: getcwd() failed
local -a debootstrap_arguments=(
"--variant=minbase" # minimal base variant. go ask Debian about it.
"--include=${DEBOOTSTRAP_LIST// /,}" # from aggregation?
${PACKAGE_LIST_EXCLUDE:+ --exclude="${PACKAGE_LIST_EXCLUDE// /,}"} # exclude some
"--arch=${ARCH}" # the arch
"--components=${DEBOOTSTRAP_COMPONENTS}" # from aggregation?
"--foreign" "${RELEASE}" "${SDCARD}/" "${debootstrap_apt_mirror}" # path and mirror
)
run_host_command_logged debootstrap "${debootstrap_arguments[@]}" || {
exit_with_error "Debootstrap first stage failed" "${BRANCH} ${BOARD} ${RELEASE} ${DESKTOP_APPGROUPS_SELECTED} ${DESKTOP_ENVIRONMENT} ${BUILD_MINIMAL}"
}
[[ ! -f ${SDCARD}/debootstrap/debootstrap ]] && exit_with_error "Debootstrap first stage did not produce marker file"
deploy_qemu_binary_to_chroot "${SDCARD}" # this is cleaned-up later by post_debootstrap_tweaks()
mkdir -p "${SDCARD}/usr/share/keyrings/"
run_host_command_logged cp -pv /usr/share/keyrings/*-archive-keyring.gpg "${SDCARD}/usr/share/keyrings/"
display_alert "Installing base system" "Stage 2/2" "info"
export if_error_detail_message="Debootstrap second stage failed ${BRANCH} ${BOARD} ${RELEASE} ${DESKTOP_APPGROUPS_SELECTED} ${DESKTOP_ENVIRONMENT} ${BUILD_MINIMAL}"
chroot_sdcard LC_ALL=C LANG=C /debootstrap/debootstrap --second-stage
[[ ! -f "${SDCARD}/bin/bash" ]] && exit_with_error "Debootstrap second stage did not produce /bin/bash"
mount_chroot "${SDCARD}"
display_alert "Diverting" "initctl/start-stop-daemon" "info"
# policy-rc.d script prevents starting or reloading services during image creation
printf '#!/bin/sh\nexit 101' > $SDCARD/usr/sbin/policy-rc.d
chroot_sdcard LC_ALL=C LANG=C dpkg-divert --quiet --local --rename --add /sbin/initctl
chroot_sdcard LC_ALL=C LANG=C dpkg-divert --quiet --local --rename --add /sbin/start-stop-daemon
printf '#!/bin/sh\necho "Warning: Fake start-stop-daemon called, doing nothing"' > "$SDCARD/sbin/start-stop-daemon"
printf '#!/bin/sh\necho "Warning: Fake initctl called, doing nothing"' > "$SDCARD/sbin/initctl"
chmod 755 "$SDCARD/usr/sbin/policy-rc.d"
chmod 755 "$SDCARD/sbin/initctl"
chmod 755 "$SDCARD/sbin/start-stop-daemon"
# stage: configure language and locales
display_alert "Configuring locales" "$DEST_LANG" "info"
[[ -f $SDCARD/etc/locale.gen ]] && sed -i "s/^# $DEST_LANG/$DEST_LANG/" $SDCARD/etc/locale.gen
chroot_sdcard LC_ALL=C LANG=C locale-gen "$DEST_LANG"
chroot_sdcard LC_ALL=C LANG=C update-locale "LANG=$DEST_LANG" "LANGUAGE=$DEST_LANG" "LC_MESSAGES=$DEST_LANG"
if [[ -f $SDCARD/etc/default/console-setup ]]; then
# @TODO: Should be configurable.
sed -e 's/CHARMAP=.*/CHARMAP="UTF-8"/' -e 's/FONTSIZE=.*/FONTSIZE="8x16"/' \
-e 's/CODESET=.*/CODESET="guess"/' -i "$SDCARD/etc/default/console-setup"
chroot_sdcard LC_ALL=C LANG=C setupcon --save --force
fi
# stage: create apt-get sources list (basic Debian/Ubuntu apt sources, no external nor PPAS)
create_sources_list "$RELEASE" "$SDCARD/"
# add armhf architecture to arm64, unless configured not to do so.
if [[ "a${ARMHF_ARCH}" != "askip" ]]; then
[[ $ARCH == arm64 ]] && chroot_sdcard LC_ALL=C LANG=C dpkg --add-architecture armhf
fi
# this should fix resolvconf installation failure in some cases
chroot_sdcard 'echo "resolvconf resolvconf/linkify-resolvconf boolean false" | debconf-set-selections'
# Add external / PPAs to apt sources; decides internally based on minimal/cli/desktop dir/file structure
add_apt_sources
# use asset logging for this; actually log contents of the files too
run_host_command_logged ls -l "${SDCARD}/usr/share/keyrings"
run_host_command_logged ls -l "${SDCARD}/etc/apt/sources.list.d"
run_host_command_logged cat "${SDCARD}/etc/apt/sources.list"
# stage: update packages list
display_alert "Updating package list" "$RELEASE" "info"
do_with_retries 3 chroot_sdcard_apt_get update
# stage: upgrade base packages from xxx-updates and xxx-backports repository branches
display_alert "Upgrading base packages" "Armbian" "info"
do_with_retries 3 chroot_sdcard_apt_get upgrade
# stage: install additional packages
display_alert "Installing the main packages for" "Armbian" "info"
export if_error_detail_message="Installation of Armbian main packages for ${BRANCH} ${BOARD} ${RELEASE} ${DESKTOP_APPGROUPS_SELECTED} ${DESKTOP_ENVIRONMENT} ${BUILD_MINIMAL} failed"
# First, try to download-only up to 3 times, to work around network/proxy problems.
do_with_retries 3 chroot_sdcard_apt_get_install_download_only "$PACKAGE_MAIN_LIST"
# Now do the install, all packages should have been downloaded by now
chroot_sdcard_apt_get_install "$PACKAGE_MAIN_LIST"
if [[ $BUILD_DESKTOP == "yes" ]]; then
local apt_desktop_install_flags=""
if [[ ! -z ${DESKTOP_APT_FLAGS_SELECTED+x} ]]; then
for flag in ${DESKTOP_APT_FLAGS_SELECTED}; do
apt_desktop_install_flags+=" --install-${flag}"
done
else
# Myy : Using the previous default option, if the variable isn't defined
# And ONLY if it's not defined !
apt_desktop_install_flags+=" --no-install-recommends"
fi
display_alert "Installing the desktop packages for" "Armbian" "info"
# Retry download-only 3 times first.
do_with_retries 3 chroot_sdcard_apt_get_install_download_only ${apt_desktop_install_flags} $PACKAGE_LIST_DESKTOP
# Then do the actual install.
export if_error_detail_message="Installation of Armbian desktop packages for ${BRANCH} ${BOARD} ${RELEASE} ${DESKTOP_APPGROUPS_SELECTED} ${DESKTOP_ENVIRONMENT} ${BUILD_MINIMAL} failed"
chroot_sdcard_apt_get install ${apt_desktop_install_flags} $PACKAGE_LIST_DESKTOP
fi
# stage: check md5 sum of installed packages. Just in case.
display_alert "Checking MD5 sum of installed packages" "debsums" "info"
export if_error_detail_message="Check MD5 sum of installed packages failed"
chroot_sdcard debsums --silent
# Remove packages from packages.uninstall
display_alert "Uninstall packages" "$PACKAGE_LIST_UNINSTALL" "info"
# shellcheck disable=SC2086
chroot_sdcard_apt_get purge $PACKAGE_LIST_UNINSTALL
# stage: purge residual packages
display_alert "Purging residual packages for" "Armbian" "info"
PURGINGPACKAGES=$(chroot $SDCARD /bin/bash -c "dpkg -l | grep \"^rc\" | awk '{print \$2}' | tr \"\n\" \" \"")
chroot_sdcard_apt_get remove --purge $PURGINGPACKAGES
# stage: remove downloaded packages
chroot_sdcard_apt_get autoremove
chroot_sdcard_apt_get clean
# DEBUG: print free space
local freespace=$(LC_ALL=C df -h)
display_alert "Free SD cache" "$(echo -e "$freespace" | awk -v mp="${SDCARD}" '$6==mp {print $5}')" "info"
[[ -d "${MOUNT}" ]] &&
display_alert "Mount point" "$(echo -e "$freespace" | awk -v mp="${MOUNT}" '$6==mp {print $5}')" "info"
# create list of installed packages for debug purposes - this captures its own stdout.
chroot "${SDCARD}" /bin/bash -c "dpkg -l | grep ^ii | awk '{ print \$2\",\"\$3 }' > '${cache_fname}.list'"
# creating xapian index that synaptic runs faster
if [[ $BUILD_DESKTOP == yes ]]; then
display_alert "Recreating Synaptic search index" "Please wait" "info"
chroot_sdcard "[[ -f /usr/sbin/update-apt-xapian-index ]] && /usr/sbin/update-apt-xapian-index -u || true"
fi
# this is needed for the build process later since resolvconf generated file in /run is not saved
rm $SDCARD/etc/resolv.conf
echo "nameserver $NAMESERVER" >> $SDCARD/etc/resolv.conf
# Remove `machine-id` (https://www.freedesktop.org/software/systemd/man/machine-id.html)
# Note: This will mark machine `firstboot`
echo "uninitialized" > "${SDCARD}/etc/machine-id"
rm "${SDCARD}/var/lib/dbus/machine-id"
# Mask `systemd-firstboot.service` which will prompt locale, timezone and root-password too early.
# `armbian-first-run` will do the same thing later
chroot $SDCARD /bin/bash -c "systemctl mask systemd-firstboot.service >/dev/null 2>&1"
# stage: make rootfs cache archive
display_alert "Ending debootstrap process and preparing cache" "$RELEASE" "info"
sync
# the only reason to unmount here is compression progress display
# based on rootfs size calculation
umount_chroot "$SDCARD"
tar cp --xattrs --directory=$SDCARD/ --exclude='./dev/*' --exclude='./proc/*' --exclude='./run/*' --exclude='./tmp/*' \
--exclude='./sys/*' --exclude='./home/*' --exclude='./root/*' . | pv -p -b -r -s "$(du -sb $SDCARD/ | cut -f1)" -N "$(logging_echo_prefix_for_pv "store_rootfs") $cache_name" | zstdmt -5 -c > "${cache_fname}"
# sign rootfs cache archive that it can be used for web cache once. Internal purposes
if [[ -n "${GPG_PASS}" && "${SUDO_USER}" ]]; then
[[ -n ${SUDO_USER} ]] && sudo chown -R ${SUDO_USER}:${SUDO_USER} "${DEST}"/images/
echo "${GPG_PASS}" | sudo -H -u ${SUDO_USER} bash -c "gpg --passphrase-fd 0 --armor --detach-sign --pinentry-mode loopback --batch --yes ${cache_fname}" || exit 1
fi
# needed for backend to keep current only
echo "$cache_fname" > $cache_fname.current
return 0 # protect against possible future short-circuiting above this
}
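The `do_with_retries 3 …` calls above wrap the flaky apt-get steps (update, upgrade, download-only installs) so transient network or proxy failures don't abort the whole rootfs build. The real helper lives elsewhere in the build system; a hedged sketch of what such a retry wrapper might look like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a retry helper in the spirit of do_with_retries;
# the actual Armbian implementation may differ.
retry_sketch() {
	local retries=$1
	shift
	local attempt
	for ((attempt = 1; attempt <= retries; attempt++)); do
		"$@" && return 0 # command succeeded, stop retrying
		echo "Attempt ${attempt}/${retries} failed: $*" >&2
		sleep 1 # small pause before the next attempt
	done
	return 1 # all attempts exhausted
}
retry_sketch 3 true && echo "ok" # → ok
```

Splitting install into a retried download-only phase followed by a single real install, as done above, keeps the retries on the network-bound part while the state-changing `apt-get install` runs exactly once.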
# get_package_list_hash
#
# returns md5 hash for current package list and rootfs cache version
get_package_list_hash() {
local package_arr exclude_arr
local list_content
read -ra package_arr <<< "${DEBOOTSTRAP_LIST} ${PACKAGE_LIST}"
read -ra exclude_arr <<< "${PACKAGE_LIST_EXCLUDE}"
(
printf "%s\n" "${package_arr[@]}"
printf -- "-%s\n" "${exclude_arr[@]}"
) | sort -u | md5sum | cut -d' ' -f 1
}
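The hashing scheme implemented by `get_package_list_hash` above can be exercised standalone; the package lists here are made-up examples, not Armbian defaults:

```shell
#!/usr/bin/env bash
# Standalone run of the package-list hashing scheme from above.
DEBOOTSTRAP_LIST="locales gnupg" # example values only
PACKAGE_LIST="bash wget"
PACKAGE_LIST_EXCLUDE="nano"
read -ra package_arr <<< "${DEBOOTSTRAP_LIST} ${PACKAGE_LIST}"
read -ra exclude_arr <<< "${PACKAGE_LIST_EXCLUDE}"
# excluded packages are prefixed with "-", then everything is sorted and
# de-duplicated so that list ordering never changes the hash
packages_hash=$( (printf "%s\n" "${package_arr[@]}"; printf -- "-%s\n" "${exclude_arr[@]}") | sort -u | md5sum | cut -d' ' -f 1)
# create_rootfs_cache truncates the hash to 8 characters for the cache file name
short_hash=${packages_hash:0:8}
echo "$short_hash"
```

Because of the `sort -u`, reordering packages or listing one twice yields the same cache hash, which is exactly what you want for cache hits.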
# get_rootfs_cache_list <cache_type> <packages_hash>
#
# returns a list of versions of all available caches, from remote and local.
get_rootfs_cache_list() {
local cache_type=$1
local packages_hash=$2
{
curl --silent --fail -L "https://api.github.com/repos/armbian/cache/releases?per_page=3" | jq -r '.[].tag_name' ||
curl --silent --fail -L https://cache.armbian.com/rootfs/list
find ${SRC}/cache/rootfs/ -mtime -7 -name "${ARCH}-${RELEASE}-${cache_type}-${packages_hash}-*.tar.zst" |
sed -e 's#^.*/##' |
sed -e 's#\..*$##' |
awk -F'-' '{print $5}'
} | sort | uniq
}


@@ -4,28 +4,35 @@ customize_image() {
# for users that need to prepare files at host
[[ -f $USERPATCHES_PATH/customize-image-host.sh ]] && source "$USERPATCHES_PATH"/customize-image-host.sh
call_extension_method "pre_customize_image" "image_tweaks_pre_customize" << 'PRE_CUSTOMIZE_IMAGE'
call_extension_method "pre_customize_image" "image_tweaks_pre_customize" <<- 'PRE_CUSTOMIZE_IMAGE'
*run before customize-image.sh*
This hook is called after `customize-image-host.sh` is called, but before the overlay is mounted.
It thus can be used for the same purposes as `customize-image-host.sh`.
PRE_CUSTOMIZE_IMAGE
cp "$USERPATCHES_PATH"/customize-image.sh "${SDCARD}"/tmp/customize-image.sh
chmod +x "${SDCARD}"/tmp/customize-image.sh
mkdir -p "${SDCARD}"/tmp/overlay
# util-linux >= 2.27 required
mount -o bind,ro "$USERPATCHES_PATH"/overlay "${SDCARD}"/tmp/overlay
[[ -d "${USERPATCHES_PATH}"/overlay ]] && mount -o bind,ro "${USERPATCHES_PATH}"/overlay "${SDCARD}"/tmp/overlay
display_alert "Calling image customization script" "customize-image.sh" "info"
chroot "${SDCARD}" /bin/bash -c "/tmp/customize-image.sh $RELEASE $LINUXFAMILY $BOARD $BUILD_DESKTOP $ARCH"
set +e # disable error control
chroot_sdcard /tmp/customize-image.sh "${RELEASE}" "$LINUXFAMILY" "$BOARD" "$BUILD_DESKTOP" "$ARCH"
CUSTOMIZE_IMAGE_RC=$?
umount -i "${SDCARD}"/tmp/overlay > /dev/null 2>&1
set -e # back to normal error control
mountpoint -q "${SDCARD}"/tmp/overlay && umount "${SDCARD}"/tmp/overlay
mountpoint -q "${SDCARD}"/tmp/overlay || rm -r "${SDCARD}"/tmp/overlay
if [[ $CUSTOMIZE_IMAGE_RC != 0 ]]; then
exit_with_error "customize-image.sh exited with error (rc: $CUSTOMIZE_IMAGE_RC)"
fi
call_extension_method "post_customize_image" "image_tweaks_post_customize" << 'POST_CUSTOMIZE_IMAGE'
call_extension_method "post_customize_image" "image_tweaks_post_customize" <<- 'POST_CUSTOMIZE_IMAGE'
*post customize-image.sh hook*
Run after the customize-image.sh script is run, and the overlay is unmounted.
POST_CUSTOMIZE_IMAGE
return 0
}
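The hook documentation heredocs above change from `<<` to `<<-`. The difference: with `<<-`, bash strips leading TAB characters from every body line and from the closing delimiter, so the heredoc can be indented along with the surrounding code. A minimal demonstration (the body and delimiter lines below are tab-indented):

```shell
#!/usr/bin/env bash
# <<- strips leading tabs from heredoc body lines and the delimiter line.
doc=$(cat <<- 'EOF'
	tab-indented line
	EOF
)
echo "$doc" # → tab-indented line
```

Note `<<-` strips tabs only, not spaces, which is why the build system's heredocs must be tab-indented for this to work.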


@@ -1,16 +1,16 @@
#!/usr/bin/env bash
install_common() {
display_alert "Applying common tweaks" "" "info"
function install_distribution_agnostic() {
display_alert "Installing distro-agnostic part of rootfs" "install_distribution_agnostic" "debug"
# install rootfs encryption related packages separate to not break packages cache
# @TODO: terrible, this does not use apt-cacher, extract to extension and fix
if [[ $CRYPTROOT_ENABLE == yes ]]; then
display_alert "Installing rootfs encryption related packages" "cryptsetup" "info"
chroot "${SDCARD}" /bin/bash -c "apt-get -y -qq --no-install-recommends install cryptsetup" \
>> "${DEST}"/${LOG_SUBPATH}/install.log 2>&1
chroot_sdcard_apt_get_install cryptsetup
if [[ $CRYPTROOT_SSH_UNLOCK == yes ]]; then
display_alert "Installing rootfs encryption related packages" "dropbear-initramfs" "info"
chroot "${SDCARD}" /bin/bash -c "apt-get -y -qq --no-install-recommends install dropbear-initramfs cryptsetup-initramfs" \
>> "${DEST}"/${LOG_SUBPATH}/install.log 2>&1
chroot_sdcard_apt_get_install dropbear-initramfs cryptsetup-initramfs
fi
fi
@@ -20,6 +20,7 @@ install_common() {
	# required for initramfs-tools-core on Stretch since it ignores the / fstab entry
	echo "/dev/mmcblk0p2 /usr $ROOTFS_TYPE defaults 0 2" >> "${SDCARD}"/etc/fstab

	# @TODO: refactor this into cryptroot extension
	# adjust initramfs dropbear configuration
	# needs to be done before kernel installation, else it won't be in the initrd image
	if [[ $CRYPTROOT_ENABLE == yes && $CRYPTROOT_SSH_UNLOCK == yes ]]; then
@@ -37,7 +38,7 @@ install_common() {
		# this key should be changed by the user on first login
		display_alert "Generating a new SSH key pair for dropbear (initramfs)" "" ""
		ssh-keygen -t ecdsa -f "${SDCARD}"/etc/dropbear-initramfs/id_ecdsa \
			-N '' -O force-command=cryptroot-unlock -C 'AUTOGENERATED_BY_ARMBIAN_BUILD' 2>&1

		# /usr/share/initramfs-tools/hooks/dropbear will automatically add 'id_ecdsa.pub' to authorized_keys file
		# during mkinitramfs of update-initramfs
@@ -99,24 +100,26 @@ install_common() {
	# add the /dev/urandom path to the rng config file
	echo "HRNGDEVICE=/dev/urandom" >> "${SDCARD}"/etc/default/rng-tools

	# @TODO: security problem?
	# ping needs privileged action to be able to create raw network socket
	# this is working properly but not with (at least) Debian Buster
	chroot "${SDCARD}" /bin/bash -c "chmod u+s /bin/ping" 2>&1

	# change time zone data
	echo "${TZDATA}" > "${SDCARD}"/etc/timezone
	# @TODO: a more generic logging helper needed
	chroot "${SDCARD}" /bin/bash -c "dpkg-reconfigure -f noninteractive tzdata" 2>&1

	# set root password
	chroot "${SDCARD}" /bin/bash -c "(echo $ROOTPWD;echo $ROOTPWD;) | passwd root >/dev/null 2>&1"

	if [[ $CONSOLE_AUTOLOGIN == yes ]]; then
		# enable automated login to console(s)
		mkdir -p "${SDCARD}"/etc/systemd/system/getty@.service.d/
		mkdir -p "${SDCARD}"/etc/systemd/system/serial-getty@.service.d/
		# @TODO: check why there was a sleep 10s in ExecStartPre
		cat <<- EOF > "${SDCARD}"/etc/systemd/system/serial-getty@.service.d/override.conf
			[Service]
			ExecStart=
			ExecStart=-/sbin/agetty --noissue --autologin root %I \$TERM
			Type=idle
@@ -139,7 +142,7 @@ install_common() {
	# root user is already there. Copy bashrc there as well
	cp "${SDCARD}"/etc/skel/.bashrc "${SDCARD}"/root

	# display welcome message at first root login @TODO: what reads this?
	touch "${SDCARD}"/root/.not_logged_in_yet

	if [[ ${DESKTOP_AUTOLOGIN} == yes ]]; then
@@ -151,8 +154,9 @@ install_common() {
	local bootscript_src=${BOOTSCRIPT%%:*}
	local bootscript_dst=${BOOTSCRIPT##*:}
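`BOOTSCRIPT` is a `source:destination` pair, split by the two parameter expansions above. A standalone sketch (the value here is hypothetical):

```shell
BOOTSCRIPT="boot-custom.cmd:boot.cmd"   # hypothetical "src:dst" value
bootscript_src=${BOOTSCRIPT%%:*}        # strip longest ':*' suffix -> "boot-custom.cmd"
bootscript_dst=${BOOTSCRIPT##*:}        # strip longest '*:' prefix -> "boot.cmd"
echo "${bootscript_src} -> ${bootscript_dst}"   # boot-custom.cmd -> boot.cmd
```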
	# create extlinux config file @TODO: refactor into extensions u-boot, extlinux
	if [[ $SRC_EXTLINUX == yes ]]; then
		display_alert "Using extlinux, SRC_EXTLINUX: ${SRC_EXTLINUX}" "image will be incompatible with nand-sata-install" "warn"
		mkdir -p $SDCARD/boot/extlinux
		local bootpart_prefix
		if [[ -n $BOOTFS_TYPE ]]; then
@@ -174,19 +178,19 @@ install_common() {
		fi
	else
		if [[ -n "${BOOTSCRIPT}" ]]; then # @TODO: this used to check BOOTCONFIG not being 'none'
			if [ -f "${USERPATCHES_PATH}/bootscripts/${bootscript_src}" ]; then
				run_host_command_logged cp -pv "${USERPATCHES_PATH}/bootscripts/${bootscript_src}" "${SDCARD}/boot/${bootscript_dst}"
			else
				run_host_command_logged cp -pv "${SRC}/config/bootscripts/${bootscript_src}" "${SDCARD}/boot/${bootscript_dst}"
			fi
		fi

		if [[ -n $BOOTENV_FILE ]]; then
			if [[ -f $USERPATCHES_PATH/bootenv/$BOOTENV_FILE ]]; then
				run_host_command_logged cp -pv "$USERPATCHES_PATH/bootenv/${BOOTENV_FILE}" "${SDCARD}"/boot/armbianEnv.txt
			elif [[ -f $SRC/config/bootenv/$BOOTENV_FILE ]]; then
				run_host_command_logged cp -pv "${SRC}/config/bootenv/${BOOTENV_FILE}" "${SDCARD}"/boot/armbianEnv.txt
			fi
		fi
@@ -195,9 +199,9 @@ install_common() {
	if [[ $ROOTFS_TYPE == nfs ]]; then
		display_alert "Copying NFS boot script template"
		if [[ -f $USERPATCHES_PATH/nfs-boot.cmd ]]; then
			run_host_command_logged cp -pv "$USERPATCHES_PATH"/nfs-boot.cmd "${SDCARD}"/boot/boot.cmd
		else
			run_host_command_logged cp -pv "${SRC}"/config/templates/nfs-boot.cmd.template "${SDCARD}"/boot/boot.cmd
		fi
	fi
@@ -228,48 +232,61 @@ install_common() {
		ff02::2 ip6-allrouters
	EOF

	cd "${SRC}" || exit_with_error "cray-cray about ${SRC}"

	# LOGGING: we're running under the logger framework here.
	# LOGGING: so we just log directly to stdout and let it handle it.
	# LOGGING: redirect commands' stderr to stdout so it goes into the log, not screen.

	display_alert "Temporarily disabling" "initramfs-tools hook for kernel"
	chroot_sdcard chmod -v -x /etc/kernel/postinst.d/initramfs-tools

	display_alert "Cleaning" "package lists"
	APT_OPTS="y" chroot_sdcard_apt_get clean

	display_alert "Updating" "apt package lists"
	APT_OPTS="y" chroot_sdcard_apt_get update
	# install family packages
	if [[ -n ${PACKAGE_LIST_FAMILY} ]]; then
		_pkg_list=${PACKAGE_LIST_FAMILY}
		display_alert "Installing PACKAGE_LIST_FAMILY packages" "${_pkg_list}"
		# shellcheck disable=SC2086 # we need to expand here.
		chroot_sdcard_apt_get_install $_pkg_list
	fi

	# install board packages
	if [[ -n ${PACKAGE_LIST_BOARD} ]]; then
		_pkg_list=${PACKAGE_LIST_BOARD}
		display_alert "Installing PACKAGE_LIST_BOARD packages" "${_pkg_list}"
		# shellcheck disable=SC2086 # we need to expand here. retry 3 times download-only to counter apt-cacher-ng failures.
		do_with_retries 3 chroot_sdcard_apt_get_install_download_only ${_pkg_list}
		# shellcheck disable=SC2086 # we need to expand.
		chroot_sdcard_apt_get_install ${_pkg_list}
	fi
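`do_with_retries` is defined elsewhere in the build system; its implementation is not part of this diff. A minimal hypothetical stand-in with the same calling convention (attempt count first, then the command) might look like:

```shell
# Hypothetical minimal retry helper: run "$@" up to $1 times, pausing between
# failed attempts; succeed on the first zero exit code, fail if all attempts fail.
do_with_retries() {
	local retries="${1}"
	shift
	local attempt
	for ((attempt = 1; attempt <= retries; attempt++)); do
		"$@" && return 0
		[[ ${attempt} -lt ${retries} ]] && sleep 1
	done
	return 1
}
```

Used as above, `do_with_retries 3 chroot_sdcard_apt_get_install_download_only ${_pkg_list}` retries the download-only step to ride out transient apt-cacher-ng failures before the real install runs from the local cache.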
	# remove family packages
	if [[ -n ${PACKAGE_LIST_FAMILY_REMOVE} ]]; then
		_pkg_list=${PACKAGE_LIST_FAMILY_REMOVE}
		display_alert "Removing PACKAGE_LIST_FAMILY_REMOVE packages" "${_pkg_list}"
		chroot_sdcard_apt_get remove --auto-remove ${_pkg_list}
	fi

	# remove board packages. loop over the list to remove, check if they're actually installed, then remove individually.
	if [[ -n ${PACKAGE_LIST_BOARD_REMOVE} ]]; then
		_pkg_list=${PACKAGE_LIST_BOARD_REMOVE}
		declare -a currently_installed_packages
		# shellcheck disable=SC2207 # I wanna split, thanks.
		currently_installed_packages=($(chroot_sdcard_with_stdout dpkg-query --show --showformat='${Package} '))
		for PKG_REMOVE in ${_pkg_list}; do
			# shellcheck disable=SC2076 # I wanna match literally, thanks.
			if [[ " ${currently_installed_packages[*]} " =~ " ${PKG_REMOVE} " ]]; then
				display_alert "Removing PACKAGE_LIST_BOARD_REMOVE package" "${PKG_REMOVE}"
				chroot_sdcard_apt_get remove --auto-remove "${PKG_REMOVE}"
			fi
		done
		unset currently_installed_packages
	fi
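The whole-word membership test used for board package removal relies on padding both the joined array and the candidate with spaces, so a short name cannot match inside a longer one. Standalone sketch (the package list is made up for the demo; in the real code it comes from `dpkg-query`):

```shell
# Hypothetical installed-package list standing in for the dpkg-query output.
declare -a currently_installed_packages=("nano" "network-manager" "wpasupplicant")

is_installed() {
	local pkg="${1}"
	# shellcheck disable=SC2076 # quoted right-hand side of =~ matches literally
	[[ " ${currently_installed_packages[*]} " =~ " ${pkg} " ]]
}

is_installed "nano" && echo "nano: installed"   # whole word: matches
is_installed "net" || echo "net: not installed" # substring of network-manager: no match
```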
	# install u-boot
@@ -280,14 +297,17 @@ install_common() {
		install_deb_chroot "${DEB_STORAGE}/${CHOSEN_UBOOT}_${REVISION}_${ARCH}.deb"
	else
		install_deb_chroot "linux-u-boot-${BOARD}-${BRANCH}" "remote" "yes"
		UBOOT_REPO_VERSION=$(dpkg-deb -f "${SDCARD}"/var/cache/apt/archives/linux-u-boot-${BOARD}-${BRANCH}*_${ARCH}.deb Version)
	fi
	}

	call_extension_method "pre_install_kernel_debs" <<- 'PRE_INSTALL_KERNEL_DEBS'
		*called before installing the Armbian-built kernel deb packages*
		It is not too late to `unset KERNELSOURCE` here and avoid kernel install.
	PRE_INSTALL_KERNEL_DEBS

	# default VER, will be parsed from Kernel version in the installed deb package.
	VER="linux"

	# install kernel
	[[ -n $KERNELSOURCE ]] && {
@@ -309,23 +329,22 @@ PRE_INSTALL_KERNEL_DEBS
		VER=$(dpkg-deb -f "${SDCARD}"/var/cache/apt/archives/linux-image-${BRANCH}-${LINUXFAMILY}*_${ARCH}.deb Source)
		VER="${VER/-$LINUXFAMILY/}"
		VER="${VER/linux-/}"
		if [[ "${ARCH}" != "amd64" && "${LINUXFAMILY}" != "media" && "${LINUXFAMILY}" != station* ]]; then # amd64 does not have dtb package, see packages/armbian/builddeb:355
			install_deb_chroot "linux-dtb-${BRANCH}-${LINUXFAMILY}" "remote"
		fi
		[[ $INSTALL_HEADERS == yes ]] && install_deb_chroot "linux-headers-${BRANCH}-${LINUXFAMILY}" "remote"
	fi
	}
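The two `${VER/...}` substitutions turn the deb's `Source` field into a bare kernel version. With a hypothetical field value:

```shell
LINUXFAMILY="sunxi"
VER="linux-5.15.80-sunxi"     # hypothetical dpkg-deb Source field value
VER="${VER/-$LINUXFAMILY/}"   # drop the family suffix -> "linux-5.15.80"
VER="${VER/linux-/}"          # drop the "linux-" prefix -> "5.15.80"
echo "${VER}"   # 5.15.80
```

Each `${var/pattern/}` form replaces only the first match, which is enough here since prefix and suffix each occur once.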
	call_extension_method "post_install_kernel_debs" <<- 'POST_INSTALL_KERNEL_DEBS'
		*allow config to do more with the installed kernel/headers*
		Called after packages, u-boot, kernel and headers installed in the chroot, but before the BSP is installed.
		If `KERNELSOURCE` is (still?) unset after this, Armbian-built firmware will not be installed.
	POST_INSTALL_KERNEL_DEBS

	# install board support packages
	if [[ "${REPOSITORY_INSTALL}" != *bsp* ]]; then
		install_deb_chroot "${DEB_STORAGE}/${BSP_CLI_PACKAGE_FULLNAME}.deb"
	else
		install_deb_chroot "${CHOSEN_ROOTFS}" "remote"
	fi
@@ -399,14 +418,14 @@ POST_INSTALL_KERNEL_DEBS
	# install wireguard tools
	if [[ $WIREGUARD == yes ]]; then
		install_deb_chroot "wireguard-tools" "remote"
	fi

	# freeze armbian packages
	if [[ $BSPFREEZE == yes ]]; then
		display_alert "Freezing Armbian packages" "$BOARD" "info"
		chroot "${SDCARD}" /bin/bash -c "apt-mark hold ${CHOSEN_KERNEL} ${CHOSEN_KERNEL/image/headers} \
			linux-u-boot-${BOARD}-${BRANCH} ${CHOSEN_KERNEL/image/dtb}" 2>&1
	fi

	# remove deb files
@@ -416,23 +435,28 @@ POST_INSTALL_KERNEL_DEBS
	cp "${SRC}"/packages/blobs/splash/armbian-u-boot.bmp "${SDCARD}"/boot/boot.bmp

	# execute $LINUXFAMILY-specific tweaks
	if [[ $(type -t family_tweaks) == function ]]; then
		display_alert "Running family_tweaks" "$BOARD :: $LINUXFAMILY" "debug"
		family_tweaks
		display_alert "Done with family_tweaks" "$BOARD :: $LINUXFAMILY" "debug"
	fi

	call_extension_method "post_family_tweaks" <<- 'FAMILY_TWEAKS'
		*customize the tweaks made by $LINUXFAMILY-specific family_tweaks*
		It is run after packages are installed in the rootfs, but before enabling additional services.
		It allows implementors access to the rootfs (`${SDCARD}`) in its pristine state after packages are installed.
	FAMILY_TWEAKS

	# enable additional services, if they exist.
	display_alert "Enabling Armbian services" "systemd" "info"
	[[ -f "${SDCARD}"/lib/systemd/system/armbian-firstrun.service ]] && chroot_sdcard systemctl --no-reload enable armbian-firstrun.service
	[[ -f "${SDCARD}"/lib/systemd/system/armbian-firstrun-config.service ]] && chroot_sdcard systemctl --no-reload enable armbian-firstrun-config.service
	[[ -f "${SDCARD}"/lib/systemd/system/armbian-zram-config.service ]] && chroot_sdcard systemctl --no-reload enable armbian-zram-config.service
	[[ -f "${SDCARD}"/lib/systemd/system/armbian-hardware-optimize.service ]] && chroot_sdcard systemctl --no-reload enable armbian-hardware-optimize.service
	[[ -f "${SDCARD}"/lib/systemd/system/armbian-ramlog.service ]] && chroot_sdcard systemctl --no-reload enable armbian-ramlog.service
	[[ -f "${SDCARD}"/lib/systemd/system/armbian-resize-filesystem.service ]] && chroot_sdcard systemctl --no-reload enable armbian-resize-filesystem.service
	[[ -f "${SDCARD}"/lib/systemd/system/armbian-hardware-monitor.service ]] && chroot_sdcard systemctl --no-reload enable armbian-hardware-monitor.service
	[[ -f "${SDCARD}"/lib/systemd/system/armbian-led-state.service ]] && chroot_sdcard systemctl --no-reload enable armbian-led-state.service

	# copy "first run automated config, optional user configured"
	cp "${SRC}"/packages/bsp/armbian_first_run.txt.template "${SDCARD}"/boot/armbian_first_run.txt.template
@@ -440,15 +464,7 @@ FAMILY_TWEAKS
	# switch to beta repository at this stage if building nightly images
	[[ $IMAGE_TYPE == nightly ]] && sed -i 's/apt/beta/' "${SDCARD}"/etc/apt/sources.list.d/armbian.list

	# fix for https://bugs.launchpad.net/ubuntu/+source/blueman/+bug/1542723 @TODO: from ubuntu 15. maybe gone?
	chroot "${SDCARD}" /bin/bash -c "chown root:messagebus /usr/lib/dbus-1.0/dbus-daemon-launch-helper"
	chroot "${SDCARD}" /bin/bash -c "chmod u+s /usr/lib/dbus-1.0/dbus-daemon-launch-helper"
@@ -488,9 +504,8 @@ FAMILY_TWEAKS
			sed -i "s/--keep-baud 115200/--keep-baud ${array[1]},115200/" \
				"${SDCARD}/lib/systemd/system/serial-getty@${array[0]}.service"
		fi
		chroot_sdcard systemctl daemon-reload
		chroot_sdcard systemctl --no-reload enable "serial-getty@${array[0]}.service"
		if [[ "${array[0]}" == "ttyGS0" && $LINUXFAMILY == sun8i && $BRANCH == default ]]; then
			mkdir -p "${SDCARD}"/etc/systemd/system/serial-getty@ttyGS0.service.d
			cat <<- EOF > "${SDCARD}"/etc/systemd/system/serial-getty@ttyGS0.service.d/10-switch-role.conf
@@ -529,7 +544,7 @@ FAMILY_TWEAKS
	# configure network manager
	sed "s/managed=\(.*\)/managed=true/g" -i "${SDCARD}"/etc/NetworkManager/NetworkManager.conf

	## remove network manager defaults to handle eth by default @TODO: why?
	rm -f "${SDCARD}"/usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf

	# `systemd-networkd.service` will be enabled by `/lib/systemd/system-preset/90-systemd.preset` during first-run.
@@ -537,11 +552,12 @@ FAMILY_TWEAKS
	chroot "${SDCARD}" /bin/bash -c "systemctl mask systemd-networkd.service" >> "${DEST}"/${LOG_SUBPATH}/install.log 2>&1

	# most likely we don't need to wait for nm to get online
	chroot_sdcard systemctl disable NetworkManager-wait-online.service

	# Just regular DNS and maintain /etc/resolv.conf as a file @TODO: this does not apply as of impish at least
	sed "/dns/d" -i "${SDCARD}"/etc/NetworkManager/NetworkManager.conf
	sed "s/\[main\]/\[main\]\ndns=default\nrc-manager=file/g" -i "${SDCARD}"/etc/NetworkManager/NetworkManager.conf

	if [[ -n $NM_IGNORE_DEVICES ]]; then
		mkdir -p "${SDCARD}"/etc/NetworkManager/conf.d/
		cat <<- EOF > "${SDCARD}"/etc/NetworkManager/conf.d/10-ignore-interfaces.conf
@@ -556,13 +572,13 @@ FAMILY_TWEAKS
	ln -s /run/systemd/resolve/resolv.conf "${SDCARD}"/etc/resolv.conf

	# enable services
	chroot_sdcard systemctl enable systemd-networkd.service systemd-resolved.service

	# Mask `NetworkManager.service` to avoid conflict
	chroot "${SDCARD}" /bin/bash -c "systemctl mask NetworkManager.service" >> "${DEST}"/${LOG_SUBPATH}/install.log 2>&1

	if [ -e /etc/systemd/timesyncd.conf ]; then
		chroot_sdcard systemctl enable systemd-timesyncd.service
	fi

	umask 022
	cat > "${SDCARD}"/etc/systemd/network/eth0.network <<- __EOF__
@@ -609,10 +625,10 @@ FAMILY_TWEAKS
	# disable MOTD for first boot - we want as clean 1st run as possible
	chmod -x "${SDCARD}"/etc/update-motd.d/*

	return 0 # make sure to exit with success
}

install_rclocal() {
	cat <<- EOF > "${SDCARD}"/etc/rc.local
		#!/bin/sh -e
		#
@@ -630,5 +646,4 @@ install_rclocal() {
		exit 0
	EOF
	chmod +x "${SDCARD}"/etc/rc.local
}


@@ -1,6 +1,5 @@
#!/usr/bin/env bash

install_distribution_specific() {
	display_alert "Applying distribution specific tweaks for" "$RELEASE" "info"

	# disable broken service
@@ -18,7 +17,7 @@ install_distribution_specific() {
			# by using default lz4 initrd compression leads to corruption, go back to proven method
			sed -i "s/^COMPRESS=.*/COMPRESS=gzip/" "${SDCARD}"/etc/initramfs-tools/initramfs.conf

			run_host_command_logged rm -fv "${SDCARD}"/etc/update-motd.d/{10-uname,10-help-text,50-motd-news,80-esm,80-livepatch,90-updates-available,91-release-upgrade,95-hwe-eol}

			if [ -d "${SDCARD}"/etc/NetworkManager ]; then
				local RENDERER=NetworkManager
@@ -38,36 +37,23 @@ install_distribution_specific() {
			sed -i "s/#RateLimitBurst=.*/RateLimitBurst=10000/g" "${SDCARD}"/etc/systemd/journald.conf

			# Chrony temporal fix https://bugs.launchpad.net/ubuntu/+source/chrony/+bug/1878005
			[[ -f "${SDCARD}"/etc/default/chrony ]] && sed -i '/DAEMON_OPTS=/s/"-F -1"/"-F 0"/' "${SDCARD}"/etc/default/chrony

			# disable conflicting services
			chroot "${SDCARD}" /bin/bash -c "systemctl --no-reload mask ondemand.service >/dev/null 2>&1"
			;;
	esac

	# Basic Netplan config. Let NetworkManager/networkd manage all devices on this system
	[[ -d "${SDCARD}"/etc/netplan ]] && cat <<- EOF > "${SDCARD}"/etc/netplan/armbian-default.yaml
		network:
		  version: 2
		  renderer: $RENDERER
	EOF

	# cleanup motd services and related files
	chroot_sdcard systemctl disable motd-news.service
	chroot_sdcard systemctl disable motd-news.timer

	# remove motd news from motd.ubuntu.com
	[[ -f "${SDCARD}"/etc/default/motd-news ]] && sed -i "s/^ENABLED=.*/ENABLED=0/" "${SDCARD}"/etc/default/motd-news


@@ -1,16 +1,21 @@
#!/usr/bin/env bash

function post_debootstrap_tweaks() {
	display_alert "Applying post-tweaks" "post_debootstrap_tweaks" "debug"

	# remove service start blockers
	run_host_command_logged rm -fv "${SDCARD}"/sbin/initctl "${SDCARD}"/sbin/start-stop-daemon
	chroot_sdcard dpkg-divert --quiet --local --rename --remove /sbin/initctl
	chroot_sdcard dpkg-divert --quiet --local --rename --remove /sbin/start-stop-daemon
	run_host_command_logged rm -fv "${SDCARD}"/usr/sbin/policy-rc.d

	# remove the qemu static binary
	undeploy_qemu_binary_from_chroot "${SDCARD}"

	call_extension_method "post_post_debootstrap_tweaks" "config_post_debootstrap_tweaks" <<- 'POST_POST_DEBOOTSTRAP_TWEAKS'
		*run after removing diversions and qemu with chroot unmounted*
		Last chance to touch the `${SDCARD}` filesystem before it is copied to the final media.
		It is too late to run any chrooted commands, since the supporting filesystems are already unmounted.
	POST_POST_DEBOOTSTRAP_TWEAKS
}
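One detail of the hunk above: the extension-method docs moved from `<<` to `<<-`, which strips leading TAB characters from the heredoc body and terminator, allowing the documentation text to be indented inside the function. A minimal standalone sketch (generating the script with `printf` so the tab is explicit, not part of armbian itself):

```shell
# Demonstrate <<- tab-stripping: the heredoc body and its EOF terminator
# are written with a leading tab, yet the output is emitted flush-left.
demo="$(mktemp)"
printf 'cat <<- EOF\n\thello from an indented heredoc\nEOF\n' > "${demo}"
bash "${demo}" # prints the line without the leading tab
rm -f "${demo}"
```

Note that `<<-` strips tabs only, not spaces, which is why the armbian codebase indents heredoc bodies with tabs.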

@@ -0,0 +1,28 @@
function deploy_qemu_binary_to_chroot() {
	local chroot_target="${1}"
	# @TODO: rpardini: Only deploy the binary if we're actually building a different architecture? otherwise unneeded.
	if [[ ! -f "${chroot_target}/usr/bin/${QEMU_BINARY}" ]]; then
		display_alert "Deploying qemu-user-static binary to chroot" "${QEMU_BINARY}" "debug"
		run_host_command_logged cp -pv "/usr/bin/${QEMU_BINARY}" "${chroot_target}/usr/bin/"
	else
		display_alert "qemu-user-static binary already deployed, skipping" "${QEMU_BINARY}" "debug"
	fi
}

function undeploy_qemu_binary_from_chroot() {
	local chroot_target="${1}"
	# Hack: check for the magic "/usr/bin/qemu-s390x-static" marker; if it exists, the "qemu-user-static" package
	# is installed in the chroot, and we shouldn't remove the binary, otherwise it would be missing in the final image.
	if [[ -f "${chroot_target}/usr/bin/qemu-s390x-static" ]]; then
		display_alert "Not removing qemu binary, qemu-user-static package is installed in the chroot" "${QEMU_BINARY}" "debug"
		return 0
	fi
	if [[ -f "${chroot_target}/usr/bin/${QEMU_BINARY}" ]]; then
		display_alert "Removing qemu-user-static binary from chroot" "${QEMU_BINARY}" "debug"
		run_host_command_logged rm -fv "${chroot_target}/usr/bin/${QEMU_BINARY}"
	fi
}
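The marker guard in `undeploy_qemu_binary_from_chroot` can be exercised standalone. A minimal sketch with stubbed helpers (the `QEMU_BINARY` value and the stubs are assumptions for illustration, not armbian's real definitions):

```shell
QEMU_BINARY="qemu-aarch64-static"         # hypothetical; set by the build config in armbian
display_alert() { echo "[$3] $1 ($2)"; }  # stub for the sketch
run_host_command_logged() { "$@"; }       # stub: just run the command

undeploy_qemu_binary_from_chroot() {
	local chroot_target="${1}"
	# package marker present -> the binary belongs to qemu-user-static, keep it
	if [[ -f "${chroot_target}/usr/bin/qemu-s390x-static" ]]; then
		return 0
	fi
	if [[ -f "${chroot_target}/usr/bin/${QEMU_BINARY}" ]]; then
		run_host_command_logged rm -f "${chroot_target}/usr/bin/${QEMU_BINARY}"
	fi
}

# fake chroot with a deployed emulator but no qemu-user-static package:
fake_chroot="$(mktemp -d)"
mkdir -p "${fake_chroot}/usr/bin"
touch "${fake_chroot}/usr/bin/${QEMU_BINARY}"
undeploy_qemu_binary_from_chroot "${fake_chroot}"
[[ -f "${fake_chroot}/usr/bin/${QEMU_BINARY}" ]] || echo "emulator removed"
rm -rf "${fake_chroot}"
```

If `usr/bin/qemu-s390x-static` also existed in the fake chroot, the guard would return early and the emulator binary would survive into the image, which is the intended behavior when the chroot itself installed `qemu-user-static`.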

@@ -1,21 +1,23 @@
#!/usr/bin/env bash

# a-kind-of-hook, called by install_distribution_agnostic() if it's a desktop build
desktop_postinstall() {
	# disable display manager for the first run
	chroot_sdcard "systemctl --no-reload disable lightdm.service"
	chroot_sdcard "systemctl --no-reload disable gdm3.service"

	# update packages index
	chroot_sdcard_apt_get "update"

	# install per-board packages
	if [[ -n ${PACKAGE_LIST_DESKTOP_BOARD} ]]; then
		chroot_sdcard_apt_get_install "$PACKAGE_LIST_DESKTOP_BOARD"
	fi

	# install per-family packages
	if [[ -n ${PACKAGE_LIST_DESKTOP_FAMILY} ]]; then
		chroot_sdcard_apt_get_install "$PACKAGE_LIST_DESKTOP_FAMILY"
	fi
}
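The hunk above replaces raw `run_on_sdcard "DEBIAN_FRONTEND=noninteractive apt-get -yqq --no-install-recommends install ..."` invocations with `chroot_sdcard_apt_get_install` wrappers. A hypothetical illustration (not armbian's actual implementation) of what such a wrapper factors out of every call site, with `echo` standing in for the real chroot invocation so the sketch runs anywhere:

```shell
# Sketch: the wrapper centralizes the noninteractive frontend and the
# repeated apt-get flags; call sites pass only the package list.
chroot_sdcard_apt_get_install_sketch() {
	echo DEBIAN_FRONTEND=noninteractive apt-get -yqq --no-install-recommends install "$@"
}

chroot_sdcard_apt_get_install_sketch mesa-utils xfce4-terminal
```

Centralizing the flags also gives one place to hook logging and error handling, which is the point of the `chroot_sdcard_*` family in this refactor.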

@@ -1,6 +0,0 @@
#!/usr/bin/env bash
while read -r file; do
	# shellcheck source=/dev/null
	source "$file"
done <<< "$(find "${SRC}/lib/functions" -name "*.sh")"
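This removed runtime `find`-and-source loop is superseded by the generated `lib/library-functions.sh` below. A hypothetical sketch of what a `gen-library.sh`-style generator does (the real script is not shown in this diff): walk the function tree once at generation time and emit a flat list of `source` statements, each preceded by a safety preamble, so shellcheck can follow the sources and nothing is discovered at runtime.

```shell
# Sketch over a throwaway tree; emits one "### path" header plus a
# preamble and source statement per *.sh file found, in sorted order.
tree="$(mktemp -d)"
mkdir -p "${tree}/lib/functions/general"
touch "${tree}/lib/functions/general/git.sh"
find "${tree}/lib/functions" -name '*.sh' | sort | while read -r f; do
	rel="${f#"${tree}/"}"
	# single-quoted format keeps ${SRC} literal in the generated output
	printf '### %s\nset -o errexit\nsource "${SRC}"/%s\n' "${rel}" "${rel}"
done
rm -rf "${tree}"
```

Sorting the file list makes the generated library deterministic, so regenerating it produces a stable diff.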

lib/library-functions.sh (new file, 531 lines)

@@ -0,0 +1,531 @@
#!/usr/bin/env bash
# This file is/was autogenerated by lib/tools/gen-library.sh; don't modify manually
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/bsp/bsp-cli.sh
# shellcheck source=lib/functions/bsp/bsp-cli.sh
source "${SRC}"/lib/functions/bsp/bsp-cli.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/bsp/bsp-desktop.sh
# shellcheck source=lib/functions/bsp/bsp-desktop.sh
source "${SRC}"/lib/functions/bsp/bsp-desktop.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/bsp/utils-bsp.sh
# shellcheck source=lib/functions/bsp/utils-bsp.sh
source "${SRC}"/lib/functions/bsp/utils-bsp.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/cli/cli-entrypoint.sh
# shellcheck source=lib/functions/cli/cli-entrypoint.sh
source "${SRC}"/lib/functions/cli/cli-entrypoint.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/cli/utils-cli.sh
# shellcheck source=lib/functions/cli/utils-cli.sh
source "${SRC}"/lib/functions/cli/utils-cli.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/atf.sh
# shellcheck source=lib/functions/compilation/atf.sh
source "${SRC}"/lib/functions/compilation/atf.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/debs.sh
# shellcheck source=lib/functions/compilation/debs.sh
source "${SRC}"/lib/functions/compilation/debs.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/kernel-debs.sh
# shellcheck source=lib/functions/compilation/kernel-debs.sh
source "${SRC}"/lib/functions/compilation/kernel-debs.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/kernel.sh
# shellcheck source=lib/functions/compilation/kernel.sh
source "${SRC}"/lib/functions/compilation/kernel.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/patch/drivers_network.sh
# shellcheck source=lib/functions/compilation/patch/drivers_network.sh
source "${SRC}"/lib/functions/compilation/patch/drivers_network.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/patch/fasthash.sh
# shellcheck source=lib/functions/compilation/patch/fasthash.sh
source "${SRC}"/lib/functions/compilation/patch/fasthash.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/patch/kernel-bootsplash.sh
# shellcheck source=lib/functions/compilation/patch/kernel-bootsplash.sh
source "${SRC}"/lib/functions/compilation/patch/kernel-bootsplash.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/patch/kernel-drivers.sh
# shellcheck source=lib/functions/compilation/patch/kernel-drivers.sh
source "${SRC}"/lib/functions/compilation/patch/kernel-drivers.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/patch/patching.sh
# shellcheck source=lib/functions/compilation/patch/patching.sh
source "${SRC}"/lib/functions/compilation/patch/patching.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/uboot.sh
# shellcheck source=lib/functions/compilation/uboot.sh
source "${SRC}"/lib/functions/compilation/uboot.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/compilation/utils-compilation.sh
# shellcheck source=lib/functions/compilation/utils-compilation.sh
source "${SRC}"/lib/functions/compilation/utils-compilation.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/configuration/aggregation.sh
# shellcheck source=lib/functions/configuration/aggregation.sh
source "${SRC}"/lib/functions/configuration/aggregation.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/configuration/config-desktop.sh
# shellcheck source=lib/functions/configuration/config-desktop.sh
source "${SRC}"/lib/functions/configuration/config-desktop.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/configuration/interactive.sh
# shellcheck source=lib/functions/configuration/interactive.sh
source "${SRC}"/lib/functions/configuration/interactive.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/configuration/main-config.sh
# shellcheck source=lib/functions/configuration/main-config.sh
source "${SRC}"/lib/functions/configuration/main-config.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/configuration/menu.sh
# shellcheck source=lib/functions/configuration/menu.sh
source "${SRC}"/lib/functions/configuration/menu.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/extras/buildpkg.sh
# shellcheck source=lib/functions/extras/buildpkg.sh
source "${SRC}"/lib/functions/extras/buildpkg.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/extras/fel.sh
# shellcheck source=lib/functions/extras/fel.sh
source "${SRC}"/lib/functions/extras/fel.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/general/chroot-helpers.sh
# shellcheck source=lib/functions/general/chroot-helpers.sh
source "${SRC}"/lib/functions/general/chroot-helpers.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/general/cleaning.sh
# shellcheck source=lib/functions/general/cleaning.sh
source "${SRC}"/lib/functions/general/cleaning.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/general/downloads.sh
# shellcheck source=lib/functions/general/downloads.sh
source "${SRC}"/lib/functions/general/downloads.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/general/git.sh
# shellcheck source=lib/functions/general/git.sh
source "${SRC}"/lib/functions/general/git.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/general/repo.sh
# shellcheck source=lib/functions/general/repo.sh
source "${SRC}"/lib/functions/general/repo.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/host/apt-cacher-ng.sh
# shellcheck source=lib/functions/host/apt-cacher-ng.sh
source "${SRC}"/lib/functions/host/apt-cacher-ng.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/host/basic-deps.sh
# shellcheck source=lib/functions/host/basic-deps.sh
source "${SRC}"/lib/functions/host/basic-deps.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/host/external-toolchains.sh
# shellcheck source=lib/functions/host/external-toolchains.sh
source "${SRC}"/lib/functions/host/external-toolchains.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/host/host-utils.sh
# shellcheck source=lib/functions/host/host-utils.sh
source "${SRC}"/lib/functions/host/host-utils.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/host/prepare-host.sh
# shellcheck source=lib/functions/host/prepare-host.sh
source "${SRC}"/lib/functions/host/prepare-host.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/image/compress-checksum.sh
# shellcheck source=lib/functions/image/compress-checksum.sh
source "${SRC}"/lib/functions/image/compress-checksum.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/image/fingerprint.sh
# shellcheck source=lib/functions/image/fingerprint.sh
source "${SRC}"/lib/functions/image/fingerprint.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/image/initrd.sh
# shellcheck source=lib/functions/image/initrd.sh
source "${SRC}"/lib/functions/image/initrd.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/image/loop.sh
# shellcheck source=lib/functions/image/loop.sh
source "${SRC}"/lib/functions/image/loop.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/image/partitioning.sh
# shellcheck source=lib/functions/image/partitioning.sh
source "${SRC}"/lib/functions/image/partitioning.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/image/rootfs-to-image.sh
# shellcheck source=lib/functions/image/rootfs-to-image.sh
source "${SRC}"/lib/functions/image/rootfs-to-image.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/image/write-device.sh
# shellcheck source=lib/functions/image/write-device.sh
source "${SRC}"/lib/functions/image/write-device.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/logging/capture.sh
# shellcheck source=lib/functions/logging/capture.sh
source "${SRC}"/lib/functions/logging/capture.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/logging/logging.sh
# shellcheck source=lib/functions/logging/logging.sh
source "${SRC}"/lib/functions/logging/logging.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/logging/runners.sh
# shellcheck source=lib/functions/logging/runners.sh
source "${SRC}"/lib/functions/logging/runners.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/logging/stacktraces.sh
# shellcheck source=lib/functions/logging/stacktraces.sh
source "${SRC}"/lib/functions/logging/stacktraces.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/logging/traps.sh
# shellcheck source=lib/functions/logging/traps.sh
source "${SRC}"/lib/functions/logging/traps.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/main/config-prepare.sh
# shellcheck source=lib/functions/main/config-prepare.sh
source "${SRC}"/lib/functions/main/config-prepare.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/main/default-build.sh
# shellcheck source=lib/functions/main/default-build.sh
source "${SRC}"/lib/functions/main/default-build.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/main/rootfs-image.sh
# shellcheck source=lib/functions/main/rootfs-image.sh
source "${SRC}"/lib/functions/main/rootfs-image.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/rootfs/apt-install.sh
# shellcheck source=lib/functions/rootfs/apt-install.sh
source "${SRC}"/lib/functions/rootfs/apt-install.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/rootfs/apt-sources.sh
# shellcheck source=lib/functions/rootfs/apt-sources.sh
source "${SRC}"/lib/functions/rootfs/apt-sources.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/rootfs/boot_logo.sh
# shellcheck source=lib/functions/rootfs/boot_logo.sh
source "${SRC}"/lib/functions/rootfs/boot_logo.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/rootfs/create-cache.sh
# shellcheck source=lib/functions/rootfs/create-cache.sh
source "${SRC}"/lib/functions/rootfs/create-cache.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/rootfs/customize.sh
# shellcheck source=lib/functions/rootfs/customize.sh
source "${SRC}"/lib/functions/rootfs/customize.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/rootfs/distro-agnostic.sh
# shellcheck source=lib/functions/rootfs/distro-agnostic.sh
source "${SRC}"/lib/functions/rootfs/distro-agnostic.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/rootfs/distro-specific.sh
# shellcheck source=lib/functions/rootfs/distro-specific.sh
source "${SRC}"/lib/functions/rootfs/distro-specific.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/rootfs/post-tweaks.sh
# shellcheck source=lib/functions/rootfs/post-tweaks.sh
source "${SRC}"/lib/functions/rootfs/post-tweaks.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/rootfs/qemu-static.sh
# shellcheck source=lib/functions/rootfs/qemu-static.sh
source "${SRC}"/lib/functions/rootfs/qemu-static.sh
# no errors tolerated. invoked before each sourced file to make sure.
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
### lib/functions/rootfs/rootfs-desktop.sh
# shellcheck source=lib/functions/rootfs/rootfs-desktop.sh
source "${SRC}"/lib/functions/rootfs/rootfs-desktop.sh
# no errors tolerated. one last time for the win!
#set -o pipefail # trace ERR through pipes - will be enabled "soon"
#set -o nounset ## set -u : exit the script if you try to use an uninitialised variable - one day will be enabled
set -o errtrace # trace ERR through - enabled
set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled
# This file is/was autogenerated by lib/tools/gen-library.sh; don't modify manually
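The repetitive structure above is generated, not hand-written. A hypothetical sketch of how a generator like `lib/tools/gen-library.sh` might produce it; the real generator's behavior is only inferred from its output, and everything here (the temp layout, the `example.sh` file) is illustrative:

```shell
#!/bin/bash
# Hypothetical reconstruction (NOT the actual gen-library.sh): concatenate
# every function file into one library, re-asserting the strict error
# options before each sourced file, matching the generated output above.
set -e
workdir="$(mktemp -d)"
mkdir -p "${workdir}/lib/functions/rootfs"
printf 'example_fn() { :; }\n' > "${workdir}/lib/functions/rootfs/example.sh"

out="${workdir}/library-functions.sh"
{
	for f in "${workdir}"/lib/functions/rootfs/*.sh; do
		rel="lib/functions/rootfs/$(basename "${f}")"
		echo "# no errors tolerated. invoked before each sourced file to make sure."
		echo "set -o errtrace # trace ERR through - enabled"
		echo "set -o errexit ## set -e : exit the script if any statement returns a non-true return value - enabled"
		echo "### ${rel}"
		echo "# shellcheck source=${rel}"
		echo "source \"\${SRC}\"/${rel}"
	done
	echo "# This file is/was autogenerated by lib/tools/gen-library.sh; don't modify manually"
} > "${out}"
grep -q '### lib/functions/rootfs/example.sh' "${out}"
```

Emitting the literal `source "${SRC}"/...` lines (rather than sourcing at generation time) keeps the generated library lazy: paths resolve against `${SRC}` when the library itself is sourced.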

### lib/single.sh (new file, 21 lines)

#!/bin/bash
#
# Copyright (c) 2013-2021 Igor Pecovnik, igor.pecovnik@gma**.com
#
# This file is licensed under the terms of the GNU General Public
# License version 2. This program is licensed "as is" without any
# warranty of any kind, whether express or implied.
#
# This file is a part of the Armbian build script
# https://github.com/armbian/build/
# Users should not start here, but instead use ./compile.sh at the root.
if [[ $(basename "$0") == single.sh ]]; then
	echo "Please use compile.sh to start the build process"
	exit 255
fi
# Libraries include. ONLY source files that contain ONLY functions here.
# shellcheck source=library-functions.sh
source "${SRC}"/lib/library-functions.sh
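The guard at the top of `lib/single.sh` relies on `$0`: when the file is executed directly, `$0` is its own path; when it is sourced from an entrypoint, `$0` is the caller's path. A standalone demonstration of the same pattern, using a hypothetical `demo-lib.sh` name for illustration:

```shell
#!/bin/bash
# Sketch of the direct-execution guard: a library file refuses to run as a
# standalone script but loads fine when sourced from an entrypoint.
set -e
tmpdir="$(mktemp -d)"
cat > "${tmpdir}/demo-lib.sh" <<'EOF'
#!/bin/bash
if [[ $(basename "$0") == demo-lib.sh ]]; then
	echo "Please use the entrypoint script to start the build process"
	exit 255
fi
echo "loaded as a library"
EOF
chmod +x "${tmpdir}/demo-lib.sh"

# Direct execution is refused with exit code 255...
"${tmpdir}/demo-lib.sh" && direct_rc=0 || direct_rc=$?
echo "direct run exit code: ${direct_rc}" # 255

# ...but sourcing from another script (where $0 is the caller) succeeds.
printf '#!/bin/bash\nsource %s\n' "${tmpdir}/demo-lib.sh" > "${tmpdir}/entry.sh"
bash "${tmpdir}/entry.sh"
```

Note the guard tests only the basename, so it cannot distinguish being sourced from being executed under a different name; for this codebase that is sufficient, since the supported entrypoint is `./compile.sh`.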