Mirror of https://github.com/armbian/build, synced 2025-09-24 19:47:06 +07:00
armbian-next: Python patching delusion, pt1 & pt2 & pt3
- WiP: Python patching delusion, pt 1: finding & parsing patches; apply & git commit with pygit2; Markdown summaries (also for aggregation); git-to-patches tool
- Python: Markdown aggregation and patching summaries; collapsible; SummarizedMarkdownWriter
- Python: Markdown aggregation and patching summaries
- Python: reorg a bit into common/armbian_utils; define the `ASSET_LOG_BASE` in preparation for Markdown delusion
- Python patching: initial apply patches & initial commit patches to git (using pygit2)
- Python patching: add basic `series.conf` support
- Python patching: force use of utf-8; better error handling; use realpath of dirs
- Python patching: `git-to-patches` initial hack. not proud. half-reused some of the patches-to-git
- Python patching: "tag" the git commits with info for extracting later; introduce REWRITE_PATCHES/rewrite_patches_in_place
- Python patching: commented-out, recover-bad-patches hacks
- Python patching: shorten the signature
- Python patching: allow BASE_GIT_TAG as well as BASE_GIT_REVISION
- Python patching: git-archeology for patches missing descriptions; avoid UTF-8 in header/desc (not diff)
- Python patching: use modern-er email.utils.parsedate_to_datetime to parse commit date
- Python patching: unify PatchInPatchFile; better git-committing; re-exporting patches from Git (directly)
- Python patching: switch to GitPython (a minimal sketch of the relevant calls follows this list)
- GitPython is like 100x slower than pygit2, but actually allows setting date & committer
- also allows removing untracked files before starting
- Python aggregation: fix missing `AGGREGATED_APT_SOURCES_DICT`
- Python patching: add `unidecode` dependency to pip3 install
- Python patching: don't try archeology if SRC is not a Git Repo (eg, in Docker)
- Python patching: don't try archeology if not applying patches to git
- WiP: Python patching delusion, pt2: actually use for u-boot & kernel patching
- Python patching: much better problem handling/logging; lenient with recreations (kernel)
- Python patching: don't force SHOW_LOG for u-boot patching
- Python patching: don't bomb for no reason when there are no patches to apply
- Python patching: fully (?) switch kernel patching to Python
- Python patching: more logging fixups
- Python patching: capture `kernel_git_revision` from `fetch_from_repo()`'s `checked_out_revision`
- Python patching: fully switch u-boot patching to Python
- Python aggregation/patching: colored logging; patching: always reset to git revision
- Python aggregation/patching: better logging; introduce u-boot Python patching
- Python patching pt3: recovers and better Markdown
- Python patching: detect, and rescue, `wrong_strip_level` problem; don't try to export patches that didn't apply, bitch instead
- Python patching: Markdown patching summary table, complete with emoji
- Python patching: include the problem breakdown in Markdown summary
- Python patching: sanity check against half-bare, half-mbox patches
- Python patching: try to recover from 1) bad utf-8 encoded patches; 2) bad unidiff patches; add a few sanity checks
- Python patching: try, and fail, to apply badly utf-8 encoded patches directly as bytes [reverted]
- Python patching: try to recover from patch *parse* failures; show summary; better logging
- set `GIT_ARCHEOLOGY=yes` to do archeology; it defaults to off
- armbian-next: Python `pip` dependencies handling, similar to `hostdeps`
- same scheme for Dockerfile caching
- @TODO: still using global/shared environment; should move to a dir under `cache` or some kinda venv
- WiP: add `python3-pip` to hostdeps; remove `python-setuptools`
- remove `python-setuptools` (Python2, no longer exists in Sid) from hostdeps
- add `python3-pip` to hostdeps; part of virtualenv saga
- WiP: split `kernel.sh` a bit, into `kernel-patching.sh`, `kernel-config.sh` and `kernel-make.sh`
- `advanced_patch()`: rename vars for clarity; no real changes
- Python patching: introduce FAST_ARCHEOLOGY; still trying for Markdown links
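
The GitPython switch mentioned above boils down to a handful of calls. A minimal sketch, not the actual `lib/tools/patching.py` (whose body is not part of this diff): reset and clean to the base revision, apply one patch, and commit it with the patch's own author, committer and date, parsed with `email.utils.parsedate_to_datetime` as in the changelog. All names here are illustrative:

```python
# Minimal sketch, assuming an mbox-style patch whose headers were already
# parsed; illustrative only, not the real patching.py internals.
from email.utils import parsedate_to_datetime

import git  # GitPython, pinned as GitPython==3.1.29 in python-tools.sh


def apply_and_commit(work_dir: str, base_revision: str, patch_file: str,
                     subject: str, author_name: str, author_email: str,
                     date_header: str) -> str:
    repo = git.Repo(work_dir)
    repo.head.reset(base_revision, index=True, working_tree=True)  # reset to base revision
    repo.git.clean("-f", "-d")  # remove untracked files before starting
    repo.git.apply("--index", patch_file)  # apply to working tree and stage in the index
    when = parsedate_to_datetime(date_header).isoformat()  # RFC 2822 "Date:" header
    actor = git.Actor(author_name, author_email)  # explicit author/committer: the part pygit2 made awkward
    commit = repo.index.commit(subject, author=actor, committer=actor,
                               author_date=when, commit_date=when)
    return commit.hexsha  # usable later when re-exporting patches from git
```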
@@ -19,5 +19,6 @@ function cli_requirements_run() {
	declare -a -g host_dependencies=()
	early_prepare_host_dependencies
	LOG_SECTION="install_host_dependencies" do_with_logging install_host_dependencies "for requirements command"
+	LOG_SECTION="prepare_pip_packages_for_python_tools" do_with_logging prepare_pip_packages_for_python_tools
	display_alert "Done with" "@host dependencies" "cachehit"
}
lib/functions/compilation/kernel-config.sh (new file, 107 lines)
@@ -0,0 +1,107 @@
function kernel_config_maybe_interactive() {
	# Check if we're gonna do some interactive configuration; if so, don't run kernel_config under the logging manager.
	if [[ $KERNEL_CONFIGURE != yes ]]; then
		LOG_SECTION="kernel_config" do_with_logging do_with_hooks kernel_config
	else
		LOG_SECTION="kernel_config_interactive" do_with_hooks kernel_config
	fi
}

function kernel_config() {
	# re-read kernel version after patching
	version=$(grab_version "$kernel_work_dir")

	display_alert "Compiling $BRANCH kernel" "$version" "info"

	# compare with the architecture of the current Debian node
	# if it matches, we use the system compiler
	if dpkg-architecture -e "${ARCH}"; then
		display_alert "Native compilation" "target ${ARCH} on host $(dpkg --print-architecture)"
	else
		display_alert "Cross compilation" "target ${ARCH} on host $(dpkg --print-architecture)"
		toolchain=$(find_toolchain "$KERNEL_COMPILER" "$KERNEL_USE_GCC")
		[[ -z $toolchain ]] && exit_with_error "Could not find required toolchain" "${KERNEL_COMPILER}gcc $KERNEL_USE_GCC"
	fi

	kernel_compiler_version="$(eval env PATH="${toolchain}:${PATH}" "${KERNEL_COMPILER}gcc" -dumpfullversion -dumpversion)"
	display_alert "Compiler version" "${KERNEL_COMPILER}gcc ${kernel_compiler_version}" "info"

	# copy kernel config
	local COPY_CONFIG_BACK_TO=""

	if [[ $KERNEL_KEEP_CONFIG == yes && -f "${DEST}"/config/$LINUXCONFIG.config ]]; then
		display_alert "Using previous kernel config" "${DEST}/config/$LINUXCONFIG.config" "info"
		run_host_command_logged cp -pv "${DEST}/config/${LINUXCONFIG}.config" .config
	else
		if [[ -f $USERPATCHES_PATH/$LINUXCONFIG.config ]]; then
			display_alert "Using kernel config provided by user" "userpatches/$LINUXCONFIG.config" "info"
			run_host_command_logged cp -pv "${USERPATCHES_PATH}/${LINUXCONFIG}.config" .config
		elif [[ -f "${USERPATCHES_PATH}/config/kernel/${LINUXCONFIG}.config" ]]; then
			display_alert "Using kernel config provided by user in config/kernel folder" "config/kernel/${LINUXCONFIG}.config" "info"
			run_host_command_logged cp -pv "${USERPATCHES_PATH}/config/kernel/${LINUXCONFIG}.config" .config
		else
			display_alert "Using kernel config file" "config/kernel/$LINUXCONFIG.config" "info"
			run_host_command_logged cp -pv "${SRC}/config/kernel/${LINUXCONFIG}.config" .config
			COPY_CONFIG_BACK_TO="${SRC}/config/kernel/${LINUXCONFIG}.config"
		fi
	fi

	# Store the .config modification date at this time, for restoring later. Otherwise make sees it as changed and rebuilds.
	local kernel_config_mtime
	kernel_config_mtime=$(get_file_modification_time ".config")

	call_extension_method "custom_kernel_config" <<- 'CUSTOM_KERNEL_CONFIG'
		*Kernel .config is in place, still clean from git version*
		Called after ${LINUXCONFIG}.config is put in place (.config).
		Before olddefconfig or any other Kconfig make target is called.
		A good place to customize the .config directly.
	CUSTOM_KERNEL_CONFIG

	# hack for OdroidXU4. Copy firmware files.
	if [[ $BOARD == odroidxu4 ]]; then
		mkdir -p "${kernel_work_dir}/firmware/edid"
		cp -p "${SRC}"/packages/blobs/odroidxu4/*.bin "${kernel_work_dir}/firmware/edid"
	fi

	display_alert "Kernel configuration" "${LINUXCONFIG}" "info"

	if [[ $KERNEL_CONFIGURE != yes ]]; then
		run_kernel_make olddefconfig # @TODO: what is this? does it fuck up dates?
	else
		display_alert "Starting (non-interactive) kernel olddefconfig" "${LINUXCONFIG}" "debug"
		run_kernel_make olddefconfig

		# No logging for this; it's a UI piece.
		display_alert "Starting (interactive) kernel ${KERNEL_MENUCONFIG:-menuconfig}" "${LINUXCONFIG}" "debug"
		run_kernel_make_dialog "${KERNEL_MENUCONFIG:-menuconfig}"

		# Capture the new date. Otherwise changes are not detected by make.
		kernel_config_mtime=$(get_file_modification_time ".config")

		# store kernel config in easily reachable place
		mkdir -p "${DEST}"/config
		display_alert "Exporting new kernel config" "$DEST/config/$LINUXCONFIG.config" "info"
		run_host_command_logged cp -pv .config "${DEST}/config/${LINUXCONFIG}.config"

		# store back into original LINUXCONFIG too, if it came from there, so it's pending commits when done.
		[[ "${COPY_CONFIG_BACK_TO}" != "" ]] && run_host_command_logged cp -pv .config "${COPY_CONFIG_BACK_TO}"

		# export defconfig too if requested
		if [[ $KERNEL_EXPORT_DEFCONFIG == yes ]]; then
			run_kernel_make savedefconfig
			[[ -f defconfig ]] && run_host_command_logged cp -pv defconfig "${DEST}/config/${LINUXCONFIG}.defconfig"
		fi
	fi

	call_extension_method "custom_kernel_config_post_defconfig" <<- 'CUSTOM_KERNEL_CONFIG_POST_DEFCONFIG'
		*Kernel .config is in place, already processed by Armbian*
		Called after ${LINUXCONFIG}.config is put in place (.config).
		After olddefconfig and any other Kconfig make target has been called.
		A good place to customize the .config last-minute.
	CUSTOM_KERNEL_CONFIG_POST_DEFCONFIG

	# Restore the date of .config. The delta above is a pure function, theoretically.
	set_files_modification_time "${kernel_config_mtime}" ".config"
}
lib/functions/compilation/kernel-make.sh (new file, 67 lines)
@@ -0,0 +1,67 @@
#!/usr/bin/env bash

function run_kernel_make_internal() {
	set -e
	declare -a common_make_params_quoted common_make_envs full_command

	# Prepare distcc, if enabled.
	declare -a -g DISTCC_EXTRA_ENVS=()
	declare -a -g DISTCC_CROSS_COMPILE_PREFIX=()
	declare -a -g DISTCC_MAKE_J_PARALLEL=()
	prepare_distcc_compilation_config

	common_make_envs=(
		"CCACHE_BASEDIR=\"$(pwd)\"" # Base directory for ccache, for cache reuse # @TODO: experiment with this and the source path to maximize hit rate
		"PATH=\"${toolchain}:${PATH}\"" # Insert the toolchain first into the PATH.
		"DPKG_COLORS=always" # Use colors for dpkg @TODO no dpkg is done anymore, remove?
		"XZ_OPT='--threads=0'" # Use parallel XZ compression
		"TERM='${TERM}'" # Pass the terminal type, so that 'make menuconfig' can work.
	)

	# If CCACHE_DIR is set, pass it to the kernel build; pass the ccache dir explicitly, since we'll run under "env -i".
	if [[ -n "${CCACHE_DIR}" ]]; then
		common_make_envs+=("CCACHE_DIR='${CCACHE_DIR}'")
	fi

	# Add the distcc envs, if any.
	common_make_envs+=("${DISTCC_EXTRA_ENVS[@]}")

	common_make_params_quoted=(
		# @TODO: introduce O=path/to/binaries, so sources and bins are not in the same dir; this has high impact in headers packaging though.

		"${DISTCC_MAKE_J_PARALLEL[@]}" # Parallel compile, "-j X" for X cpus; determined by distcc, or is just "$CTHREADS" if distcc is not enabled.

		"ARCH=${ARCHITECTURE}" # Key param. Everything depends on this.
		"LOCALVERSION=-${LINUXFAMILY}" # Change the internal kernel version to include the family. Changing this causes recompiles # @TODO change to "localversion" file

		"CROSS_COMPILE=${CCACHE} ${DISTCC_CROSS_COMPILE_PREFIX[@]} ${KERNEL_COMPILER}" # added as prefix to every compiler invocation by make
		"KCFLAGS=-fdiagnostics-color=always -Wno-error=misleading-indentation" # Force GCC colored messages, downgrade misleading indentation to warning

		"SOURCE_DATE_EPOCH=${kernel_base_revision_ts}" # https://reproducible-builds.org/docs/source-date-epoch/ and https://www.kernel.org/doc/html/latest/kbuild/reproducible-builds.html
		"KBUILD_BUILD_TIMESTAMP=${kernel_base_revision_date}" # https://www.kernel.org/doc/html/latest/kbuild/kbuild.html#kbuild-build-timestamp
		"KBUILD_BUILD_USER=armbian" # https://www.kernel.org/doc/html/latest/kbuild/kbuild.html#kbuild-build-user-kbuild-build-host
		"KBUILD_BUILD_HOST=next" # https://www.kernel.org/doc/html/latest/kbuild/kbuild.html#kbuild-build-user-kbuild-build-host

		"KGZIP=pigz" "KBZIP2=pbzip2" # Parallel compression, use explicit parallel compressors https://lore.kernel.org/lkml/20200901151002.988547791@linuxfoundation.org/
	)

	# Last statement, so it passes the result to the calling function. "env -i" is used for an empty environment.
	full_command=("${KERNEL_MAKE_RUNNER:-run_host_command_logged}" "env" "-i" "${common_make_envs[@]}"
		make "${common_make_params_quoted[@]@Q}" "$@" "${make_filter}")
	"${full_command[@]}" # and exit with its code, since it's the last statement
}

function run_kernel_make() {
	KERNEL_MAKE_RUNNER="run_host_command_logged" KERNEL_MAKE_UNBUFFER="unbuffer" run_kernel_make_internal "$@"
}

function run_kernel_make_dialog() {
	KERNEL_MAKE_RUNNER="run_host_command_dialog" run_kernel_make_internal "$@"
}

function run_kernel_make_long_running() {
	local seconds_start=${SECONDS} # Bash has a builtin SECONDS that is seconds since start of script
	KERNEL_MAKE_RUNNER="run_host_command_logged_long_running" KERNEL_MAKE_UNBUFFER="unbuffer" run_kernel_make_internal "$@"
	display_alert "Kernel Make '$*' took" "$((SECONDS - seconds_start)) seconds" "debug"
}
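
`SOURCE_DATE_EPOCH` and `KBUILD_BUILD_TIMESTAMP` above are fed from `kernel_base_revision_ts` / `kernel_base_revision_date`, which `compile_kernel` captures from the checked-out revision. A sketch of deriving such values, mirroring the `git log -1 --pretty=%ct HEAD` the u-boot code later in this commit uses; the function name is illustrative, not Armbian's actual helper:

```python
# Sketch: derive reproducible-build inputs from the checked-out git revision.
import subprocess
from datetime import datetime, timezone


def revision_timestamps(work_dir: str) -> tuple[int, str]:
    result = subprocess.run(
        ["git", "log", "-1", "--pretty=%ct", "HEAD"],
        cwd=work_dir, capture_output=True, text=True, check=True)
    ts = int(result.stdout.strip())  # unix timestamp of the commit date
    # SOURCE_DATE_EPOCH takes the raw timestamp; KBUILD_BUILD_TIMESTAMP a date string
    date = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%a %b %d %H:%M:%S UTC %Y")
    return ts, date
```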
lib/functions/compilation/kernel-patching.sh (new file, 146 lines)
@@ -0,0 +1,146 @@
#!/usr/bin/env bash

function kernel_main_patching_python() {
	prepare_pip_packages_for_python_tools

	temp_file_for_output="$(mktemp)" # Get a temporary file for the output.
	# array with all parameters; will be auto-quoted by bash's @Q modifier below
	declare -a params_quoted=(
		"LOG_DEBUG=${SHOW_DEBUG}" # Logging level for python.
		"SRC=${SRC}" # Armbian root
		"OUTPUT=${temp_file_for_output}" # Output file for the python script.
		"ASSET_LOG_BASE=$(print_current_asset_log_base_file)" # base file name for the asset log; used to write .md summaries.
		"PATCH_TYPE=kernel" # one of: kernel, u-boot, atf
		"PATCH_DIRS_TO_APPLY=${KERNELPATCHDIR}" # A space-separated list of directories to apply...
		"BOARD=" # BOARD is needed for the patchset selection logic; mostly for u-boot. empty for kernel.
		"TARGET=" # TARGET is needed for u-boot's SPI/SATA etc selection logic. empty for kernel.
		"USERPATCHES_PATH=${USERPATCHES_PATH}" # Needed to find the userpatches.
		# What to do?
		"APPLY_PATCHES=yes" # Apply the patches to the filesystem. Does not imply git committing. If no, still exports the hash.
		"PATCHES_TO_GIT=${PATCHES_TO_GIT:-no}" # Commit to git after applying the patches.
		"REWRITE_PATCHES=${REWRITE_PATCHES:-no}" # Rewrite the original patch files after git committing.
		# Git dir, revision, and target branch
		"GIT_WORK_DIR=${kernel_work_dir}" # "Where to apply patches?"
		"BASE_GIT_REVISION=${kernel_git_revision}" # The revision we're building/patching. Python will reset and clean to this.
		"BRANCH_FOR_PATCHES=kernel-${LINUXFAMILY}-${KERNEL_MAJOR_MINOR}" # When applying patches-to-git, use this branch.
		# Lenience: allow problematic patches to be applied.
		"ALLOW_RECREATE_EXISTING_FILES=yes" # Allow patches to recreate files that already exist.
	)
	display_alert "Calling Python patching script" "for kernel" "info"
	run_host_command_logged env -i "${params_quoted[@]@Q}" python3 "${SRC}/lib/tools/patching.py"
	run_host_command_logged cat "${temp_file_for_output}"
	# shellcheck disable=SC1090
	source "${temp_file_for_output}" # SOURCE IT!
	run_host_command_logged rm -f "${temp_file_for_output}"
	return 0
}

function kernel_main_patching() {
	LOG_SECTION="kernel_main_patching_python" do_with_logging do_with_hooks kernel_main_patching_python

	# The old way...
	#LOG_SECTION="kernel_prepare_patching" do_with_logging do_with_hooks kernel_prepare_patching
	#LOG_SECTION="kernel_patching" do_with_logging do_with_hooks kernel_patching

	# HACK: STOP HERE, for development.
	if [[ "${PATCH_ONLY}" == "yes" ]]; then
		display_alert "PATCH_ONLY is set, stopping here." "PATCH_ONLY=yes" "info"
		exit 0
	fi

	# Interactive!!!
	[[ $CREATE_PATCHES == yes ]] && userpatch_create "kernel" # create patch for manual source changes

	return 0 # there is a short-circuit above
}

function kernel_prepare_patching() {
	if [[ $USE_OVERLAYFS == yes ]]; then # @TODO: when is this set to yes?
		display_alert "Using overlayfs_wrapper" "kernel_${LINUXFAMILY}_${BRANCH}" "debug"
		kernel_work_dir=$(overlayfs_wrapper "wrap" "$SRC/cache/sources/$LINUXSOURCEDIR" "kernel_${LINUXFAMILY}_${BRANCH}")
	fi
	cd "${kernel_work_dir}" || exit

	# @TODO: why would we delete localversion?
	# @TODO: it should be the opposite, writing localversion to disk, _instead_ of passing it via make.
	# @TODO: if it turns out to be the case, do a commit with it... (possibly later, after patching?)
	rm -f localversion

	# read kernel version
	version=$(grab_version "$kernel_work_dir")
	pre_patch_version="${version}"
	display_alert "Pre-patch kernel version" "${pre_patch_version}" "debug"

	# read kernel git hash
	hash=$(git --git-dir="$kernel_work_dir"/.git rev-parse HEAD)
}

function kernel_patching() {
	## Start the kernel patching process.
	## There are a few objectives here:
	## - (always) produce a fasthash: represents "what would be done" (eg: md5 of a patch, crc32 of description).
	## - (optionally) execute modifications against the living tree (eg: apply a patch, copy a file, etc). only if `DO_MODIFY=yes`
	## - (always) call mark_change_commit with the description of what was done and the fasthash.
	# shellcheck disable=SC2154 # declared in outer scope: kernel_base_revision_mtime
	declare -i patch_minimum_target_mtime="${kernel_base_revision_mtime}"
	declare -i series_conf_mtime="${patch_minimum_target_mtime}"
	declare -i patch_dir_mtime="${patch_minimum_target_mtime}"
	display_alert "patch_minimum_target_mtime:" "${patch_minimum_target_mtime}" "debug"

	local patch_dir="${SRC}/patch/kernel/${KERNELPATCHDIR}"
	local series_conf="${patch_dir}/series.conf"

	# The minimum date has to account for removed patches: if a patch was removed from disk, the only way to notice
	# is by looking at the parent directory's mtime, which will have been bumped.
	# So we look at the possible directories involved here (the series.conf file, and the ${KERNELPATCHDIR} dir itself)
	# and bump up the minimum date if that is the case.
	if [[ -f "${series_conf}" ]]; then
		series_conf_mtime=$(get_file_modification_time "${series_conf}")
		display_alert "series.conf mtime:" "${series_conf_mtime}" "debug"
		patch_minimum_target_mtime=$((series_conf_mtime > patch_minimum_target_mtime ? series_conf_mtime : patch_minimum_target_mtime))
		display_alert "patch_minimum_target_mtime after series.conf mtime:" "${patch_minimum_target_mtime}" "debug"
	fi

	if [[ -d "${patch_dir}" ]]; then
		patch_dir_mtime=$(get_dir_modification_time "${patch_dir}")
		display_alert "patch_dir mtime:" "${patch_dir_mtime}" "debug"
		patch_minimum_target_mtime=$((patch_dir_mtime > patch_minimum_target_mtime ? patch_dir_mtime : patch_minimum_target_mtime))
		display_alert "patch_minimum_target_mtime after patch_dir mtime:" "${patch_minimum_target_mtime}" "debug"
	fi

	# this prepares data, and possibly creates a git branch to receive the patches.
	initialize_fasthash "kernel" "${hash}" "${pre_patch_version}" "${kernel_work_dir}"
	fasthash_debug "init"

	# Apply a series of patches if a series file exists
	if [[ -f "${series_conf}" ]]; then
		display_alert "series.conf exists. Apply"
		fasthash_branch "patches-${KERNELPATCHDIR}-series.conf"
		apply_patch_series "${kernel_work_dir}" "${series_conf}" # applies a series of patches, read from a file. calls process_patch_file
	fi

	# applies a humongous amount of patches coming from github repos.
	# it's mostly conditional, and very complex.
	# @TODO: re-enable after finishing converting it with fasthash magic
	# apply_kernel_patches_for_drivers "${kernel_work_dir}" "${version}" # calls process_patch_file and other stuff. there is A LOT of it.

	# @TODO: this is where "patch generation" happens?

	# Extension hook: patch_kernel_for_driver
	call_extension_method "patch_kernel_for_driver" <<- 'PATCH_KERNEL_FOR_DRIVER'
		*allow to add drivers/patch kernel for drivers before applying the family patches*
		Patch *series* (not normal family patches) are already applied.
		Useful for migrating EXTRAWIFI-related stuff to individual extensions.
		Receives `${version}` and `${kernel_work_dir}` as environment variables.
	PATCH_KERNEL_FOR_DRIVER

	# applies a series of patches, in directory order, from multiple directories (default/"user" patches)
	# @TODO: I believe using the $BOARD here is the most confusing thing in the whole of Armbian. It should be disabled.
	# @TODO: Armbian-built kernels don't vary per-board, but only per "$ARCH-$LINUXFAMILY-$BRANCH"
	# @TODO: allowing for board-specific kernel patches creates insanity. uboot is enough.
	fasthash_branch "patches-${KERNELPATCHDIR}-$BRANCH"
	advanced_patch "kernel" "$KERNELPATCHDIR" "$BOARD" "" "$BRANCH" "$LINUXFAMILY-$BRANCH" # calls process_patch_file; "target" is empty there

	fasthash_debug "finish"
	finish_fasthash "kernel" # this reports the final hash and creates a git branch to the build ID. All modifications committed.
}
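
Two ideas from `kernel_patching()` sketched in Python, since that is where the new implementation lives: reading `series.conf` (the "basic support" from the changelog; assuming one patch file per line, `#` comments ignored, and a leading `-` marking a disabled patch, which is an assumption about the format) and the fasthash notion of hashing what *would* be done. Helper names are illustrative:

```python
# Hedged sketch; the real logic lives in bash and lib/tools/patching.py.
import hashlib


def read_series_conf(path: str) -> list[str]:
    patches = []
    with open(path, encoding="utf-8") as f:  # the commit forces utf-8 everywhere
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue  # blank line or comment
            if line.startswith("-"):
                continue  # assumption: '-' marks a disabled patch
            patches.append(line)
    return patches


def fasthash(patch_paths: list[str]) -> str:
    # Hash "what would be done": the patch contents themselves, so an
    # unchanged patch set yields the same hash without applying anything.
    h = hashlib.sha256()
    for p in patch_paths:
        with open(p, "rb") as f:
            h.update(f.read())
    return h.hexdigest()
```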
lib/functions/compilation/kernel.sh
@@ -1,70 +1,4 @@
#!/usr/bin/env bash

-function run_kernel_make_internal() {
-	set -e
-	declare -a common_make_params_quoted common_make_envs full_command
-
-	# Prepare distcc, if enabled.
-	declare -a -g DISTCC_EXTRA_ENVS=()
-	declare -a -g DISTCC_CROSS_COMPILE_PREFIX=()
-	declare -a -g DISTCC_MAKE_J_PARALLEL=()
-	prepare_distcc_compilation_config
-
-	common_make_envs=(
-		"CCACHE_BASEDIR=\"$(pwd)\"" # Base directory for ccache, for cache reuse # @TODO: experiment with this and the source path to maximize hit rate
-		"PATH=\"${toolchain}:${PATH}\"" # Insert the toolchain first into the PATH.
-		"DPKG_COLORS=always" # Use colors for dpkg @TODO no dpkg is done anymore, remove?
-		"XZ_OPT='--threads=0'" # Use parallel XZ compression
-		"TERM='${TERM}'" # Pass the terminal type, so that 'make menuconfig' can work.
-	)
-
-	# If CCACHE_DIR is set, pass it to the kernel build; pass the ccache dir explicitly, since we'll run under "env -i".
-	if [[ -n "${CCACHE_DIR}" ]]; then
-		common_make_envs+=("CCACHE_DIR='${CCACHE_DIR}'")
-	fi
-
-	# Add the distcc envs, if any.
-	common_make_envs+=("${DISTCC_EXTRA_ENVS[@]}")
-
-	common_make_params_quoted=(
-		# @TODO: introduce O=path/to/binaries, so sources and bins are not in the same dir; this has high impact in headers packaging though.
-
-		"${DISTCC_MAKE_J_PARALLEL[@]}" # Parallel compile, "-j X" for X cpus; determined by distcc, or is just "$CTHREADS" if distcc is not enabled.
-
-		"ARCH=${ARCHITECTURE}" # Key param. Everything depends on this.
-		"LOCALVERSION=-${LINUXFAMILY}" # Change the internal kernel version to include the family. Changing this causes recompiles # @TODO change to "localversion" file
-
-		"CROSS_COMPILE=${CCACHE} ${DISTCC_CROSS_COMPILE_PREFIX[@]} ${KERNEL_COMPILER}" # added as prefix to every compiler invocation by make
-		"KCFLAGS=-fdiagnostics-color=always -Wno-error=misleading-indentation" # Force GCC colored messages, downgrade misleading indentation to warning
-
-		"SOURCE_DATE_EPOCH=${kernel_base_revision_ts}" # https://reproducible-builds.org/docs/source-date-epoch/ and https://www.kernel.org/doc/html/latest/kbuild/reproducible-builds.html
-		"KBUILD_BUILD_TIMESTAMP=${kernel_base_revision_date}" # https://www.kernel.org/doc/html/latest/kbuild/kbuild.html#kbuild-build-timestamp
-		"KBUILD_BUILD_USER=armbian" # https://www.kernel.org/doc/html/latest/kbuild/kbuild.html#kbuild-build-user-kbuild-build-host
-		"KBUILD_BUILD_HOST=next" # https://www.kernel.org/doc/html/latest/kbuild/kbuild.html#kbuild-build-user-kbuild-build-host
-
-		"KGZIP=pigz" "KBZIP2=pbzip2" # Parallel compression, use explicit parallel compressors https://lore.kernel.org/lkml/20200901151002.988547791@linuxfoundation.org/
-	)
-
-	# Last statement, so it passes the result to the calling function. "env -i" is used for an empty environment.
-	full_command=("${KERNEL_MAKE_RUNNER:-run_host_command_logged}" "env" "-i" "${common_make_envs[@]}"
-		make "${common_make_params_quoted[@]@Q}" "$@" "${make_filter}")
-	"${full_command[@]}" # and exit with its code, since it's the last statement
-}
-
-function run_kernel_make() {
-	KERNEL_MAKE_RUNNER="run_host_command_logged" KERNEL_MAKE_UNBUFFER="unbuffer" run_kernel_make_internal "$@"
-}
-
-function run_kernel_make_dialog() {
-	KERNEL_MAKE_RUNNER="run_host_command_dialog" run_kernel_make_internal "$@"
-}
-
-function run_kernel_make_long_running() {
-	local seconds_start=${SECONDS} # Bash has a builtin SECONDS that is seconds since start of script
-	KERNEL_MAKE_RUNNER="run_host_command_logged_long_running" KERNEL_MAKE_UNBUFFER="unbuffer" run_kernel_make_internal "$@"
-	display_alert "Kernel Make '$*' took" "$((SECONDS - seconds_start)) seconds" "debug"
-}

function compile_kernel() {
	local kernel_work_dir="${SRC}/cache/sources/${LINUXSOURCEDIR}"
	display_alert "Kernel build starting" "${LINUXSOURCEDIR}" "info"
@@ -76,54 +10,45 @@ function compile_kernel() {
		`${kernel_work_dir}` is set, but not yet populated with kernel sources.
	FETCH_SOURCES_FOR_KERNEL_DRIVER

-	# Prepare the git bare repo for the kernel.
+	# Prepare the git bare repo for the kernel; shared between all kernel builds
	declare kernel_git_bare_tree
	LOG_SECTION="kernel_prepare_bare_repo_from_bundle" do_with_logging_unless_user_terminal do_with_hooks \
		kernel_prepare_bare_repo_from_bundle # this sets kernel_git_bare_tree

-	declare checked_out_revision_mtime="" checked_out_revision_ts="" # set by fetch_from_repo
+	# prepare the working copy; this is the actual kernel source tree for this build
+	declare checked_out_revision_mtime="" checked_out_revision_ts="" checked_out_revision="undetermined" # set by fetch_from_repo
	LOG_SECTION="kernel_prepare_git" do_with_logging_unless_user_terminal do_with_hooks kernel_prepare_git

	# Capture date variables set by fetch_from_repo; it's the date of the last kernel revision
+	declare kernel_git_revision="${checked_out_revision}"
+	display_alert "Using kernel revision SHA1" "${kernel_git_revision}"
	declare kernel_base_revision_date
	declare kernel_base_revision_mtime="${checked_out_revision_mtime}"
	declare kernel_base_revision_ts="${checked_out_revision_ts}"
	kernel_base_revision_date="$(LC_ALL=C date -d "@${kernel_base_revision_ts}")"

	# Possibly 'make clean'.
	LOG_SECTION="kernel_maybe_clean" do_with_logging do_with_hooks kernel_maybe_clean

	# Patching.
-	local version hash pre_patch_version
-	LOG_SECTION="kernel_prepare_patching" do_with_logging do_with_hooks kernel_prepare_patching
-	LOG_SECTION="kernel_patching" do_with_logging do_with_hooks kernel_patching
-	[[ $CREATE_PATCHES == yes ]] && userpatch_create "kernel" # create patch for manual source changes
+	local version
+	kernel_main_patching

	local toolchain
+	kernel_config_maybe_interactive

-	# Check if we're gonna do some interactive configuration; if so, don't run kernel_config under the logging manager.
-	if [[ $KERNEL_CONFIGURE != yes ]]; then
-		LOG_SECTION="kernel_config" do_with_logging do_with_hooks kernel_config
-	else
-		LOG_SECTION="kernel_config_interactive" do_with_hooks kernel_config
-	fi

	# package the kernel-source .deb
	LOG_SECTION="kernel_package_source" do_with_logging do_with_hooks kernel_package_source

	# @TODO: might be interesting to package kernel-headers at this stage.
	# @TODO: would allow us to have a "HEADERS_ONLY=yes" that can prepare arm64 headers on arm64 without building the whole kernel
	# @TODO: also it makes sense, logically, to package headers after configuration, since that's all that's needed; it's the same
	# @TODO: stage at which `dkms` would run (a configured, tool-built, kernel tree).

	# @TODO: might also be interesting to do the same for DTBs.
	# @TODO: those get packaged twice (once in linux-dtb and once in linux-image)
	# @TODO: but for the u-boot bootloader, only the linux-dtb is what matters.
	# @TODO: some users/maintainers do a lot of their work on "DTS/DTB only changes", which do require the kernel tree
	# @TODO: but the only testable artifacts are the .dtb themselves. Allowing for a `DTB_ONLY=yes` might be useful.

	# build via make and package .debs; they're separate sub-steps
	LOG_SECTION="kernel_build_and_package" do_with_logging do_with_hooks kernel_build_and_package

	display_alert "Done with" "kernel compile" "debug"
	cd "${kernel_work_dir}/.." || exit

	rm -f linux-firmware-image-*.deb # remove firmware image packages here - easier than patching ~40 packaging scripts at once
	run_host_command_logged rsync --remove-source-files -r ./*.deb "${DEB_STORAGE}/"

	return 0
}
@@ -131,7 +56,7 @@ function kernel_maybe_clean() {
	if [[ $CLEAN_LEVEL == *make-kernel* ]]; then
		display_alert "Cleaning Kernel tree - CLEAN_LEVEL contains 'make-kernel'" "$LINUXSOURCEDIR" "info"
		(
-			cd "${kernel_work_dir}"
+			cd "${kernel_work_dir}" || exit_with_error "Can't cd to kernel_work_dir: ${kernel_work_dir}"
			run_host_command_logged make ARCH="${ARCHITECTURE}" clean
		)
		fasthash_debug "post make clean"
@@ -140,192 +65,6 @@ function kernel_maybe_clean() {
	fi
}

-function kernel_prepare_patching() {
-	if [[ $USE_OVERLAYFS == yes ]]; then # @TODO: when is this set to yes?
-		display_alert "Using overlayfs_wrapper" "kernel_${LINUXFAMILY}_${BRANCH}" "debug"
-		kernel_work_dir=$(overlayfs_wrapper "wrap" "$SRC/cache/sources/$LINUXSOURCEDIR" "kernel_${LINUXFAMILY}_${BRANCH}")
-	fi
-	cd "${kernel_work_dir}" || exit
-
-	# @TODO: why would we delete localversion?
-	# @TODO: it should be the opposite, writing localversion to disk, _instead_ of passing it via make.
-	# @TODO: if it turns out to be the case, do a commit with it... (possibly later, after patching?)
-	rm -f localversion
-
-	# read kernel version
-	version=$(grab_version "$kernel_work_dir")
-	pre_patch_version="${version}"
-	display_alert "Pre-patch kernel version" "${pre_patch_version}" "debug"
-
-	# read kernel git hash
-	hash=$(git --git-dir="$kernel_work_dir"/.git rev-parse HEAD)
-}
-
-function kernel_patching() {
-	## Start the kernel patching process.
-	## There are a few objectives here:
-	## - (always) produce a fasthash: represents "what would be done" (eg: md5 of a patch, crc32 of description).
-	## - (optionally) execute modifications against the living tree (eg: apply a patch, copy a file, etc). only if `DO_MODIFY=yes`
-	## - (always) call mark_change_commit with the description of what was done and the fasthash.
-	declare -i patch_minimum_target_mtime="${kernel_base_revision_mtime}"
-	declare -i series_conf_mtime="${patch_minimum_target_mtime}"
-	declare -i patch_dir_mtime="${patch_minimum_target_mtime}"
-	display_alert "patch_minimum_target_mtime:" "${patch_minimum_target_mtime}" "debug"
-
-	local patch_dir="${SRC}/patch/kernel/${KERNELPATCHDIR}"
-	local series_conf="${patch_dir}/series.conf"
-
-	# The minimum date has to account for removed patches: if a patch was removed from disk, the only way to notice
-	# is by looking at the parent directory's mtime, which will have been bumped.
-	# So we look at the possible directories involved here (the series.conf file, and the ${KERNELPATCHDIR} dir itself)
-	# and bump up the minimum date if that is the case.
-	if [[ -f "${series_conf}" ]]; then
-		series_conf_mtime=$(get_file_modification_time "${series_conf}")
-		display_alert "series.conf mtime:" "${series_conf_mtime}" "debug"
-		patch_minimum_target_mtime=$((series_conf_mtime > patch_minimum_target_mtime ? series_conf_mtime : patch_minimum_target_mtime))
-		display_alert "patch_minimum_target_mtime after series.conf mtime:" "${patch_minimum_target_mtime}" "debug"
-	fi
-
-	if [[ -d "${patch_dir}" ]]; then
-		patch_dir_mtime=$(get_dir_modification_time "${patch_dir}")
-		display_alert "patch_dir mtime:" "${patch_dir_mtime}" "debug"
-		patch_minimum_target_mtime=$((patch_dir_mtime > patch_minimum_target_mtime ? patch_dir_mtime : patch_minimum_target_mtime))
-		display_alert "patch_minimum_target_mtime after patch_dir mtime:" "${patch_minimum_target_mtime}" "debug"
-	fi
-
-	initialize_fasthash "kernel" "${hash}" "${pre_patch_version}" "${kernel_work_dir}"
-	fasthash_debug "init"
-
-	# Apply a series of patches if a series file exists
-	if [[ -f "${series_conf}" ]]; then
-		display_alert "series.conf exists. Apply"
-		fasthash_branch "patches-${KERNELPATCHDIR}-series.conf"
-		apply_patch_series "${kernel_work_dir}" "${series_conf}" # applies a series of patches, read from a file. calls process_patch_file
-	fi
-
-	# applies a humongous amount of patches coming from github repos.
-	# it's mostly conditional, and very complex.
-	# @TODO: re-enable after finishing converting it with fasthash magic
-	# apply_kernel_patches_for_drivers "${kernel_work_dir}" "${version}" # calls process_patch_file and other stuff. there is A LOT of it.
-
-	# Extension hook: patch_kernel_for_driver
-	call_extension_method "patch_kernel_for_driver" <<- 'PATCH_KERNEL_FOR_DRIVER'
-		*allow to add drivers/patch kernel for drivers before applying the family patches*
-		Patch *series* (not normal family patches) are already applied.
-		Useful for migrating EXTRAWIFI-related stuff to individual extensions.
-		Receives `${version}` and `${kernel_work_dir}` as environment variables.
-	PATCH_KERNEL_FOR_DRIVER
-
-	# applies a series of patches, in directory order, from multiple directories (default/"user" patches)
-	# @TODO: I believe using the $BOARD here is the most confusing thing in the whole of Armbian. It should be disabled.
-	# @TODO: Armbian-built kernels don't vary per-board, but only per "$ARCH-$LINUXFAMILY-$BRANCH"
-	# @TODO: allowing for board-specific kernel patches creates insanity. uboot is enough.
-	fasthash_branch "patches-${KERNELPATCHDIR}-$BRANCH"
-	advanced_patch "kernel" "$KERNELPATCHDIR" "$BOARD" "" "$BRANCH" "$LINUXFAMILY-$BRANCH" # calls process_patch_file; "target" is empty there
-
-	fasthash_debug "finish"
-	finish_fasthash "kernel" # this reports the final hash and creates a git branch to the build ID. All modifications committed.
-}
-
-function kernel_config() {
-	# re-read kernel version after patching
-	version=$(grab_version "$kernel_work_dir")
-
-	display_alert "Compiling $BRANCH kernel" "$version" "info"
-
-	# compare with the architecture of the current Debian node
-	# if it matches, we use the system compiler
-	if dpkg-architecture -e "${ARCH}"; then
-		display_alert "Native compilation" "target ${ARCH} on host $(dpkg --print-architecture)"
-	else
-		display_alert "Cross compilation" "target ${ARCH} on host $(dpkg --print-architecture)"
-		toolchain=$(find_toolchain "$KERNEL_COMPILER" "$KERNEL_USE_GCC")
-		[[ -z $toolchain ]] && exit_with_error "Could not find required toolchain" "${KERNEL_COMPILER}gcc $KERNEL_USE_GCC"
-	fi
-
-	kernel_compiler_version="$(eval env PATH="${toolchain}:${PATH}" "${KERNEL_COMPILER}gcc" -dumpfullversion -dumpversion)"
-	display_alert "Compiler version" "${KERNEL_COMPILER}gcc ${kernel_compiler_version}" "info"
-
-	# copy kernel config
-	local COPY_CONFIG_BACK_TO=""
-
-	if [[ $KERNEL_KEEP_CONFIG == yes && -f "${DEST}"/config/$LINUXCONFIG.config ]]; then
-		display_alert "Using previous kernel config" "${DEST}/config/$LINUXCONFIG.config" "info"
-		run_host_command_logged cp -pv "${DEST}/config/${LINUXCONFIG}.config" .config
-	else
-		if [[ -f $USERPATCHES_PATH/$LINUXCONFIG.config ]]; then
-			display_alert "Using kernel config provided by user" "userpatches/$LINUXCONFIG.config" "info"
-			run_host_command_logged cp -pv "${USERPATCHES_PATH}/${LINUXCONFIG}.config" .config
-		elif [[ -f "${USERPATCHES_PATH}/config/kernel/${LINUXCONFIG}.config" ]]; then
-			display_alert "Using kernel config provided by user in config/kernel folder" "config/kernel/${LINUXCONFIG}.config" "info"
-			run_host_command_logged cp -pv "${USERPATCHES_PATH}/config/kernel/${LINUXCONFIG}.config" .config
-		else
-			display_alert "Using kernel config file" "config/kernel/$LINUXCONFIG.config" "info"
-			run_host_command_logged cp -pv "${SRC}/config/kernel/${LINUXCONFIG}.config" .config
-			COPY_CONFIG_BACK_TO="${SRC}/config/kernel/${LINUXCONFIG}.config"
-		fi
-	fi
-
-	# Store the .config modification date at this time, for restoring later. Otherwise make sees it as changed and rebuilds.
-	local kernel_config_mtime
-	kernel_config_mtime=$(get_file_modification_time ".config")
-
-	call_extension_method "custom_kernel_config" <<- 'CUSTOM_KERNEL_CONFIG'
-		*Kernel .config is in place, still clean from git version*
-		Called after ${LINUXCONFIG}.config is put in place (.config).
-		Before olddefconfig or any other Kconfig make target is called.
-		A good place to customize the .config directly.
-	CUSTOM_KERNEL_CONFIG
-
-	# hack for OdroidXU4. Copy firmware files.
-	if [[ $BOARD == odroidxu4 ]]; then
-		mkdir -p "${kernel_work_dir}/firmware/edid"
-		cp -p "${SRC}"/packages/blobs/odroidxu4/*.bin "${kernel_work_dir}/firmware/edid"
-	fi
-
-	display_alert "Kernel configuration" "${LINUXCONFIG}" "info"
-
-	if [[ $KERNEL_CONFIGURE != yes ]]; then
-		run_kernel_make olddefconfig # @TODO: what is this? does it fuck up dates?
-	else
-		display_alert "Starting (non-interactive) kernel olddefconfig" "${LINUXCONFIG}" "debug"
-		run_kernel_make olddefconfig
-
-		# No logging for this; it's a UI piece.
-		display_alert "Starting (interactive) kernel ${KERNEL_MENUCONFIG:-menuconfig}" "${LINUXCONFIG}" "debug"
-		run_kernel_make_dialog "${KERNEL_MENUCONFIG:-menuconfig}"
-
-		# Capture the new date. Otherwise changes are not detected by make.
-		kernel_config_mtime=$(get_file_modification_time ".config")
-
-		# store kernel config in easily reachable place
-		mkdir -p "${DEST}"/config
-		display_alert "Exporting new kernel config" "$DEST/config/$LINUXCONFIG.config" "info"
-		run_host_command_logged cp -pv .config "${DEST}/config/${LINUXCONFIG}.config"
-
-		# store back into original LINUXCONFIG too, if it came from there, so it's pending commits when done.
-		[[ "${COPY_CONFIG_BACK_TO}" != "" ]] && run_host_command_logged cp -pv .config "${COPY_CONFIG_BACK_TO}"
-
-		# export defconfig too if requested
-		if [[ $KERNEL_EXPORT_DEFCONFIG == yes ]]; then
-			run_kernel_make savedefconfig
-			[[ -f defconfig ]] && run_host_command_logged cp -pv defconfig "${DEST}/config/${LINUXCONFIG}.defconfig"
-		fi
-	fi
-
-	call_extension_method "custom_kernel_config_post_defconfig" <<- 'CUSTOM_KERNEL_CONFIG_POST_DEFCONFIG'
-		*Kernel .config is in place, already processed by Armbian*
-		Called after ${LINUXCONFIG}.config is put in place (.config).
-		After olddefconfig and any other Kconfig make target has been called.
-		A good place to customize the .config last-minute.
-	CUSTOM_KERNEL_CONFIG_POST_DEFCONFIG
-
-	# Restore the date of .config. The delta above is a pure function, theoretically.
-	set_files_modification_time "${kernel_config_mtime}" ".config"
-}

function kernel_package_source() {
	[[ "${BUILD_KSRC}" != "yes" ]] && return 0
@@ -345,7 +84,7 @@ function kernel_package_source() {
	run_host_command_logged cp -v COPYING "${sources_pkg_dir}/usr/share/doc/linux-source-${version}-${LINUXFAMILY}/LICENSE"

	display_alert "Compressing sources for the linux-source package" "exporting from git" "info"
-	cd "${kernel_work_dir}"
+	cd "${kernel_work_dir}" || exit_with_error "Can't cd to kernel_work_dir: ${kernel_work_dir}"

	local tar_prefix="${version}/"
	local output_tarball="${sources_pkg_dir}/usr/src/linux-source-${version}-${LINUXFAMILY}.tar.zst"
@@ -376,7 +115,7 @@ function kernel_package_source() {
function kernel_build_and_package() {
	local ts=${SECONDS}

-	cd "${kernel_work_dir}"
+	cd "${kernel_work_dir}" || exit_with_error "Can't cd to kernel_work_dir: ${kernel_work_dir}"

	local -a build_targets=("all") # "all" builds the vmlinux/Image/Image.gz default for the ${ARCH}
	declare kernel_dest_install_dir
@@ -413,7 +152,7 @@ function kernel_build_and_package() {
	run_kernel_make_long_running "${install_make_params_quoted[@]@Q}" "${build_targets[@]}"
	fasthash_debug "build"

-	cd "${kernel_work_dir}"
+	cd "${kernel_work_dir}" || exit_with_error "Can't cd to kernel_work_dir: ${kernel_work_dir}"
	prepare_kernel_packaging_debs "${kernel_work_dir}" "${kernel_dest_install_dir}" "${version}" kernel_install_dirs

	display_alert "Kernel built and packaged in" "$((SECONDS - ts)) seconds - ${version}-${LINUXFAMILY}" "info"
@@ -1,48 +1,44 @@
#!/usr/bin/env bash
-# advanced_patch <dest> <family> <board> <target> <branch> <description>
+# advanced_patch <patch_kind> <{patch_dir}> <board> <target> <branch> <description>
#
# parameters:
-# <dest>: u-boot, kernel, atf
-# <family>: u-boot: u-boot, u-boot-neo; kernel: sun4i-default, sunxi-next, ...
+# <patch_kind>: u-boot, kernel, atf
+# <{patch_dir}>: u-boot: u-boot, u-boot-neo; kernel: sun4i-default, sunxi-next, ...
# <board>: cubieboard, cubieboard2, cubietruck, ...
# <target>: optional subdirectory
# <description>: additional description text
#
# priority:
# $USERPATCHES_PATH/<dest>/<family>/target_<target>
# $USERPATCHES_PATH/<dest>/<family>/board_<board>
# $USERPATCHES_PATH/<dest>/<family>/branch_<branch>
# $USERPATCHES_PATH/<dest>/<family>
# $SRC/patch/<dest>/<family>/target_<target>
# $SRC/patch/<dest>/<family>/board_<board>
# $SRC/patch/<dest>/<family>/branch_<branch>
# $SRC/patch/<dest>/<family>
#
-advanced_patch() {
-	local dest=$1
-	local family=$2
-	local board=$3
-	local target=$4
-	local branch=$5
-	local description=$6
-
-	display_alert "Started patching process for" "$dest $description" "info"
-	display_alert "Looking for user patches in" "userpatches/$dest/$family" "info"
+# calls:
+# ${patch_kind} ${patch_dir} $board $target $branch $description
+# kernel: advanced_patch "kernel" "$KERNELPATCHDIR" "$BOARD" "" "$BRANCH" "$LINUXFAMILY-$BRANCH"
+# u-boot: advanced_patch "u-boot" "$BOOTPATCHDIR" "$BOARD" "$target_patchdir" "$BRANCH" "${LINUXFAMILY}-${BOARD}-${BRANCH}"
+function advanced_patch() {
+	local patch_kind="$1"
+	local patch_dir="$2"
+	local board="$3"
+	local target="$4"
+	local branch="$5"
+	local description="$6"
+
+	display_alert "Started patching process for" "${patch_kind} $description" "info"
+	display_alert "Looking for user patches in" "userpatches/${patch_kind}/${patch_dir}" "info"

	local names=()
	local dirs=(
-		"$USERPATCHES_PATH/$dest/$family/target_${target}:[\e[33mu\e[0m][\e[34mt\e[0m]"
-		"$USERPATCHES_PATH/$dest/$family/board_${board}:[\e[33mu\e[0m][\e[35mb\e[0m]"
-		"$USERPATCHES_PATH/$dest/$family/branch_${branch}:[\e[33mu\e[0m][\e[33mb\e[0m]"
-		"$USERPATCHES_PATH/$dest/$family:[\e[33mu\e[0m][\e[32mc\e[0m]"
-		"$SRC/patch/$dest/$family/target_${target}:[\e[32ml\e[0m][\e[34mt\e[0m]"
-		"$SRC/patch/$dest/$family/board_${board}:[\e[32ml\e[0m][\e[35mb\e[0m]"
-		"$SRC/patch/$dest/$family/branch_${branch}:[\e[32ml\e[0m][\e[33mb\e[0m]"
-		"$SRC/patch/$dest/$family:[\e[32ml\e[0m][\e[32mc\e[0m]"
+		"$USERPATCHES_PATH/${patch_kind}/${patch_dir}/target_${target}:[\e[33mu\e[0m][\e[34mt\e[0m]"
+		"$USERPATCHES_PATH/${patch_kind}/${patch_dir}/board_${board}:[\e[33mu\e[0m][\e[35mb\e[0m]"
+		"$USERPATCHES_PATH/${patch_kind}/${patch_dir}/branch_${branch}:[\e[33mu\e[0m][\e[33mb\e[0m]"
+		"$USERPATCHES_PATH/${patch_kind}/${patch_dir}:[\e[33mu\e[0m][\e[32mc\e[0m]"
+
+		"$SRC/patch/${patch_kind}/${patch_dir}/target_${target}:[\e[32ml\e[0m][\e[34mt\e[0m]" # used for u-boot "spi" stuff
+		"$SRC/patch/${patch_kind}/${patch_dir}/board_${board}:[\e[32ml\e[0m][\e[35mb\e[0m]" # used for u-boot board-specific stuff
+		"$SRC/patch/${patch_kind}/${patch_dir}/branch_${branch}:[\e[32ml\e[0m][\e[33mb\e[0m]" # NOT used, I think.
+		"$SRC/patch/${patch_kind}/${patch_dir}:[\e[32ml\e[0m][\e[32mc\e[0m]" # used for everything
	)
	local links=()

	# required for "for" command
	# @TODO: these shopts leak for the rest of the build script! either make global, or restore them after this function
	shopt -s nullglob dotglob
	# get patch file names
	for dir in "${dirs[@]}"; do
@@ -56,6 +52,7 @@ advanced_patch() {
		[[ -n $findlinks ]] && readarray -d '' links < <(find "${findlinks}" -maxdepth 1 -type f -follow -print -iname "*.patch" -print | grep "\.patch$" | sed "s|${dir%%:*}/||g" 2>&1)
		fi
	done

	# merge static and linked
	names=("${names[@]}" "${links[@]}")
	# remove duplicates
@@ -30,11 +30,18 @@ function uboot_prepare_git() {
	display_alert "Downloading sources" "u-boot; BOOTSOURCEDIR=${BOOTSOURCEDIR}" "git"

+	# This var will be set by fetch_from_repo().
+	declare checked_out_revision="undetermined"
+
	GIT_FIXED_WORKDIR="${BOOTSOURCEDIR}" \
		GIT_BARE_REPO_FOR_WORKTREE="${uboot_git_bare_tree}" \
		GIT_BARE_REPO_INITIAL_BRANCH="master" \
		GIT_SKIP_SUBMODULES="${UBOOT_GIT_SKIP_SUBMODULES}" \
		fetch_from_repo "$BOOTSOURCE" "$BOOTDIR" "$BOOTBRANCH" "yes" # fetch_from_repo <url> <dir> <ref> <subdir_flag>
+
+	# Sets the outer scope variable
+	uboot_git_revision="${checked_out_revision}"
+	display_alert "Using u-boot revision SHA1" "${uboot_git_revision}"
	fi
	return 0
}
lib/functions/compilation/uboot-patching.sh (new file, 32 lines)
@@ -0,0 +1,32 @@
function uboot_main_patching_python() {
	prepare_pip_packages_for_python_tools

	temp_file_for_output="$(mktemp)" # Get a temporary file for the output.
	# array with all parameters; will be auto-quoted by bash's @Q modifier below
	declare -a params_quoted=(
		"LOG_DEBUG=${SHOW_DEBUG}" # Logging level for python.
		"SRC=${SRC}" # Armbian root
		"OUTPUT=${temp_file_for_output}" # Output file for the python script.
		"ASSET_LOG_BASE=$(print_current_asset_log_base_file)" # base file name for the asset log; used to write .md summaries.
		"PATCH_TYPE=u-boot" # one of: kernel, u-boot, atf
		"PATCH_DIRS_TO_APPLY=${BOOTPATCHDIR}" # A space-separated list of directories to apply...
		"BOARD=${BOARD}" # BOARD is needed for the patchset selection logic; mostly for u-boot.
		"TARGET=${target_patchdir}" # TARGET is needed for u-boot's SPI/SATA etc selection logic.
		"USERPATCHES_PATH=${USERPATCHES_PATH}" # Needed to find the userpatches.
		# What to do?
		"APPLY_PATCHES=yes" # Apply the patches to the filesystem. Does not imply git committing. If no, still exports the hash.
		"PATCHES_TO_GIT=${PATCHES_TO_GIT:-no}" # Commit to git after applying the patches.
		"REWRITE_PATCHES=${REWRITE_PATCHES:-no}" # Rewrite the original patch files after git committing.
		# Git dir, revision, and target branch
		"GIT_WORK_DIR=${uboot_work_dir}" # "Where to apply patches?"
		"BASE_GIT_REVISION=${uboot_git_revision}" # The revision we're building/patching. Python will reset and clean to this.
		"BRANCH_FOR_PATCHES=u-boot-${BRANCH}-${BOARD}" # When applying patches-to-git, use this branch.
	)
	display_alert "Calling Python patching script" "for u-boot target" "info"
	run_host_command_logged env -i "${params_quoted[@]@Q}" python3 "${SRC}/lib/tools/patching.py"
	run_host_command_logged cat "${temp_file_for_output}"
	# shellcheck disable=SC1090
	source "${temp_file_for_output}" # SOURCE IT!
	run_host_command_logged rm -f "${temp_file_for_output}"
	return 0
}
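
The `OUTPUT=`-tempfile-then-`source` dance in both `*_main_patching_python` functions is the bash/Python bridge: Python writes bash declarations into the file named by `OUTPUT`, and the bash caller sources them back. A sketch of the Python side; illustrative only, not the actual patching.py:

```python
# Sketch of the Python side of the OUTPUT contract used by
# kernel_main_patching_python / uboot_main_patching_python.
import os
import shlex

output_file = os.environ["OUTPUT"]  # the mktemp file created by bash
results = {"PATCHING_COMPLETE": "yes"}  # hypothetical exported values
with open(output_file, "w") as bash:
    for name, value in results.items():
        bash.write(f"declare -g {name}={shlex.quote(value)}\n")  # `source`d by the caller
```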
@@ -18,41 +18,19 @@ function compile_uboot_target() {
	local uboot_work_dir=""
	uboot_work_dir="$(pwd)"

	# needed for multiple targets and for calling compile_uboot directly
-	display_alert "${uboot_prefix} Checking out to clean sources" "{$BOOTSOURCEDIR} for ${target_make}"
-	git checkout -f -q HEAD # @TODO: this assumes way too much. should call the wrapper again, not directly
+	declare -I uboot_git_revision # use outer scope variable value
+	display_alert "${uboot_prefix} Checking out to clean sources SHA1 ${uboot_git_revision}" "{$BOOTSOURCEDIR} for ${target_make}"
+	git checkout -f -q "${uboot_git_revision}"

	# grab the pre-patch version from the Makefile
	local uboot_prepatch_version=""
	uboot_prepatch_version=$(grab_version "${uboot_work_dir}")

-	# grab the mtime of the revision.
-	declare checked_out_revision_ts="" checked_out_revision_mtime=""
-	checked_out_revision_ts="$(git log -1 --pretty=%ct "HEAD")" # unix timestamp of the commit date
-	checked_out_revision_mtime="$(date +%Y%m%d%H%M%S -d "@${checked_out_revision_ts}")" # convert timestamp to local date/time
-	display_alert "u-boot: checked_out_revision_mtime set!" "${checked_out_revision_mtime} - ${checked_out_revision_ts}" "git"
-
-	# mark the minimum mtime for uboot patches
-	declare -i patch_minimum_target_mtime="${checked_out_revision_mtime}"
-	declare -i patch_dir_mtime="${patch_minimum_target_mtime}"
-	local patch_dir="${SRC}/patch/u-boot/${BOOTPATCHDIR}"
-
-	if [[ -d "${patch_dir}" ]]; then
-		patch_dir_mtime=$(get_dir_modification_time "${patch_dir}")
-		display_alert "uboot: patch_dir mtime:" "${patch_dir_mtime}" "debug"
-		patch_minimum_target_mtime=$((patch_dir_mtime > patch_minimum_target_mtime ? patch_dir_mtime : patch_minimum_target_mtime))
-		display_alert "uboot: patch_minimum_target_mtime after patch_dir mtime:" "${patch_minimum_target_mtime}" "debug"
-	fi
-
-	# @TODO: for u-boot, there's also the BOARD patch directory, which should also be taken into account.
-
	initialize_fasthash "u-boot-${uboot_target_counter}" "unknown-uboot-hash" "${uboot_prepatch_version}" "$(pwd)"
	fasthash_debug "init"

	maybe_make_clean_uboot

-	fasthash_branch "patches-${uboot_target_counter}-${BOOTPATCHDIR}-$BRANCH"
-	advanced_patch "u-boot" "$BOOTPATCHDIR" "$BOARD" "$target_patchdir" "$BRANCH" "${LINUXFAMILY}-${BOARD}-${BRANCH}"
+	# Python patching for u-boot!
+	do_with_hooks uboot_main_patching_python

	# create patch for manual source changes
	[[ $CREATE_PATCHES == yes ]] && userpatch_create "u-boot"
@@ -6,8 +6,10 @@ function aggregate_all_packages() {

	# array with all parameters; will be auto-quoted by bash's @Q modifier below
	declare -a aggregation_params_quoted=(
+		"LOG_DEBUG=${SHOW_DEBUG}" # Logging level for python.
		"SRC=${SRC}"
		"OUTPUT=${temp_file_for_aggregation}"
+		"ASSET_LOG_BASE=$(print_current_asset_log_base_file)" # base file name for the asset log; used to write .md summaries.

		# For the main packages, and others; main packages are not mixed with BOARD or DESKTOP packages.
		# Results:
lib/functions/general/python-tools.sh (new file, 25 lines)
@@ -0,0 +1,25 @@
function early_prepare_pip3_dependencies_for_python_tools() {
	declare -a -g python3_pip_dependencies=(
		"unidiff==0.7.4" # for parsing unified diffs
		"GitPython==3.1.29" # for manipulating git repos
		"unidecode==1.3.6" # for converting strings to ASCII
		"coloredlogs==15.0.1" # for colored logging
	)
	return 0
}

function prepare_pip_packages_for_python_tools() {
	early_prepare_pip3_dependencies_for_python_tools

	declare -g PYTHON_TOOLS_PIP_PACKAGES_DONE="${PYTHON_TOOLS_PIP_PACKAGES_DONE:-no}"
	if [[ "${PYTHON_TOOLS_PIP_PACKAGES_DONE}" == "yes" ]]; then
		display_alert "Required Python packages" "already installed" "info"
		return 0
	fi

	# @TODO: virtualenv? system-wide for now
	display_alert "Installing required Python packages" "via pip3" "info"
	run_host_command_logged pip3 install "${python3_pip_dependencies[@]}"

	return 0
}
@@ -111,9 +111,11 @@ function docker_cli_prepare() {
	enable_all_extensions_builtin_and_user
	initialize_extension_manager # initialize the extension manager.
-	declare -a -g host_dependencies=()
+	declare -a -g host_dependencies=() python3_pip_dependencies=()
+	early_prepare_pip3_dependencies_for_python_tools
	early_prepare_host_dependencies
-	display_alert "Pre-game dependencies" "${host_dependencies[*]}" "debug"
+	display_alert "Pre-game host dependencies" "${host_dependencies[*]}" "debug"
+	display_alert "Pre-game pip3 dependencies" "${python3_pip_dependencies[*]}" "debug"

	#############################################################################################################
	# Stop here if Docker can't be used at all.
@@ -210,6 +212,8 @@ function docker_cli_prepare() {
	RUN echo "--> CACHE MISS IN DOCKERFILE: apt packages." && \
		DEBIAN_FRONTEND=noninteractive apt-get -y update && \
		DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends ${BASIC_DEPS[@]} ${host_dependencies[@]}
+	RUN echo "--> CACHE MISS IN DOCKERFILE: pip3 packages." && \
+		pip3 install ${python3_pip_dependencies[@]}
	RUN sed -i 's/# en_US.UTF-8/en_US.UTF-8/' /etc/locale.gen
	RUN locale-gen
	WORKDIR ${DOCKER_ARMBIAN_TARGET_PATH}
@@ -228,9 +228,12 @@ function early_prepare_host_dependencies() {

		# python3 stuff (eg, for modern u-boot)
		python3-dev python3-distutils python3-setuptools
		# python3 pip (for Armbian's Python utilities) @TODO virtualenv?
		python3-pip

		# python2, including headers, mostly used by some u-boot builds (2017 et al, odroidxu4 and others).
		python2 python2-dev python-setuptools
		python2 python2-dev
		# Attention: 'python-setuptools' (Python2's setuptools) does not exist in Debian Sid. Use Python3 instead.

		# systemd-container brings in systemd-nspawn, which is used by the buildpkg functionality
		# systemd-container # @TODO: bring this back eventually. I don't think trying to use those inside a container is a good idea.

@@ -99,6 +99,7 @@ function main_default_build_single() {
	fi
	# @TODO: refactor this construct. we use it too many times.
	if [[ "${REPOSITORY_INSTALL}" != *u-boot* ]]; then
		declare uboot_git_revision="not_determined_yet"
		LOG_SECTION="uboot_prepare_git" do_with_logging_unless_user_terminal uboot_prepare_git
		LOG_SECTION="compile_uboot" do_with_logging compile_uboot
	fi

@@ -7,37 +7,43 @@
# -- rpardini, 23/11/2022

import hashlib
import logging
import os

import common.aggregation_utils as util
import common.armbian_utils as armbian_utils
from common.md_asset_log import SummarizedMarkdownWriter

# Prepare logging
armbian_utils.setup_logging()
log: logging.Logger = logging.getLogger("aggregation")

# Read SRC from the environment, treat it.
armbian_build_directory = util.get_from_env_or_bomb("SRC")
armbian_build_directory = armbian_utils.get_from_env_or_bomb("SRC")
if not os.path.isdir(armbian_build_directory):
    raise Exception("SRC is not a directory")

# OUTPUT from the environment, treat it.
output_file = util.get_from_env_or_bomb("OUTPUT")
with open(output_file, "w") as out:
    out.write("")
output_file = armbian_utils.get_from_env_or_bomb("OUTPUT")
with open(output_file, "w") as bash:
    bash.write("")

BUILD_DESKTOP = util.yes_or_no_or_bomb(util.get_from_env_or_bomb("BUILD_DESKTOP"))
BUILD_DESKTOP = armbian_utils.yes_or_no_or_bomb(armbian_utils.get_from_env_or_bomb("BUILD_DESKTOP"))
INCLUDE_EXTERNAL_PACKAGES = True
ARCH = util.get_from_env_or_bomb("ARCH")
DESKTOP_ENVIRONMENT = util.get_from_env("DESKTOP_ENVIRONMENT")
DESKTOP_ENVIRONMENT_CONFIG_NAME = util.get_from_env("DESKTOP_ENVIRONMENT_CONFIG_NAME")
RELEASE = util.get_from_env_or_bomb("RELEASE")  # "kinetic"
LINUXFAMILY = util.get_from_env_or_bomb("LINUXFAMILY")
BOARD = util.get_from_env_or_bomb("BOARD")
USERPATCHES_PATH = util.get_from_env_or_bomb("USERPATCHES_PATH")
ARCH = armbian_utils.get_from_env_or_bomb("ARCH")
DESKTOP_ENVIRONMENT = armbian_utils.get_from_env("DESKTOP_ENVIRONMENT")
DESKTOP_ENVIRONMENT_CONFIG_NAME = armbian_utils.get_from_env("DESKTOP_ENVIRONMENT_CONFIG_NAME")
RELEASE = armbian_utils.get_from_env_or_bomb("RELEASE")  # "kinetic"
LINUXFAMILY = armbian_utils.get_from_env_or_bomb("LINUXFAMILY")
BOARD = armbian_utils.get_from_env_or_bomb("BOARD")
USERPATCHES_PATH = armbian_utils.get_from_env_or_bomb("USERPATCHES_PATH")

# Show the environment
#print("Environment:")
#for k, v in os.environ.items():
#    print("{}={}".format(k, v))
armbian_utils.show_incoming_environment()

util.SELECTED_CONFIGURATION = util.get_from_env_or_bomb("SELECTED_CONFIGURATION")  # "cli_standard"
util.DESKTOP_APPGROUPS_SELECTED = util.parse_env_for_tokens("DESKTOP_APPGROUPS_SELECTED")  # ["browsers", "chat"]
util.SELECTED_CONFIGURATION = armbian_utils.get_from_env_or_bomb("SELECTED_CONFIGURATION")  # "cli_standard"
util.DESKTOP_APPGROUPS_SELECTED = armbian_utils.parse_env_for_tokens(
    "DESKTOP_APPGROUPS_SELECTED")  # ["browsers", "chat"]
util.SRC = armbian_build_directory

util.AGGREGATION_SEARCH_ROOT_ABSOLUTE_DIRS = [
@@ -152,43 +158,44 @@ if BUILD_DESKTOP:
# ----------------------------------------------------------------------------------------------------------------------


with open(output_file, "w") as out:
    # with sys.stdout as f:
    out.write("#!/bin/env bash\n")
    out.write(
        util.prepare_bash_output_array_for_list(
            "AGGREGATED_PACKAGES_DEBOOTSTRAP", AGGREGATED_PACKAGES_DEBOOTSTRAP))
    out.write(util.prepare_bash_output_array_for_list(
        "AGGREGATED_PACKAGES_ROOTFS", AGGREGATED_PACKAGES_ROOTFS))
    out.write(util.prepare_bash_output_array_for_list(
        "AGGREGATED_PACKAGES_IMAGE", AGGREGATED_PACKAGES_IMAGE))
    out.write(util.prepare_bash_output_array_for_list(
        "AGGREGATED_PACKAGES_DESKTOP", AGGREGATED_PACKAGES_DESKTOP))
output_lists: list[tuple[str, str, object, object]] = [
    ("debootstrap", "AGGREGATED_PACKAGES_DEBOOTSTRAP", AGGREGATED_PACKAGES_DEBOOTSTRAP, None),
    ("rootfs", "AGGREGATED_PACKAGES_ROOTFS", AGGREGATED_PACKAGES_ROOTFS, None),
    ("image", "AGGREGATED_PACKAGES_IMAGE", AGGREGATED_PACKAGES_IMAGE, None),
    ("desktop", "AGGREGATED_PACKAGES_DESKTOP", AGGREGATED_PACKAGES_DESKTOP, None),
    ("apt-sources", "AGGREGATED_APT_SOURCES", AGGREGATED_APT_SOURCES, util.encode_source_base_path_extra)
]

with open(output_file, "w") as bash, SummarizedMarkdownWriter("aggregation.md", "Aggregation") as md:
    bash.write("#!/bin/env bash\n")

    # loop over the aggregated lists
    for id, name, value, extra_func in output_lists:
        stats = util.prepare_bash_output_array_for_list(bash, md, name, value, extra_func)
        md.add_summary(f"{id}: {stats['number_items']}")

    # The rootfs hash (md5) is used as a cache key.
    out.write(f"declare -g -r AGGREGATED_ROOTFS_HASH='{AGGREGATED_ROOTFS_HASH}'\n")
    bash.write(f"declare -g -r AGGREGATED_ROOTFS_HASH='{AGGREGATED_ROOTFS_HASH}'\n")

    # Special case for components: debootstrap also wants a list of components, comma separated.
    out.write(
    bash.write(
        f"declare -g -r AGGREGATED_DEBOOTSTRAP_COMPONENTS_COMMA='{AGGREGATED_DEBOOTSTRAP_COMPONENTS_COMMA}'\n")

    # Single string stuff for desktop packages postinst's and preparation. @TODO use functions instead of eval.
    out.write(util.prepare_bash_output_single_string(
    bash.write(util.prepare_bash_output_single_string(
        "AGGREGATED_DESKTOP_POSTINST", AGGREGATED_DESKTOP_POSTINST))
    out.write(util.prepare_bash_output_single_string(
    bash.write(util.prepare_bash_output_single_string(
        "AGGREGATED_DESKTOP_CREATE_DESKTOP_PACKAGE", AGGREGATED_DESKTOP_CREATE_DESKTOP_PACKAGE))
    out.write(util.prepare_bash_output_single_string(
    bash.write(util.prepare_bash_output_single_string(
        "AGGREGATED_DESKTOP_BSP_POSTINST", AGGREGATED_DESKTOP_BSP_POSTINST))
    out.write(util.prepare_bash_output_single_string(
    bash.write(util.prepare_bash_output_single_string(
        "AGGREGATED_DESKTOP_BSP_PREPARE", AGGREGATED_DESKTOP_BSP_PREPARE))

    # The apt sources.
    out.write(util.prepare_bash_output_array_for_list(
        "AGGREGATED_APT_SOURCES", AGGREGATED_APT_SOURCES, util.encode_source_base_path_extra))
    # 2) @TODO: Some removals...

    # 2) @TODO: Some removals...
    # aggregate_all_cli "packages.uninstall" " "
    # aggregate_all_desktop "packages.uninstall" " "
    # PACKAGE_LIST_UNINSTALL="$(cleanup_list aggregated_content)"
    # unset aggregated_content

    # aggregate_all_cli "packages.uninstall" " "
    # aggregate_all_desktop "packages.uninstall" " "
    # PACKAGE_LIST_UNINSTALL="$(cleanup_list aggregated_content)"
    # unset aggregated_content
log.debug(f"Done. Output written to {output_file}")

@@ -1,6 +1,11 @@
import fnmatch
import logging
import os

from . import armbian_utils as armbian_utils

log: logging.Logger = logging.getLogger("aggregation_utils")

AGGREGATION_SEARCH_ROOT_ABSOLUTE_DIRS = []
DEBOOTSTRAP_SEARCH_RELATIVE_DIRS = []
CLI_SEARCH_RELATIVE_DIRS = []
@@ -127,9 +132,9 @@ def remove_common_path_from_refs(merged):

# Let's produce a list from the environment variables, complete with the references.
def parse_env_for_list(env_name, fixed_ref=None):
    env_list = parse_env_for_tokens(env_name)
    env_list = armbian_utils.parse_env_for_tokens(env_name)
    if fixed_ref is None:
        refs = parse_env_for_tokens(env_name + "_REFS")
        refs = armbian_utils.parse_env_for_tokens(env_name + "_REFS")
        # Sanity check: the number of refs should be the same as the number of items in the list.
        if len(env_list) != len(refs):
            raise Exception(f"Expected {len(env_list)} refs for {env_name}, got {len(refs)}")
@@ -203,41 +208,6 @@ def aggregate_all_desktop(artifact, aggregation_function=aggregate_packages_from
    return aggregation_function(process_common_path_for_potentials(potential_paths))


def parse_env_for_tokens(env_name):
    result = []
    # Read the environment; if None, return an empty list.
    val = os.environ.get(env_name, None)
    if val is None:
        return result
    # tokenize val: split on whitespace, including line breaks.
    # trim whitespace from tokens.
    return [token for token in [token.strip() for token in (val.split())] if token != ""]


def get_from_env(env_name):
    value = os.environ.get(env_name, None)
    if value is not None:
        value = value.strip()
    return value


def get_from_env_or_bomb(env_name):
    value = get_from_env(env_name)
    if value is None:
        raise Exception(f"{env_name} environment var not set")
    if value == "":
        raise Exception(f"{env_name} environment var is empty")
    return value


def yes_or_no_or_bomb(value):
    if value == "yes":
        return True
    if value == "no":
        return False
    raise Exception(f"Expected yes or no, got {value}")


def join_refs_for_bash_single_string(refs):
    single_line_refs = []
    for ref in refs:
@@ -252,7 +222,27 @@ def join_refs_for_bash_single_string(refs):
    return " ".join(single_line_refs)


def prepare_bash_output_array_for_list(output_array_name, merged_list, extra_dict_function=None):
# @TODO this is shit make it less shit urgently
def join_refs_for_markdown_single_string(refs):
    single_line_refs = []
    for ref in refs:
        one_line = f" - `"
        if "operation" in ref and "line" in ref:
            one_line += ref["operation"] + ":" + ref["path"] + ":" + str(ref["line"])
        else:
            one_line += ref["path"]
        if "symlink_to" in ref:
            if ref["symlink_to"] is not None:
                one_line += ":symlink->" + ref["symlink_to"]
        one_line += "`\n"
        single_line_refs.append(one_line)
    return "".join(single_line_refs)


def prepare_bash_output_array_for_list(
        bash_writer, md_writer, output_array_name, merged_list, extra_dict_function=None):
    md_writer.write(f"### `{output_array_name}`\n")

    values_list = []
    explain_dict = {}
    extra_dict = {}
@@ -260,8 +250,8 @@ def prepare_bash_output_array_for_list(output_array_name, merged_list, extra_dic
        value = merged_list[key]
        # print(f"key: {key}, value: {value}")
        refs = value["refs"]
        # join the refs with a comma
        refs_str = join_refs_for_bash_single_string(refs)
        md_writer.write(f"- `{key}`: *{value['status']}*\n" + join_refs_for_markdown_single_string(refs))
        refs_str = join_refs_for_bash_single_string(refs)  # join the refs with a comma
        explain_dict[key] = refs_str
        if value["status"] != "remove":
            values_list.append(key)
@@ -288,8 +278,10 @@ def prepare_bash_output_array_for_list(output_array_name, merged_list, extra_dic
        extra_dict_decl = f"declare -r -g -A {output_array_name}_DICT=(\n{extra_list_bash}\n)\n"

    final_value = actual_var + "\n" + extra_dict_decl + "\n" + comma_var + "\n" + explain_var
    # print(final_value)
    return final_value
    bash_writer.write(final_value)

    # return some statistics for the summary
    return {"number_items": len(values_list)}


def prepare_bash_output_single_string(output_array_name, merged_list):

69
lib/tools/common/armbian_utils.py
Normal file
@@ -0,0 +1,69 @@
import logging
import os
import sys

log: logging.Logger = logging.getLogger("armbian_utils")


def parse_env_for_tokens(env_name):
    result = []
    # Read the environment; if None, return an empty list.
    val = os.environ.get(env_name, None)
    if val is None:
        return result
    # tokenize val: split on whitespace, including line breaks.
    # trim whitespace from tokens.
    return [token for token in [token.strip() for token in (val.split())] if token != ""]
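
As a quick illustration of what parse_env_for_tokens yields, a minimal sketch (the variable name and its value are made up):

# Hedged sketch of parse_env_for_tokens semantics; "DEMO_TOKENS" is hypothetical.
import os
os.environ["DEMO_TOKENS"] = "  browsers\nchat   tools  "
val = os.environ.get("DEMO_TOKENS")
tokens = [t for t in [t.strip() for t in val.split()] if t != ""]
assert tokens == ["browsers", "chat", "tools"]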


def get_from_env(env_name):
    value = os.environ.get(env_name, None)
    if value is not None:
        value = value.strip()
    return value


def get_from_env_or_bomb(env_name):
    value = get_from_env(env_name)
    if value is None:
        raise Exception(f"{env_name} environment var not set")
    if value == "":
        raise Exception(f"{env_name} environment var is empty")
    return value


def yes_or_no_or_bomb(value):
    if value == "yes":
        return True
    if value == "no":
        return False
    raise Exception(f"Expected yes or no, got {value}")


def show_incoming_environment():
    log.debug("--ENV-- Environment:")
    for key in os.environ:
        log.debug(f"--ENV-- {key}={os.environ[key]}")


def setup_logging():
    try:
        import coloredlogs
        level = "INFO"
        if get_from_env("LOG_DEBUG") == "yes":
            level = "DEBUG"
        format = "%(message)s"
        styles = {
            'trace': {'color': 'white', },
            'debug': {'color': 'white'},
            'info': {'color': 'white', 'bold': True},
            'warning': {'color': 'yellow', 'bold': True},
            'error': {'color': 'red'},
            'critical': {'bold': True, 'color': 'red'}
        }
        coloredlogs.install(level=level, stream=sys.stderr, isatty=True, fmt=format, level_styles=styles)
    except ImportError:
        level = logging.INFO
        if get_from_env("LOG_DEBUG") == "yes":
            level = logging.DEBUG
        logging.basicConfig(level=level, stream=sys.stderr)
49
lib/tools/common/md_asset_log.py
Normal file
@@ -0,0 +1,49 @@
import logging

from . import armbian_utils as armbian_utils

log: logging.Logger = logging.getLogger("md_asset_log")

ASSET_LOG_BASE = armbian_utils.get_from_env("ASSET_LOG_BASE")


def write_md_asset_log(file: str, contents: str):
    """Log a message to the asset log file."""
    if ASSET_LOG_BASE is None:
        log.debug(f"ASSET_LOG_BASE not defined; here's the contents:\n{contents}")
        return
    target_file = ASSET_LOG_BASE + file
    with open(target_file, "w") as asset_log:
        asset_log.write(contents)
    log.debug(f"- Wrote to {target_file}.")


class SummarizedMarkdownWriter:
    def __init__(self, file_name, title):
        self.file_name = file_name
        self.title = title
        self.summary: list[str] = []
        self.contents = ""

    def __enter__(self):
        return self

    def __exit__(self, *args):
        write_md_asset_log(self.file_name, self.get_summarized_markdown())
        log.info(f"Summary: {self.title}: {'; '.join(self.summary)}")

    def add_summary(self, summary):
        self.summary.append(summary)

    def write(self, text):
        self.contents += text

    # see https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/organizing-information-with-collapsed-sections
    def get_summarized_markdown(self):
        if len(self.title) == 0:
            raise Exception("Markdown Summary Title not set")
        if len(self.summary) == 0:
            raise Exception("Markdown Summary not set")
        if self.contents == "":
            raise Exception("Markdown Contents not set")
        return f"<details><summary>{self.title}: {'; '.join(self.summary)}</summary>\n<p>\n\n{self.contents}\n\n</p></details>\n"
611
lib/tools/common/patching_utils.py
Normal file
@@ -0,0 +1,611 @@
#! /bin/env python3
import email.utils
import logging
import mailbox
import os
import re
import subprocess

import git  # GitPython
from unidecode import unidecode
from unidiff import PatchSet

log: logging.Logger = logging.getLogger("patching_utils")


class PatchRootDir:
    def __init__(self, abs_dir, root_type, patch_type, root_dir):
        self.abs_dir = abs_dir
        self.root_type = root_type
        self.patch_type = patch_type
        self.root_dir = root_dir


class PatchSubDir:
    def __init__(self, rel_dir, sub_type):
        self.rel_dir = rel_dir
        self.sub_type = sub_type


class PatchDir:
    def __init__(self, patch_root_dir: PatchRootDir, patch_sub_dir: PatchSubDir, abs_root_dir: str):
        self.patch_root_dir: PatchRootDir = patch_root_dir
        self.patch_sub_dir: PatchSubDir = patch_sub_dir
        self.full_dir = os.path.realpath(os.path.join(self.patch_root_dir.abs_dir, self.patch_sub_dir.rel_dir))
        self.rel_dir = os.path.relpath(self.full_dir, abs_root_dir)
        self.root_type = self.patch_root_dir.root_type
        self.sub_type = self.patch_sub_dir.sub_type
        self.patch_files: list[PatchFileInDir] = []

    def __str__(self) -> str:
        return "<PatchDir: full_dir:'" + str(self.full_dir) + "'>"

    def find_patch_files(self):
        # do nothing if self.full_dir is not a real, existing, directory
        if not os.path.isdir(self.full_dir):
            return

        # If the directory contains a series.conf file.
        series_conf_path = os.path.join(self.full_dir, "series.conf")
        if os.path.isfile(series_conf_path):
            patches_in_series = self.parse_series_conf(series_conf_path)
            for patch_file_name in patches_in_series:
                patch_file_path = os.path.join(self.full_dir, patch_file_name)
                if os.path.isfile(patch_file_path):
                    patch_file = PatchFileInDir(patch_file_path, self)
                    self.patch_files.append(patch_file)
                else:
                    raise Exception(
                        f"series.conf file {series_conf_path} contains a patch file {patch_file_name} that does not exist")

        # Find the files in self.full_dir that end in .patch; do not consider subdirectories.
        # Add them to self.patch_files.
        for file in os.listdir(self.full_dir):
            # noinspection PyTypeChecker
            if file.endswith(".patch"):
                self.patch_files.append(PatchFileInDir(file, self))

    @staticmethod
    def parse_series_conf(series_conf_path):
        patches_in_series = []
        with open(series_conf_path, "r") as series_conf_file:
            for line in series_conf_file:
                line = line.strip()
                if line.startswith("#"):
                    continue
                # if line begins with "-", skip it
                if line.startswith("-"):
                    continue
                if line == "":
                    continue
                patches_in_series.append(line)
        return patches_in_series
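
To make the series.conf semantics concrete, a small sketch under made-up contents: comments, entries disabled with a leading "-", and blank lines are skipped; order is preserved.

# Hedged sketch of parse_series_conf semantics; the sample lines are made up.
sample = ["# comment", "-disabled.patch", "", "first.patch", "second.patch"]
kept = []
for line in sample:
    line = line.strip()
    if line.startswith("#") or line.startswith("-") or line == "":
        continue
    kept.append(line)
assert kept == ["first.patch", "second.patch"]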


class PatchFileInDir:
    def __init__(self, file_name, patch_dir: PatchDir):
        self.file_name = file_name
        self.patch_dir: PatchDir = patch_dir
        self.file_base_name = os.path.splitext(self.file_name)[0]

    def __str__(self) -> str:
        desc: str = f"<PatchFileInDir: file_name:'{self.file_name}', dir:{self.patch_dir.__str__()} >"
        return desc

    def full_file_path(self):
        return os.path.join(self.patch_dir.full_dir, self.file_name)

    def relative_to_src_filepath(self):
        return os.path.join(self.patch_dir.rel_dir, self.file_name)

    def split_patches_from_file(self) -> list["PatchInPatchFile"]:
        counter: int = 1
        mbox: mailbox.mbox = mailbox.mbox(self.full_file_path())
        is_invalid_mbox: bool = False

        # Sanity check: if the file is understood as mailbox, make sure the first line is a valid "From " line,
        # and has the magic marker 'Mon Sep 17 00:00:00 2001' in it; otherwise, it could be a combined
        # bare patch + mbox-formatted patch in a single file, and we'd lose the bare patch.
        if len(mbox) > 0:
            contents, contents_read_problems = read_file_as_utf8(self.full_file_path())
            first_line = contents.splitlines()[0].strip()
            if not first_line.startswith("From ") or "Mon Sep 17 00:00:00 2001" not in first_line:
                # is_invalid_mbox = True  # we might try to recover from this if there's too many
                # log.error(
                raise Exception(
                    f"File {self.full_file_path()} seems to be a valid mbox file,"
                    f" but it begins with '{first_line}'; in mbox, the 1st line should be"
                    f" a valid From: header with the magic date.")

        # if there are no emails, it's a diff-only patch file.
        if is_invalid_mbox or len(mbox) == 0:
            # read the file into a string; explicitly use utf-8 to not depend on the system locale
            diff, read_problems = read_file_as_utf8(self.full_file_path())
            bare_patch = PatchInPatchFile(self, counter, diff, None, None, None, None)
            bare_patch.problems.append("not_mbox")
            bare_patch.problems.extend(read_problems)
            log.warning(f"Patch file {self.full_file_path()} is not properly mbox-formatted.")
            return [bare_patch]

        # loop over the emails in the mbox
        patches: list[PatchInPatchFile] = []
        msg: mailbox.mboxMessage
        for msg in mbox:
            patch: str = msg.get_payload()
            # split the patch itself and the description from the payload
            desc, patch_contents = self.split_description_and_patch(patch)
            if len(patch_contents) == 0:
                log.warning(
                    f"WARNING: patch file {self.full_file_path()} fragment {counter} contains an empty patch")
                continue

            patches.append(PatchInPatchFile(
                self, counter, patch_contents, desc, msg['From'], msg['Subject'], msg['Date']))

            counter += 1

        # sanity check, throw exception if there are no patches
        if len(patches) == 0:
            raise Exception("No valid patches found in file " + self.full_file_path())
        return patches

    @staticmethod
    def split_description_and_patch(full_message_text: str) -> tuple["str | None", str]:
        separator = "\n---\n"
        # check if the separator is in the patch, if so, split
        if separator in full_message_text:
            # find the _last_ occurrence of the separator, and split two chunks from that position
            separator_pos = full_message_text.rfind(separator)
            desc = full_message_text[:separator_pos]
            patch = full_message_text[separator_pos + len(separator):]
            return desc, patch
        else:  # no separator, so no description, patch is the full message
            desc = None
            patch = full_message_text
            return desc, patch

    def rewrite_patch_file(self, patches: list["PatchInPatchFile"]):
        # Produce a mailbox file from the patches.
        # The patches are assumed to be in the same order as they were in the original file.
        # The original file is overwritten.
        output_file = self.full_file_path()
        log.info(f"Rewriting {output_file} with new patches...")
        with open(output_file, "w") as f:
            for patch in patches:
                log.info(f"Writing patch {patch.counter} to {output_file}...")
                f.write(patch.rewritten_patch)
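
A tiny sketch of the split rule in split_description_and_patch above, with made-up message text: everything before the last "\n---\n" is the description, the rest is the diff.

# Hedged sketch; the message text is made up.
full = "Fix a thing\n\nLonger description.\n---\ndiff --git a/f b/f\n..."
separator = "\n---\n"
pos = full.rfind(separator)  # the _last_ separator wins
desc, patch = full[:pos], full[pos + len(separator):]
assert desc.startswith("Fix a thing")
assert patch.startswith("diff --git")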


# Placeholder for future manual work
def shorten_patched_file_name_for_stats(path):
    return os.path.basename(path)


class PatchInPatchFile:

    def __init__(self, parent: PatchFileInDir, counter: int, diff: str, desc, from_hdr, sbj_hdr, date_hdr):
        self.problems: list[str] = []
        self.applied_ok: bool = False
        self.rewritten_patch: str | None = None
        self.git_commit_hash: str | None = None

        self.parent: PatchFileInDir = parent
        self.counter: int = counter
        self.diff: str = diff

        self.failed_to_parse: bool = False

        # Basic parsing of properly mbox-formatted patches
        self.desc: str = downgrade_to_ascii(desc) if desc is not None else None
        self.from_name, self.from_email = self.parse_from_name_email(from_hdr) if from_hdr is not None else (
            None, None)
        self.subject: str = downgrade_to_ascii(fix_patch_subject(sbj_hdr)) if sbj_hdr is not None else None
        self.date = email.utils.parsedate_to_datetime(date_hdr) if date_hdr is not None else None

        self.patched_file_stats_dict: dict = {}
        self.total_additions: int = 0
        self.total_deletions: int = 0
        self.files_modified: int = 0
        self.files_added: int = 0
        self.files_renamed: int = 0
        self.files_removed: int = 0
        self.created_file_names = []
        self.all_file_names_touched = []

    def parse_from_name_email(self, from_str: str) -> tuple["str | None", "str | None"]:
        m = re.match(r'(?P<name>.*)\s*<\s*(?P<email>.*)\s*>', from_str)
        if m is None:
            self.problems.append("invalid_author")
            log.warning(
                f"Failed to parse name and email from: '{from_str}' while parsing patch {self.counter} in file {self.parent.full_file_path()}")
            return downgrade_to_ascii(from_str), "unknown-email@domain.tld"
        else:
            # Return the name and email
            return downgrade_to_ascii(m.group("name")), m.group("email")

    def one_line_patch_stats(self) -> str:
        files_desc = ", ".join(self.patched_file_stats_dict)
        return f"{self.text_diffstats()} {{{files_desc}}}"

    def text_diffstats(self) -> str:
        operations: list[str] = []
        operations.append(f"{self.files_modified}M") if self.files_modified > 0 else None
        operations.append(f"{self.files_added}A") if self.files_added > 0 else None
        operations.append(f"{self.files_removed}D") if self.files_removed > 0 else None
        operations.append(f"{self.files_renamed}R") if self.files_renamed > 0 else None
        return f"(+{self.total_additions}/-{self.total_deletions})[{', '.join(operations)}]"

    def parse_patch(self):
        # parse the patch, using the unidiff package
        try:
            patch = PatchSet(self.diff, encoding=None)
        except Exception as e:
            self.problems.append("invalid_diff")
            self.failed_to_parse = True
            log.error(
                f"Failed to parse unidiff for file {self.parent.full_file_path()}(:{self.counter}): {str(e).strip()}")
            return  # no point in continuing; the patch is invalid; might be recovered during apply

        self.total_additions = 0
        self.total_deletions = 0
        self.files_renamed = 0
        self.files_modified = len(patch.modified_files)
        self.files_added = len(patch.added_files)
        self.files_removed = len(patch.removed_files)
        self.created_file_names = [f.path for f in patch.added_files]
        self.all_file_names_touched = \
            [f.path for f in patch.added_files] + \
            [f.path for f in patch.modified_files] + \
            [f.path for f in patch.removed_files]
        self.patched_file_stats_dict = {}
        for f in patch:
            if not f.is_binary_file:
                self.total_additions += f.added
                self.total_deletions += f.removed
                self.patched_file_stats_dict[shorten_patched_file_name_for_stats(f.path)] = {
                    "abs_changed_lines": f.added + f.removed}
            self.files_renamed = self.files_renamed + 1 if f.is_rename else self.files_renamed
        # sort the self.patched_file_stats_dict by the abs_changed_lines, descending
        self.patched_file_stats_dict = dict(sorted(
            self.patched_file_stats_dict.items(),
            key=lambda item: item[1]["abs_changed_lines"],
            reverse=True))
        # sanity check; if all the values are zeroes, throw an exception
        if self.total_additions == 0 and self.total_deletions == 0 and \
                self.files_modified == 0 and self.files_added == 0 and self.files_removed == 0:
            self.problems.append("diff_has_no_changes")
            raise Exception(
                f"Patch file {self.parent.full_file_path()} has no changes. diff is {len(self.diff)} bytes: '{self.diff}'")

    def __str__(self) -> str:
        desc: str = \
            f"<{self.parent.file_base_name}(:{self.counter}):" + \
            f"{self.one_line_patch_stats()}: {self.from_email}: '{self.subject}' >"
        return desc

    def apply_patch(self, working_dir: str, options: dict[str, bool]):
        # Sanity check: if patch would create files, make sure they don't exist to begin with.
        # This avoids patches being able to overwrite the mainline.
        for would_be_created_file in self.created_file_names:
            full_path = os.path.join(working_dir, would_be_created_file)
            if os.path.exists(full_path):
                self.problems.append("overwrites")
                log.warning(
                    f"File {would_be_created_file} already exists, but patch {self} would re-create it.")
                if options["allow_recreate_existing_files"]:
                    log.warning(f"Tolerating recreation in {self} as instructed.")
                    os.remove(full_path)

        # Use the 'patch' utility to apply the patch.
        proc = subprocess.run(
            ["patch", "--batch", "-p1", "-N", "--reject-file=patching.rejects"],
            cwd=working_dir,
            input=self.diff.encode("utf-8"),
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            check=False)
        # read the output of the patch command
        stdout_output = proc.stdout.decode("utf-8").strip()
        stderr_output = proc.stderr.decode("utf-8").strip()
        if stdout_output != "":
            log.debug(f"patch stdout: {stdout_output}")
        if stderr_output != "":
            log.warning(f"patch stderr: {stderr_output}")

        # Look at stdout: offset/fuzz markers mean the patch applied, but needs rebasing.
        if " (offset" in stdout_output or " with fuzz " in stdout_output:
            log.warning(f"Patch {self} needs rebase: offset/fuzz used during apply.")
            self.problems.append("needs_rebase")

        # Check if the exit code is not zero and bomb
        if proc.returncode != 0:
            # prefix each line of the stderr_output with "STDERR: ", then join again
            stderr_output = "\n".join([f"STDERR: {line}" for line in stderr_output.splitlines()])
            stderr_output = "\n" + stderr_output if stderr_output != "" else stderr_output
            stdout_output = "\n".join([f"STDOUT: {line}" for line in stdout_output.splitlines()])
            stdout_output = "\n" + stdout_output if stdout_output != "" else stdout_output
            self.problems.append("failed_to_apply")
            raise Exception(
                f"Failed to apply patch {self.parent.full_file_path()}:{stderr_output}{stdout_output}")

    def commit_changes_to_git(self, repo: git.Repo, add_rebase_tags: bool):
        log.info(f"Committing changes to git: {self.parent.file_base_name}")
        # add all the files that were touched by the patch
        # if the patch failed to parse, this will be an empty list, so we'll just add all changes.
        add_all_changes_in_git = False
        if not self.failed_to_parse:
            # sanity check.
            if len(self.all_file_names_touched) == 0:
                raise Exception(
                    f"Patch {self} has no files touched, but is not marked as failed to parse.")
            # add all files to git staging area
            for file_name in self.all_file_names_touched:
                log.info(f"Adding file {file_name} to git")
                full_path = os.path.join(repo.working_tree_dir, file_name)
                if not os.path.exists(full_path):
                    self.problems.append("wrong_strip_level")
                    log.error(f"File '{full_path}' does not exist, but is touched by {self}")
                    add_all_changes_in_git = True
                else:
                    repo.git.add(file_name)

        if self.failed_to_parse or add_all_changes_in_git:
            log.warning(f"Rescue: adding all changed files to git for {self}")
            repo.git.add(repo.working_tree_dir)

        # commit the changes, using GitPython; show the produced commit hash
        commit_message = f"{self.parent.file_base_name}(:{self.counter})\n\nOriginal-Subject: {self.subject}\n{self.desc}"
        if add_rebase_tags:
            commit_message = f"{commit_message}\n{self.patch_rebase_tags_desc()}"
        author: git.Actor = git.Actor(self.from_name, self.from_email)
        committer: git.Actor = git.Actor("Armbian AutoPatcher", "patching@armbian.com")
        commit = repo.index.commit(
            message=commit_message,
            author=author,
            committer=committer,
            author_date=self.date,
            commit_date=self.date,
            skip_hooks=True
        )
        log.info(f"Committed changes to git: {commit.hexsha}")
        # Make sure the commit is not empty
        if commit.stats.total["files"] == 0:
            self.problems.append("empty_commit")
            raise Exception(
                f"Commit {commit.hexsha} ended up empty; source patch is {self} at {self.parent.full_file_path()}(:{self.counter})")
        return {"commit_hash": commit.hexsha, "patch": self}

    def patch_rebase_tags_desc(self):
        tags = {}
        tags["Patch-File"] = self.parent.file_base_name
        tags["Patch-File-Counter"] = self.counter
        tags["Patch-Rel-Directory"] = self.parent.patch_dir.rel_dir
        tags["Patch-Type"] = self.parent.patch_dir.patch_root_dir.patch_type
        tags["Patch-Root-Type"] = self.parent.patch_dir.root_type
        tags["Patch-Sub-Type"] = self.parent.patch_dir.sub_type
        if self.subject is not None:
            tags["Original-Subject"] = self.subject
        ret = ""
        for k, v in tags.items():
            ret += f"X-Armbian: {k}: {v}\n"
        return ret
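
These X-Armbian trailers are the glue for the round trip: git-to-patches.py (further down) parses them back out of commit messages to decide where each re-exported patch belongs. A made-up example of the emitted lines:

# Hedged sketch: the kind of trailer patch_rebase_tags_desc produces (values made up).
example = (
    "X-Armbian: Patch-File: some-example-fix\n"
    "X-Armbian: Patch-File-Counter: 1\n"
    "X-Armbian: Patch-Rel-Directory: patch/u-boot/u-boot-rockchip64\n"
)
# Every line is "X-Armbian: <Tag>: <value>"; splitting once on ":" after the
# prefix recovers the tag/value pairs.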

    def markdown_applied(self):
        if self.applied_ok:
            return "✅"
        return "❌"

    def markdown_problems(self):
        if len(self.problems) == 0:
            return "✅"
        ret = []
        for problem in self.problems:
            if problem in ["not_mbox", "needs_rebase"]:
                # warning emoji
                ret.append(f"⚠️{problem}")  # normal
            else:
                ret.append(f"**❌{problem}**")  # bold

        return " ".join(ret)

    def markdown_diffstat(self):
        return f"`{self.text_diffstats()}`"

    def markdown_files(self):
        ret = []
        max_files_shown = 5
        # Use the keys of patched_file_stats_dict, which is already sorted, most-changed files first
        file_names = list(self.patched_file_stats_dict.keys())
        # if no files were touched, just return an interrobang
        if len(file_names) == 0:
            return "`?`"
        for file_name in file_names[:max_files_shown]:
            ret.append(f"`{file_name}`")
        if len(file_names) > max_files_shown:
            ret.append(f"_and {len(file_names) - max_files_shown} more_")
        return ", ".join(ret)

    def markdown_author(self):
        if self.from_name:
            return f"{self.from_name}"
        return "`?`"

    def markdown_subject(self):
        if self.subject:
            return f"_{self.subject}_"
        return "`?`"


def fix_patch_subject(subject):
    # replace newlines with one space
    subject = re.sub(r"\s+", " ", subject.strip())
    # replace every non-printable character with a space
    subject = re.sub(r"[^\x20-\x7e]", " ", subject)
    # replace two consecutive spaces with one
    subject = re.sub(r" {2}", " ", subject).strip()
    # remove tags from the beginning of the subject
    tags = ['PATCH']
    for tag in tags:
        # subject might begin with "[tag xxxxx]"; remove it
        if subject.startswith(f"[{tag}"):
            subject = subject[subject.find("]") + 1:].strip()
    prefixes = ['FROMLIST(v1): ']
    for prefix in prefixes:
        if subject.startswith(prefix):
            subject = subject[len(prefix):].strip()
    return subject
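
A traced example of fix_patch_subject on a made-up subject line:

# Hedged sketch; the subject is made up.
subject = "[PATCH v2 1/3]  Add\tsome  feature"
# whitespace is collapsed, non-printables become spaces, and the leading
# "[PATCH...]" tag is stripped up to the first "]":
# fix_patch_subject(subject) == "Add some feature"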


# This is definitely not the right way to do this, but it works for now.
def prepare_clean_git_tree_for_patching(repo: git.Repo, revision_sha: str, branch_name: str):
    # Let's find the Commit object for the revision_sha
    log.info("Resetting git tree to revision '%s'", revision_sha)
    commit = repo.commit(revision_sha)
    # Let's check out, detached HEAD, at that Commit
    repo.head.reference = commit
    repo.head.reset(index=True, working_tree=True)
    # Let's create a new branch, and check it out, discarding any existing branch
    log.info("Creating branch '%s'", branch_name)
    repo.create_head(branch_name, revision_sha, force=True)
    repo.head.reference = repo.heads[branch_name]
    repo.head.reset(index=True, working_tree=True)
    # Let's remove all the untracked, but not ignored, files from the working copy
    for file in repo.untracked_files:
        full_name = os.path.join(repo.working_tree_dir, file)
        log.info(f"Removing untracked file '{file}'")
        os.remove(full_name)


def export_commit_as_patch(repo: git.Repo, commit: str):
    # Export the commit as a patch
    proc = subprocess.run([
        "git", "format-patch",
        "--unified=3",  # force 3 lines of diff context
        "--keep-subject",  # do not add a prefix to the subject "[PATCH] "
        # "--add-header=Organization: Armbian",  # add a header to the patch (ugly, changes the header)
        "--no-encode-email-headers",  # do not encode email headers
        # "--signature=66666"  # add a signature; this does not work and causes patch to not be emitted.
        '--signature', "Armbian",
        '--stat=120',  # 'wider' stat output; default is 80
        '--stat-graph-width=10',  # shorten the diffstat graph part, it's too long
        "-1", "--stdout", commit
    ],
        cwd=repo.working_tree_dir,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        check=False)
    # read the output of the patch command
    stdout_output = proc.stdout.decode("utf-8")
    stderr_output = proc.stderr.decode("utf-8")
    # if stdout_output != "":
    #     print(f"git format-patch stdout: \n{stdout_output}", file=sys.stderr)
    # if stderr_output != "":
    #     print(f"git format-patch stderr: {stderr_output}", file=sys.stderr)
    # Check if the exit code is not zero and bomb
    if proc.returncode != 0:
        raise Exception(f"Failed to export commit {commit} to patch: {stderr_output}")
    if stdout_output == "":
        raise Exception(f"Failed to export commit {commit} to patch: no output")
    find = f"From {commit} Mon Sep 17 00:00:00 2001\n"
    replace = "From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001\n"
    fixed_patch_zeros = stdout_output.replace(find, replace)
    # Return the patch
    return fixed_patch_zeros
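
The hash-zeroing at the end presumably keeps re-exported patch files stable: the first mbox line would otherwise embed the freshly-created commit's sha and change on every run. A toy illustration with a made-up sha:

# Hedged sketch of the zeroing step; the sha is made up.
sha = "deadbeefdeadbeefdeadbeefdeadbeefdeadbeef"
header = f"From {sha} Mon Sep 17 00:00:00 2001\n"
zeroed = header.replace(sha, "0" * 40)
assert zeroed.startswith("From " + "0" * 40 + " Mon Sep 17")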


# Hack
def downgrade_to_ascii(utf8: str) -> str:
    return unidecode(utf8)


# Try hard to read a possibly invalid utf-8 file
def read_file_as_utf8(file_name: str) -> tuple[str, list[str]]:
    with open(file_name, "rb") as f:
        content = f.read()  # Read the file as bytes
        try:
            return content.decode("utf-8"), []  # no problems if this worked
        except UnicodeDecodeError:
            # If decoding failed, try to decode as iso-8859-1
            return content.decode("iso-8859-1"), ["invalid_utf8"]  # utf-8 problems
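
Why iso-8859-1 as the fallback: in that encoding every byte maps to a code point, so the second decode cannot fail; invalid utf-8 is recorded as a problem instead of crashing. A quick sketch:

# Hedged sketch of the fallback; the byte string is made up.
bad = b"caf\xe9"  # 0xe9 is 'é' in iso-8859-1, but invalid utf-8 here
try:
    text, problems = bad.decode("utf-8"), []
except UnicodeDecodeError:
    text, problems = bad.decode("iso-8859-1"), ["invalid_utf8"]
assert (text, problems) == ("café", ["invalid_utf8"])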


# Extremely Armbian-specific.
def perform_git_archeology(
        base_armbian_src_dir: str, armbian_git_repo: git.Repo, patch: PatchInPatchFile,
        bad_archeology_hexshas: list[str], fast: bool):
    log.info(f"Trying to recover description for {patch.parent.file_name}:{patch.counter}")
    patch_file_name = patch.parent.file_name

    patch_file_paths: list[str] = []
    if fast:
        patch_file_paths = [patch.parent.full_file_path()]
    else:
        # Find all the files in the repo with the same name as the patch file.
        # Use the UNIX find command to find all the files with the same name as the patch file.
        proc = subprocess.run(
            [
                "find", base_armbian_src_dir,
                "-name", patch_file_name,
                "-type", "f"
            ],
            cwd=base_armbian_src_dir, stdout=subprocess.PIPE, stderr=subprocess.PIPE, check=True)
        patch_file_paths = proc.stdout.decode("utf-8").splitlines()
    log.info(f"Found {len(patch_file_paths)} files with name {patch_file_name}")
    all_commits: list = []
    for found_file in patch_file_paths:
        relative_file_path = os.path.relpath(found_file, base_armbian_src_dir)
        hexshas = armbian_git_repo.git.log('--pretty=%H', '--follow', '--', relative_file_path) \
            .split('\n')
        log.info(f"- Trying to recover description for {relative_file_path} from {len(hexshas)} commits")

        # filter out hexshas that are in the known-bad archeology list
        hexshas = [hexsha for hexsha in hexshas if hexsha not in bad_archeology_hexshas]

        commits = [armbian_git_repo.rev_parse(c) for c in hexshas]
        all_commits.extend(commits)

    unique_commits: list[git.Commit] = []
    for commit in all_commits:
        if commit not in unique_commits:
            unique_commits.append(commit)

    unique_commits.sort(key=lambda c: c.committed_datetime)

    main_suspect: git.Commit = unique_commits[0]
    log.info(f"- Main suspect: {main_suspect}: {main_suspect.message.rstrip()} Author: {main_suspect.author}")

    # From the main_suspect, set the subject and the author, and the dates.
    main_suspect_msg_lines = main_suspect.message.splitlines()
    # strip each line
    main_suspect_msg_lines = [line.strip() for line in main_suspect_msg_lines]
    # remove empty lines
    main_suspect_msg_lines = [line for line in main_suspect_msg_lines if line != ""]
    main_suspect_subject = main_suspect_msg_lines[0].strip()
    # remove the first line, which is the subject
    suspect_desc_lines = main_suspect_msg_lines[1:]

    # Now, create a list for all other non-main suspects.
    other_suspects_desc: list[str] = []
    other_suspects_desc.extend(
        [f"> recovered message: > {suspect_desc_line}" for suspect_desc_line in suspect_desc_lines])
    other_suspects_desc.append("")  # blank separator line
    for commit in unique_commits:
        subject = commit.message.splitlines()[0].strip()
        rfc822_date = commit.committed_datetime.strftime("%a, %d %b %Y %H:%M:%S %z")
        other_suspects_desc.extend([
            f"- Revision {commit.hexsha}: https://github.com/armbian/build/commit/{commit.hexsha}",
            f"  Date: {rfc822_date}",
            f"  From: {commit.author.name} <{commit.author.email}>",
            f"  Subject: {subject}",
            ""
        ])

    patch.desc = downgrade_to_ascii("\n".join([f"> X-Git-Archeology: {line}" for line in other_suspects_desc]))

    if patch.subject is None:
        patch.subject = downgrade_to_ascii("[ARCHEOLOGY] " + main_suspect_subject)
    if patch.date is None:
        patch.date = main_suspect.committed_datetime
    if patch.from_name is None or patch.from_email is None:
        patch.from_name, patch.from_email = downgrade_to_ascii(
            main_suspect.author.name), main_suspect.author.email
123
lib/tools/git-to-patches.py
Normal file
@@ -0,0 +1,123 @@
#! /bin/env python3
import os.path

# Let's use GitPython to query and manipulate the git repo
from git import Repo, GitCmdObjectDB

import common.armbian_utils as armbian_utils
import common.patching_utils as patching_utils

# Show the environment variables we've been called with
armbian_utils.show_incoming_environment()

# Parse env vars.
SRC = armbian_utils.get_from_env_or_bomb("SRC")
GIT_WORK_DIR = armbian_utils.get_from_env_or_bomb("GIT_WORK_DIR")
GIT_BRANCH = armbian_utils.get_from_env_or_bomb("GIT_BRANCH")
GIT_TARGET_REPLACE = armbian_utils.get_from_env("GIT_TARGET_REPLACE")
GIT_TARGET_SEARCH = armbian_utils.get_from_env("GIT_TARGET_SEARCH")

git_repo = Repo(GIT_WORK_DIR, odbt=GitCmdObjectDB)


BASE_GIT_REVISION = armbian_utils.get_from_env("BASE_GIT_REVISION")
BASE_GIT_TAG = armbian_utils.get_from_env("BASE_GIT_TAG")
if BASE_GIT_REVISION is None:
    if BASE_GIT_TAG is None:
        raise Exception("BASE_GIT_REVISION or BASE_GIT_TAG must be set")
    else:
        BASE_GIT_REVISION = git_repo.tags[BASE_GIT_TAG].commit.hexsha
        print(f"Found BASE_GIT_REVISION={BASE_GIT_REVISION} for BASE_GIT_TAG={BASE_GIT_TAG}")

# Using GitPython, get the list of commits between the HEAD of the branch and the base revision
# (which is either a tag or a commit)
git_commits = list(git_repo.iter_commits(f"{BASE_GIT_REVISION}..{GIT_BRANCH}"))


class ParsedPatch:
    def __init__(self, original_patch: str, sha1, title):
        self.sha1: str = sha1
        self.title: str = title
        self.original_patch: str = original_patch
        self.patch_diff: str | None = None
        self.original_header: str | None = None
        self.final_desc: str | None = None
        self.final_patch: str | None = None
        self.tags: dict[str, str] | None = None
        self.target_dir_fn: str | None = None
        self.target_dir: str | None = None
        self.target_filename: str | None = None
        self.target_counter: int | None = None

    def parse(self):
        # print(f"Patch: {patch}")
        self.original_header, self.patch_diff = patching_utils.PatchFileInDir.split_description_and_patch(
            self.original_patch)
        self.final_desc, self.tags = self.remove_tags_from_description(self.original_header)
        self.final_patch = self.final_desc + "\n---\n" + self.patch_diff
        # print(f"Description: ==={desc}===")
        # print(f"Diff: ==={diff}===")
        # print(f"Tags: {self.tags}")
        self.target_dir = self.tags.get("Patch-Rel-Directory", None)
        self.target_filename = self.tags.get("Patch-File", None)
        self.target_counter = int(self.tags.get("Patch-File-Counter", "0"))

    def remove_tags_from_description(self, desc: str) -> (str, dict[str, str]):
        tag_prefix = "X-Armbian: "
        ret_desc = []
        ret_tags = {}
        lines: list[str] = desc.splitlines()
        for line in lines:
            if line.startswith(tag_prefix):
                # remove the prefix
                line = line[len(tag_prefix):]
                tag, value = line.split(":", 1)
                ret_tags[tag.strip()] = value.strip()
            else:
                ret_desc.append(line)
        return "\n".join(ret_desc), ret_tags

    def prepare_target_dir_fn(self, search: "str | None", replace: "str | None"):
        if search is not None and replace is not None:
            self.target_dir = self.target_dir.replace(search, replace)
        self.target_dir_fn = self.target_dir + "/" + self.target_filename


parsed_patches: list[ParsedPatch] = []

for commit in git_commits:
    patch = patching_utils.export_commit_as_patch(git_repo, commit.hexsha)
    parsed = ParsedPatch(patch, commit.hexsha, commit.message.splitlines()[0])
    parsed.parse()
    parsed.prepare_target_dir_fn(GIT_TARGET_SEARCH, GIT_TARGET_REPLACE)
    parsed_patches.append(parsed)

# Now we have a list of parsed patches, each with its target dir, filename and counter.
for patch in parsed_patches:
    print(f"- Patch: target_dir_fn: {patch.target_dir_fn} counter: {patch.target_counter}")

# Now we need to sort the patches by target_dir_fn and counter.
# We'll use a dict of lists, where the key is the target_dir_fn and the value is a list of patches
# with that target_dir_fn.
patches_by_target_dir_fn: dict[str, list[ParsedPatch]] = {}
for patch in parsed_patches:
    if patch.target_dir_fn not in patches_by_target_dir_fn:
        patches_by_target_dir_fn[patch.target_dir_fn] = []
    patches_by_target_dir_fn[patch.target_dir_fn].append(patch)

# sort the patches by counter
for patches in patches_by_target_dir_fn.values():
    patches.sort(key=lambda p: p.target_counter)

# Write the grouped patches to their target files, replacing existing contents.
for target_dir_fn, patches in patches_by_target_dir_fn.items():
    print(f"Target dir/fn: {target_dir_fn}")
    full_target_file = os.path.join(SRC, f"{target_dir_fn}.patch")
    print(f"Writing to {full_target_file}")
    full_target_dir = os.path.dirname(full_target_file)
    if not os.path.exists(full_target_dir):
        os.makedirs(full_target_dir)
    with open(full_target_file, "w") as f:
        for patch in patches:
            print(f" - Patch: {patch.target_counter}: '{patch.title}'")
            f.write(patch.final_patch)
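
The group-then-sort step above, in isolation with made-up data: one list per target file, then each list restored to its original in-file order via the counter.

# Hedged sketch of the grouping idiom; the tuples are made up.
items = [("a.patch", 2), ("b.patch", 1), ("a.patch", 1)]
groups: dict[str, list] = {}
for name, counter in items:
    groups.setdefault(name, []).append((name, counter))
for group in groups.values():
    group.sort(key=lambda it: it[1])
assert groups["a.patch"] == [("a.patch", 1), ("a.patch", 2)]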
224
lib/tools/patching.py
Executable file
@@ -0,0 +1,224 @@
|
||||
#! /bin/env python3
|
||||
import logging
|
||||
|
||||
# Let's use GitPython to query and manipulate the git repo
|
||||
from git import Repo, GitCmdObjectDB, InvalidGitRepositoryError
|
||||
|
||||
import common.armbian_utils as armbian_utils
|
||||
import common.patching_utils as patching_utils
|
||||
from common.md_asset_log import SummarizedMarkdownWriter
|
||||
|
||||
# Prepare logging
|
||||
armbian_utils.setup_logging()
|
||||
log: logging.Logger = logging.getLogger("patching")
|
||||
|
||||
# Show the environment variables we've been called with
|
||||
armbian_utils.show_incoming_environment()
|
||||
|
||||
# Let's start by reading environment variables.
|
||||
# Those are always needed, and we should bomb if they're not set.
|
||||
SRC = armbian_utils.get_from_env_or_bomb("SRC")
|
||||
PATCH_TYPE = armbian_utils.get_from_env_or_bomb("PATCH_TYPE")
|
||||
PATCH_DIRS_TO_APPLY = armbian_utils.parse_env_for_tokens("PATCH_DIRS_TO_APPLY")
|
||||
APPLY_PATCHES = armbian_utils.get_from_env("APPLY_PATCHES")
|
||||
PATCHES_TO_GIT = armbian_utils.get_from_env("PATCHES_TO_GIT")
|
||||
REWRITE_PATCHES = armbian_utils.get_from_env("REWRITE_PATCHES")
|
||||
ALLOW_RECREATE_EXISTING_FILES = armbian_utils.get_from_env("ALLOW_RECREATE_EXISTING_FILES")
|
||||
GIT_ARCHEOLOGY = armbian_utils.get_from_env("GIT_ARCHEOLOGY")
|
||||
FAST_ARCHEOLOGY = armbian_utils.get_from_env("FAST_ARCHEOLOGY")
|
||||
apply_patches = APPLY_PATCHES == "yes"
|
||||
apply_patches_to_git = PATCHES_TO_GIT == "yes"
|
||||
git_archeology = GIT_ARCHEOLOGY == "yes"
|
||||
fast_archeology = FAST_ARCHEOLOGY == "yes"
|
||||
rewrite_patches_in_place = REWRITE_PATCHES == "yes"
|
||||
apply_options = {"allow_recreate_existing_files": (ALLOW_RECREATE_EXISTING_FILES == "yes")}
|
||||
|
||||
# Those are optional.
|
||||
GIT_WORK_DIR = armbian_utils.get_from_env("GIT_WORK_DIR")
|
||||
BOARD = armbian_utils.get_from_env("BOARD")
|
||||
TARGET = armbian_utils.get_from_env("TARGET")
|
||||
USERPATCHES_PATH = armbian_utils.get_from_env("USERPATCHES_PATH")
|
||||
|
||||
# Some path possibilities
|
||||
CONST_PATCH_ROOT_DIRS = []
|
||||
for patch_dir_to_apply in PATCH_DIRS_TO_APPLY:
|
||||
if USERPATCHES_PATH is not None:
|
||||
CONST_PATCH_ROOT_DIRS.append(
|
||||
patching_utils.PatchRootDir(f"{USERPATCHES_PATH}/{PATCH_TYPE}/{patch_dir_to_apply}", "user",
|
||||
PATCH_TYPE, USERPATCHES_PATH))
|
||||
CONST_PATCH_ROOT_DIRS.append(
|
||||
patching_utils.PatchRootDir(f"{SRC}/patch/{PATCH_TYPE}/{patch_dir_to_apply}", "core", PATCH_TYPE, SRC))
|
||||
|
||||
# Some sub-path possibilities:
|
||||
CONST_PATCH_SUB_DIRS = []
|
||||
if TARGET is not None:
|
||||
CONST_PATCH_SUB_DIRS.append(patching_utils.PatchSubDir(f"target_{TARGET}", "target"))
|
||||
if BOARD is not None:
|
||||
CONST_PATCH_SUB_DIRS.append(patching_utils.PatchSubDir(f"board_{BOARD}", "board"))
|
||||
CONST_PATCH_SUB_DIRS.append(patching_utils.PatchSubDir("", "common"))
|
||||
|
||||
# Prepare the full list of patch directories to apply
|
||||
ALL_DIRS = []
|
||||
for patch_root_dir in CONST_PATCH_ROOT_DIRS:
|
||||
for patch_sub_dir in CONST_PATCH_SUB_DIRS:
|
||||
ALL_DIRS.append(patching_utils.PatchDir(patch_root_dir, patch_sub_dir, SRC))
|
||||
|
||||
# Now, loop over ALL_DIRS, and find the patch files in each directory
|
||||
for one_dir in ALL_DIRS:
|
||||
one_dir.find_patch_files()
|
||||
|
||||
# Gather all the PatchFileInDir objects into a single list
|
||||
ALL_DIR_PATCH_FILES: list[patching_utils.PatchFileInDir] = []
|
||||
for one_dir in ALL_DIRS:
|
||||
for one_patch_file in one_dir.patch_files:
|
||||
ALL_DIR_PATCH_FILES.append(one_patch_file)
|
||||
|
||||
ALL_DIR_PATCH_FILES_BY_NAME: dict[(str, patching_utils.PatchFileInDir)] = {}
|
||||
for one_patch_file in ALL_DIR_PATCH_FILES:
|
||||
# Hack: do a single one: DO NOT ENABLE THIS
|
||||
# if one_patch_file.file_name == "board-pbp-add-dp-alt-mode.patch":
|
||||
ALL_DIR_PATCH_FILES_BY_NAME[one_patch_file.file_name] = one_patch_file
|
||||
|
||||
# sort the dict by the key (file_name, sans dir...)
|
||||
ALL_DIR_PATCH_FILES_BY_NAME = dict(sorted(ALL_DIR_PATCH_FILES_BY_NAME.items()))
|
||||
|
||||
# Now, actually read the patch files.
|
||||
# Patch files might be in mailbox format, and in that case contain more than one "patch".
|
||||
# It might also be just a unified diff, with no mailbox headers.
|
||||
# We need to read the file, and see if it's a mailbox file; if so, split into multiple patches.
|
||||
# If not, just use the whole file as a single patch.
|
||||
# We'll store the patches in a list of Patch objects.
|
||||
VALID_PATCHES: list[patching_utils.PatchInPatchFile] = []
|
||||
for key in ALL_DIR_PATCH_FILES_BY_NAME:
|
||||
patch_file_in_dir: patching_utils.PatchFileInDir = ALL_DIR_PATCH_FILES_BY_NAME[key]
|
||||
try:
|
||||
patches_from_file = patch_file_in_dir.split_patches_from_file()
|
||||
VALID_PATCHES.extend(patches_from_file)
|
||||
except Exception as e:
|
||||
log.critical(
|
||||
f"Failed to read patch file {patch_file_in_dir.file_name}: {e}\n"
|
||||
f"Can't continue; please fix the patch file {patch_file_in_dir.full_file_path()} manually. Sorry."
|
||||
, exc_info=True)
|
||||
exit(1)
|
||||
|
||||
# Now, some patches might not be mbox-formatted, or otherwise invalid. We can try and recover those.
# That is only possible if we're applying patches to git.
# Rebuilding the description is only possible if we have the git repo where the patches themselves reside.
for patch in VALID_PATCHES:
    try:
        patch.parse_patch()  # this handles diff-level parsing; modifies itself; throws exception if invalid
    except Exception as invalid_exception:
        log.critical(f"Failed to parse {patch.parent.full_file_path()}(:{patch.counter}): {invalid_exception}")
        log.critical(
            f"Can't continue; please fix the patch file {patch.parent.full_file_path()} manually;"
            f" check for possible double-mbox encoding. Sorry.")
        exit(2)

log.info("Parsed patches.")

# Now, for patches missing a description, try to recover one from the Armbian repo's own history.
# SRC might not be a git repo (say, when building in Docker), so we need to check first.
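# (Assumption, based on the helper's name and arguments: perform_git_archeology() digs through
# the history of the Armbian repo at SRC for commits that touched the patch file, skipping the
# hexshas listed in bad_archeology_hexshas, and reuses what it finds as the patch description;
# fast_archeology presumably limits how deep it digs.)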
if apply_patches_to_git and git_archeology:
    try:
        armbian_git_repo = Repo(SRC)
    except InvalidGitRepositoryError:
        armbian_git_repo = None
        log.warning("- SRC is not a git repo, so cannot recover descriptions from there.")
    if armbian_git_repo is not None:
        bad_archeology_hexshas = ["something"]

        for patch in VALID_PATCHES:
            if patch.desc is None:
                patching_utils.perform_git_archeology(
                    SRC, armbian_git_repo, patch, bad_archeology_hexshas, fast_archeology)

# Now, we need to apply the patches.
if apply_patches:
    log.info("Cleaning target git directory...")
    git_repo = Repo(GIT_WORK_DIR, odbt=GitCmdObjectDB)
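    # GitCmdObjectDB makes GitPython read objects by shelling out to `git cat-file` instead of
    # using its pure-Python object database; presumably chosen here for performance on huge
    # trees like the kernel.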
    BRANCH_FOR_PATCHES = armbian_utils.get_from_env_or_bomb("BRANCH_FOR_PATCHES")
    BASE_GIT_REVISION = armbian_utils.get_from_env("BASE_GIT_REVISION")
    BASE_GIT_TAG = armbian_utils.get_from_env("BASE_GIT_TAG")
    if BASE_GIT_REVISION is None:
        if BASE_GIT_TAG is None:
            raise Exception("BASE_GIT_REVISION or BASE_GIT_TAG must be set")
        else:
            BASE_GIT_REVISION = git_repo.tags[BASE_GIT_TAG].commit.hexsha
            log.debug(f"Found BASE_GIT_REVISION={BASE_GIT_REVISION} for BASE_GIT_TAG={BASE_GIT_TAG}")

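    # In short: BRANCH_FOR_PATCHES is mandatory, and at least one of BASE_GIT_REVISION (a sha1)
    # or BASE_GIT_TAG (e.g. "v6.1", resolved to a sha1 above; tag name illustrative) must come
    # from the environment; an explicit BASE_GIT_REVISION wins if both are set.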
    patching_utils.prepare_clean_git_tree_for_patching(git_repo, BASE_GIT_REVISION, BRANCH_FOR_PATCHES)

    # Loop over the VALID_PATCHES, and apply them
    log.info(f"- Applying {len(VALID_PATCHES)} patches...")
    for one_patch in VALID_PATCHES:
        log.info(f"Applying patch {one_patch}")
        one_patch.applied_ok = False
        try:
            one_patch.apply_patch(GIT_WORK_DIR, apply_options)
            one_patch.applied_ok = True
        except Exception as e:
            log.error(f"Exception while applying patch {one_patch}: {e}", exc_info=True)

        if one_patch.applied_ok and apply_patches_to_git:
            committed = one_patch.commit_changes_to_git(git_repo, (not rewrite_patches_in_place))
            commit_hash = committed['commit_hash']
            one_patch.git_commit_hash = commit_hash
            if rewrite_patches_in_place:
                rewritten_patch = patching_utils.export_commit_as_patch(git_repo, commit_hash)
                one_patch.rewritten_patch = rewritten_patch

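    # Note that a patch failing to apply does not abort the run: applied_ok stays False, which
    # gates the git commit above, the in-place rewrite below, and the Markdown summary at the end.
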
    if rewrite_patches_in_place:
        # Now we need to write the patches back to files.
        # Loop over the patches and group them by parent; the parent is the PatchFileInDir object.
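        # (A single PatchFileInDir can carry several patches when the file is mbox-formatted;
        # rewrite_patch_file() below writes all of a file's successfully-applied patches back
        # into that one file, presumably using the rewritten_patch re-exported from git above.)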
        patch_files_by_parent: dict[patching_utils.PatchFileInDir, list[patching_utils.PatchInPatchFile]] = {}
        for one_patch in VALID_PATCHES:
            if not one_patch.applied_ok:
                log.warning(f"Skipping patch {one_patch} because it was not applied successfully.")
                continue

            if one_patch.parent not in patch_files_by_parent:
                patch_files_by_parent[one_patch.parent] = []
            patch_files_by_parent[one_patch.parent].append(one_patch)
        parent: patching_utils.PatchFileInDir
        for parent in patch_files_by_parent:
            patches = patch_files_by_parent[parent]
            parent.rewrite_patch_file(patches)
        UNAPPLIED_PATCHES = [one_patch for one_patch in VALID_PATCHES if not one_patch.applied_ok]
        for failed_patch in UNAPPLIED_PATCHES:
            log.info(
                f"Consider removing {failed_patch.parent.full_file_path()}(:{failed_patch.counter}); "
                f"it was not applied successfully.")

# Create markdown about the patches
with SummarizedMarkdownWriter(f"patching_{PATCH_TYPE}.md", f"{PATCH_TYPE} patching") as md:
    patch_count = 0
    patches_applied = 0
    patches_with_problems = 0
    problem_by_type: dict[str, int] = {}
    if len(VALID_PATCHES) == 0:
        md.write("- No patches found.\n")
    else:
        # Prepare the Markdown table header
        md.write(
            "| Applied? | Problems | Patch | Diffstat Summary | Files patched | Author | Subject | Link to patch |\n")
        # Markdown table hyphen line and column alignment
        md.write("| :---: | :---: | :--- | :--- | :--- | :--- | :--- | :--- |\n")
        for one_patch in VALID_PATCHES:
            # Markdown table row
            md.write(
                f"| {one_patch.markdown_applied()} | {one_patch.markdown_problems()} | `{one_patch.parent.file_base_name}` | {one_patch.markdown_diffstat()} | {one_patch.markdown_files()} | {one_patch.markdown_author()} | {one_patch.markdown_subject()} | {one_patch.git_commit_hash} |\n")
            patch_count += 1
            if one_patch.applied_ok:
                patches_applied += 1
            if len(one_patch.problems) > 0:
                patches_with_problems += 1
                for problem in one_patch.problems:
                    if problem not in problem_by_type:
                        problem_by_type[problem] = 0
                    problem_by_type[problem] += 1
    md.add_summary(f"{patch_count} total patches")
    md.add_summary(f"{patches_applied} applied")
    md.add_summary(f"{patches_with_problems} with problems")
    for problem in problem_by_type:
        md.add_summary(f"{problem_by_type[problem]} {problem}")
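
# The resulting summary might read something like "42 total patches, 40 applied, 2 with problems,
# 2 wrong_strip_level" (numbers illustrative; the joining format is up to SummarizedMarkdownWriter).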