Mirror of https://github.com/armbian/build (synced 2025-09-24 19:47:06 +07:00)
pipeline: inventory all board vars; add not-eos-with-video; introduce TARGETS_FILTER_INCLUDE
> How to use:
>
> `./compile.sh inventory` - does just the board inventory; look for output in `output/info`
>
> `./compile.sh targets-dashboard` - does inventory, targets compositing, and image info; look for output in `output/info`, and read the instructions the command prints if you want to load the OpenSearch dashboards.
>
> `./compile.sh targets` - does the full targets compositing and artifacts; look for output in `output/info`
>
> If you don't have a `userpatches/targets.yaml`, _one will be provided for you_, defaulting to Jammy minimal CLI
> and Jammy xfce desktop, for all boards in all branches. You can pass filters via `TARGETS_FILTER_INCLUDE=...` to narrow the set.
>
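For orientation, a minimal `userpatches/targets.yaml` could look like this (a sketch following the template format shown in the diffs below; the target group name and the board/branch values are just examples):

```yaml
targets:
  my-cli-images: # arbitrary target group name
    vars:
      BUILD_MINIMAL: "yes" # quoted: we want the string 'yes', not a YAML boolean
      BUILD_DESKTOP: "no"
      RELEASE: jammy
    items: # explicit list; alternatively, use items-from-inventory
      - { BOARD: odroidhc4, BRANCH: current }
      - { BOARD: odroidn2, BRANCH: edge }
```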
- board JSON inventory:
  - more generic regex parsing of variables from board files:
    - all top-level (non-indented) variables are parsed and included in the JSON board inventory
    - this allows us to add new variables to the board files without having to update the parser
    - variables can be bare, `export`, or `declare -g`, but **_must_ be quoted** (single or double) and UPPER_CASE; see the board-file sketch after this list
  - some special treatment for certain variables:
    - `KERNEL_TARGET` is parsed as a _comma-separated_ list of valid BRANCH'es
    - `BOARD_MAINTAINER` is parsed as a _space-separated_ list of valid maintainer GH usernames, emitted as `BOARD_MAINTAINERS: [...]` in the JSON
    - the script complains if `BOARD_MAINTAINER` is missing from core boards; an explicitly empty value is still allowed
    - `HAS_VIDEO_OUTPUT="no"` causes `BOARD_HAS_VIDEO: false` in the JSON (for desktop-only inventorying, see below)
- introduce `not-eos-with-video` in `items-from-inventory` at the targets compositor
  - the same as `not-eos`, but with an added `BOARD_HAS_VIDEO: true` filter, see above
- introduce `TARGETS_FILTER_INCLUDE` for the targets compositor
  - this filters the targets _after_ compositing (but before getting image info), based on the board inventory data
  - it's a comma-separated list of `key:value` pairs, which are OR-ed together
  - a new virtual inventory key `BOARD_SLASH_BRANCH` is added post-compositing, allowing filtering of a specific BOARD/BRANCH combo (e.g. `odroidhc4/edge`)
  - some interesting possible filters:
    - `TARGETS_FILTER_INCLUDE="BOARD:odroidhc4"`: _only_ build a single board, all branches. JIRA [AR-1806]
    - `TARGETS_FILTER_INCLUDE="BOARD_SLASH_BRANCH:odroidhc4/current"`: _only_ build a single board/branch combo
    - `TARGETS_FILTER_INCLUDE="BOARD:odroidhc4,BOARD:odroidn2"`: _only_ build _two_ boards, all branches
    - `TARGETS_FILTER_INCLUDE="BOARD_MAINTAINERS:rpardini"`: build all boards and branches where rpardini is a maintainer
    - `TARGETS_FILTER_INCLUDE="BOARDFAMILY:rockchip64"`: build all boards and branches in the rockchip64 family
  - image-info-only variables like `LINUXFAMILY` are **not** available for filtering at this stage
- rename `config/templates/targets-all-cli.yaml` to `targets-default.yaml`
  - this is used when no `userpatches/targets.yaml` is found
  - the new default includes all boards vs branches for non-EOS boards
  - also desktop builds for all boards that _don't_ have `HAS_VIDEO_OUTPUT='no'`
- introduce simplified `targets-dashboard` CLI:
  - runs only inventory, compositing, and image info, but not artifact reducing etc.
  - ignore desktop builds in the OpenSearch indexer
- update the OpenSearch Dashboards, including new information now available
- invert the logic used for `CLEAN_INFO` and `CLEAN_MATRIX`
  - both default to `yes` now, so new users/CI don't get hit by stale caches
  - repo-pipeline CLI commands usually run on saved/restored `output/info` artifacts, so those set `CLEAN_INFO='no' CLEAN_MATRIX='no'` by default
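To illustrate the parsing rules, here is a hypothetical board file; the values are made up, but each line uses one of the accepted forms:

```bash
# Odroid HC4 quad-core SoC with SATA            <- first line: hardware description, a pound comment
BOARD_NAME="Odroid HC4"                         # bare assignment, double-quoted
export BOARDFAMILY='meson-sm1'                  # export form, single-quoted
declare -g KERNEL_TARGET="legacy,current,edge"  # declare -g form; comma-separated list of BRANCHes
BOARD_MAINTAINER="rpardini"                     # space-separated GH usernames
HAS_VIDEO_OUTPUT="no"                           # yields BOARD_HAS_VIDEO: false in the JSON
```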
@@ -1,18 +0,0 @@
|
|||||||
targets:
|
|
||||||
|
|
||||||
cli-ubuntu:
|
|
||||||
vars:
|
|
||||||
BUILD_MINIMAL: "no" # quoting "no", since we want the string 'no', not the False boolean
|
|
||||||
BUILD_DESKTOP: "no"
|
|
||||||
RELEASE: jammy
|
|
||||||
|
|
||||||
items-from-inventory:
|
|
||||||
all: yes # includes all available BOARD and BRANCH combinations
|
|
||||||
#conf: yes # includes all supported boards
|
|
||||||
#wip: yes # includes all work-in-progress boards
|
|
||||||
#not-eos: yes # not-eos boards, all branches
|
|
||||||
|
|
||||||
# comment items-from-inventory: above, and uncomment items: below if you want to build only a subset of the inventory
|
|
||||||
#items:
|
|
||||||
# - { BOARD: odroidn2, BRANCH: edge }
|
|
||||||
# - { BOARD: odroidhc4, BRANCH: edge }
|
|
||||||
`config/templates/targets-default.yaml` (new file, 26 lines):
```diff
@@ -0,0 +1,26 @@
+#
+# This DEFAULT config generates all artefacts except for EOS targets.
+# It will be used if no other targets file is found in userpatches/targets.yaml (or TARGETS_FILENAME/TARGETS_FILE is passed).
+# It is meant to be used for testing and development.
+# It uses the latest Ubuntu LTS release, xfce desktop, without appgroups and with the config_base desktop config.
+#
+
+targets:
+  all-desktop-DEFAULT:
+    vars:
+      BUILD_MINIMAL: "no"
+      BUILD_DESKTOP: "yes"
+      DESKTOP_ENVIRONMENT: "xfce"
+      DESKTOP_ENVIRONMENT_CONFIG_NAME: "config_base"
+      RELEASE: "jammy"
+    items-from-inventory:
+      not-eos-with-video: yes # not-eos boards, all branches, only those without HAS_VIDEO_OUTPUT="no"
+
+  all-cli-DEFAULT:
+    vars:
+      BUILD_MINIMAL: "yes"
+      BUILD_DESKTOP: "no"
+      RELEASE: "jammy"
+    items-from-inventory:
+      not-eos: yes # not-eos boards, all branches
+
```
In the JSON-info CLI handler (`cli_json_info_run`):

```diff
@@ -29,11 +29,11 @@ function cli_json_info_run() {
     display_alert "Here we go" "generating JSON info :: ${ARMBIAN_COMMAND} " "info"
 
     # Targets inventory. Will do all-by-all if no targets file is provided.
-    declare TARGETS_FILE="${TARGETS_FILE-"${USERPATCHES_PATH}/${TARGETS_FILENAME:-"targets.yaml"}"}" # @TODO: return to targets.yaml one day
+    declare TARGETS_FILE="${TARGETS_FILE-"${USERPATCHES_PATH}/${TARGETS_FILENAME:-"targets.yaml"}"}"
 
     declare BASE_INFO_OUTPUT_DIR="${SRC}/output/info" # Output dir for info
 
-    if [[ "${CLEAN_INFO}" == "yes" ]]; then
+    if [[ "${CLEAN_INFO:-"yes"}" != "no" ]]; then
         display_alert "Cleaning info output dir" "${BASE_INFO_OUTPUT_DIR}" "info"
         rm -rf "${BASE_INFO_OUTPUT_DIR}"
     fi
```
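The new test reads as "clean unless explicitly told not to". A quick illustration of the `${VAR:-"yes"} != "no"` pattern (not from the commit):

```bash
#!/usr/bin/env bash
function should_clean() {
    if [[ "${CLEAN_INFO:-"yes"}" != "no" ]]; then echo "clean"; else echo "keep"; fi
}
unset CLEAN_INFO; should_clean       # clean -- unset now defaults to cleaning
CLEAN_INFO="no"; should_clean        # keep  -- the only way to opt out
CLEAN_INFO="whatever"; should_clean  # clean -- anything but "no" cleans
```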
Further down in `cli_json_info_run`, the default template is switched and the new filter variable is exported for the compositor:

```diff
@@ -134,16 +134,17 @@ function cli_json_info_run() {
 
     # if TARGETS_FILE does not exist, one will be provided for you, from a template.
     if [[ ! -f "${TARGETS_FILE}" ]]; then
-        declare TARGETS_TEMPLATE="${TARGETS_TEMPLATE:-"targets-all-cli.yaml"}"
-        display_alert "No targets file found" "using default targets template ${TARGETS_TEMPLATE}" "info"
+        declare TARGETS_TEMPLATE="${TARGETS_TEMPLATE:-"targets-default.yaml"}"
+        display_alert "No targets file found" "using default targets template ${TARGETS_TEMPLATE}" "warn"
         TARGETS_FILE="${SRC}/config/templates/${TARGETS_TEMPLATE}"
     else
-        display_alert "Using targets file" "${TARGETS_FILE}" "info"
+        display_alert "Using targets file" "${TARGETS_FILE}" "warn"
     fi
 
     if [[ ! -f "${TARGETS_OUTPUT_FILE}" ]]; then
         display_alert "Generating targets inventory" "targets-compositor" "info"
         export TARGETS_BETA="${BETA}" # Read by the Python script, and injected into every target as "BETA=" param.
+        export TARGETS_FILTER_INCLUDE="${TARGETS_FILTER_INCLUDE}" # Read by the Python script; used to "only include" targets that match the given string.
         run_host_command_logged "${PYTHON3_VARS[@]}" "${PYTHON3_INFO[BIN]}" "${INFO_TOOLS_DIR}"/targets-compositor.py "${ALL_BOARDS_ALL_BRANCHES_INVENTORY_FILE}" "not_yet_releases.json" "${TARGETS_FILE}" ">" "${TARGETS_OUTPUT_FILE}"
         unset TARGETS_BETA
     fi
```
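Putting it together, a filtered run could be invoked like this (a hypothetical invocation; `compile.sh` passes `PARAM=value` arguments through as variables, and the compositor reads `TARGETS_FILTER_INCLUDE` from the environment as exported above):

```bash
# composite targets for two boards only, all branches, then inspect output/info
./compile.sh targets-dashboard TARGETS_FILTER_INCLUDE="BOARD:odroidhc4,BOARD:odroidn2"
```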
```diff
@@ -154,16 +155,6 @@ function cli_json_info_run() {
     if [[ ! -f "${IMAGE_INFO_FILE}" ]]; then
         display_alert "Generating image info" "info-gatherer-image" "info"
         run_host_command_logged "${PYTHON3_VARS[@]}" "${PYTHON3_INFO[BIN]}" "${INFO_TOOLS_DIR}"/info-gatherer-image.py "${TARGETS_OUTPUT_FILE}" ">" "${IMAGE_INFO_FILE}"
-        # if stdin is a terminal...
-        if [ -t 0 ]; then
-            display_alert "To load the OpenSearch dashboards:" "
-            pip3 install opensearch-py # install needed lib to talk to OS
-            docker-compose --file tools/dashboards/docker-compose-opensearch.yaml up -d # start up OS in docker-compose
-            python3 lib/tools/index-opensearch.py < output/info/image-info.json # index the JSON into OS
-            # go check out http://localhost:5601
-            docker-compose --file tools/dashboards/docker-compose-opensearch.yaml down # shut down OS when you're done
-            " "info"
-        fi
     fi
 
     # convert image info output to CSV for easy import into Google Sheets etc
```
```diff
@@ -172,6 +163,19 @@ function cli_json_info_run() {
         run_host_command_logged "${PYTHON3_VARS[@]}" "${PYTHON3_INFO[BIN]}" "${INFO_TOOLS_DIR}"/json2csv.py "<" "${IMAGE_INFO_FILE}" ">" ${IMAGE_INFO_CSV_FILE}
     fi
 
+    if [[ "${ARMBIAN_COMMAND}" == "targets-dashboard" ]]; then
+        display_alert "To load the OpenSearch dashboards:" "
+        pip3 install opensearch-py # install needed lib to talk to OpenSearch
+        sysctl -w vm.max_map_count=262144 # raise limited needed by OpenSearch
+        docker-compose --file tools/dashboards/docker-compose-opensearch.yaml up -d # start up OS in docker-compose
+        python3 lib/tools/index-opensearch.py < output/info/image-info.json # index the JSON into OpenSearch
+        # go check out http://localhost:5601
+        docker-compose --file tools/dashboards/docker-compose-opensearch.yaml down # shut down OpenSearch when you're done
+        " "info"
+        display_alert "Done with" "targets-dashboard" "info"
+        return 0
+    fi
+
     ### Artifacts.
 
     # Reducer: artifacts.
```
```diff
@@ -224,7 +228,7 @@ function cli_json_info_run() {
     # If the image or artifact is up-to-date, it is still included in matrix, but the job is skipped.
     # If any of the matrixes is bigger than 255 items, an error is generated.
     if [[ "${ARMBIAN_COMMAND}" == "gha-matrix" ]]; then
-        if [[ "${CLEAN_MATRIX}" == "yes" ]]; then
+        if [[ "${CLEAN_MATRIX:-"yes"}" != "no" ]]; then
             display_alert "Cleaning GHA matrix output" "clean-matrix" "info"
             run_host_command_logged rm -fv "${BASE_INFO_OUTPUT_DIR}"/gha-*-matrix.json
         fi
```
In `armbian_register_commands` (CLI command registration):

```diff
@@ -25,12 +25,13 @@ function armbian_register_commands() {
     ["config-dump-json"]="config_dump_json" # implemented in cli_config_dump_json_pre_run and cli_config_dump_json_run
     ["config-dump-no-json"]="config_dump_json" # implemented in cli_config_dump_json_pre_run and cli_config_dump_json_run
 
     ["inventory"]="json_info" # implemented in cli_json_info_pre_run and cli_json_info_run
     ["targets"]="json_info" # implemented in cli_json_info_pre_run and cli_json_info_run
+    ["targets-dashboard"]="json_info" # implemented in cli_json_info_pre_run and cli_json_info_run
     ["debs-to-repo-json"]="json_info" # implemented in cli_json_info_pre_run and cli_json_info_run
     ["gha-matrix"]="json_info" # implemented in cli_json_info_pre_run and cli_json_info_run
     ["gha-workflow"]="json_info" # implemented in cli_json_info_pre_run and cli_json_info_run
     ["gha-template"]="json_info" # implemented in cli_json_info_pre_run and cli_json_info_run
 
     # These probably should be in their own separate CLI commands file, but for now they're together in jsoninfo.
     ["debs-to-repo-download"]="json_info" # implemented in cli_json_info_pre_run and cli_json_info_run
```
```diff
@@ -95,6 +96,10 @@ function armbian_register_commands() {
 
     ["artifact-config-dump-json"]='CONFIG_DEFS_ONLY="yes"'
 
+    # repo pipeline stuff is usually run on saved/restored artifacts for output/info, so don't clean them by default
+    ["debs-to-repo-download"]="CLEAN_MATRIX='no' CLEAN_INFO='no'"
+    ["debs-to-repo-reprepro"]="CLEAN_MATRIX='no' CLEAN_INFO='no'"
+
     # artifact shortcuts
     ["rootfs"]="WHAT='rootfs' ${common_cli_artifact_vars}"
 
```
In the Python inventory helpers (`armbian_utils` module):

```diff
@@ -16,12 +16,13 @@ import multiprocessing
 import os
 import re
 import subprocess
-import sys
 from pathlib import Path
 
+import sys
+
 
 REGEX_WHITESPACE_LINEBREAK_COMMA_SEMICOLON = r"[\s,;\n]+"
 
-ARMBIAN_CONFIG_REGEX_KERNEL_TARGET = r"^([export |declare -g]+)?KERNEL_TARGET=\"(.*)\""
+ARMBIAN_BOARD_CONFIG_REGEX_GENERIC = r"^(?!\s)(?:[export |declare \-g]?)+([A-Z0-9_]+)=(?:'|\")(.*)(?:'|\")"
 
 log: logging.Logger = logging.getLogger("armbian_utils")
```
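A quick sanity check of the new generic regex (not part of the commit; the sample board lines are invented):

```python
import re

ARMBIAN_BOARD_CONFIG_REGEX_GENERIC = r"^(?!\s)(?:[export |declare \-g]?)+([A-Z0-9_]+)=(?:'|\")(.*)(?:'|\")"

sample = """# Odroid HC4
BOARD_NAME="Odroid HC4"
export BOARDFAMILY='meson-sm1'
declare -g KERNEL_TARGET="legacy,current,edge"
  INDENTED="skipped, not top-level"
UNQUOTED=skipped-too
"""

# Only top-level, quoted, UPPER_CASE assignments match; bare/export/declare -g prefixes are all accepted.
print(re.findall(ARMBIAN_BOARD_CONFIG_REGEX_GENERIC, sample, re.MULTILINE))
# -> [('BOARD_NAME', 'Odroid HC4'), ('BOARDFAMILY', 'meson-sm1'), ('KERNEL_TARGET', 'legacy,current,edge')]
```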
```diff
@@ -107,28 +108,66 @@ def to_yaml(gha_workflow):
 
 # I've to read the first line from the board file, that's the hardware description in a pound comment.
 # Also, 'KERNEL_TARGET="legacy,current,edge"' which we need to parse.
-def armbian_parse_board_file_for_static_info(board_file, board_id):
+def armbian_parse_board_file_for_static_info(board_file, board_id, core_or_userpatched):
     file_handle = open(board_file, 'r')
     file_lines = file_handle.readlines()
     file_handle.close()
 
     file_lines.reverse()
     hw_desc_line = file_lines.pop()
+    file_lines.reverse()
     hw_desc_clean = None
     if hw_desc_line.startswith("# "):
         hw_desc_clean = hw_desc_line.strip("# ").strip("\n")
 
-    # Parse KERNEL_TARGET line.
-    kernel_targets = None
-    kernel_target_matches = re.findall(ARMBIAN_CONFIG_REGEX_KERNEL_TARGET, "\n".join(file_lines), re.MULTILINE)
-    if len(kernel_target_matches) == 1:
-        kernel_targets = kernel_target_matches[0][1].split(",")
+    # Parse generic bash vars, with a horrendous regex.
+    generic_vars = {}
+    generic_var_matches = re.findall(ARMBIAN_BOARD_CONFIG_REGEX_GENERIC, "\n".join(file_lines), re.MULTILINE)
+    for generic_var_match in generic_var_matches:
+        generic_vars[generic_var_match[0]] = generic_var_match[1]
 
+    kernel_targets = []
+    if "KERNEL_TARGET" in generic_vars:
+        kernel_targets = generic_vars["KERNEL_TARGET"].split(",")
+    if (len(kernel_targets) == 0) or (kernel_targets[0] == ""):
+        log.warning(f"KERNEL_TARGET not found in '{board_file}', syntax error?, missing quotes? stray comma?")
+
+    maintainers = []
+    if "BOARD_MAINTAINER" in generic_vars:
+        maintainers = generic_vars["BOARD_MAINTAINER"].split(" ")
+        maintainers = list(filter(None, maintainers))
+    else:
+        if core_or_userpatched == "core":
+            log.warning(f"BOARD_MAINTAINER not found in '{board_file}', syntax error?, missing quotes? stray space? missing info?")
+
+    board_has_video = True
+    if "HAS_VIDEO_OUTPUT" in generic_vars:
+        if generic_vars["HAS_VIDEO_OUTPUT"] == "no":
+            board_has_video = False
+
+    if "BOARDFAMILY" not in generic_vars:
+        log.warning(f"BOARDFAMILY not found in '{board_file}', syntax error?, missing quotes?")
+
+    # Add some more vars that are not in the board file, so we've a complete BOARD_TOP_LEVEL_VARS as well as first-level
+    extras: list[dict[str, any]] = [
+        {"name": "BOARD", "value": board_id},
+        {"name": "BOARD_SUPPORT_LEVEL", "value": (Path(board_file).suffix)[1:]},
+        {"name": "BOARD_FILE_HARDWARE_DESC", "value": hw_desc_clean},
+        {"name": "BOARD_POSSIBLE_BRANCHES", "value": kernel_targets},
+        {"name": "BOARD_MAINTAINERS", "value": maintainers},
+        {"name": "BOARD_HAS_VIDEO", "value": board_has_video},
+        {"name": "BOARD_CORE_OR_USERPATCHED", "value": core_or_userpatched}
+    ]
+
+    # Append the extras to the generic_vars dict.
+    for extra in extras:
+        generic_vars[extra["name"]] = extra["value"]
+
+    ret = {"BOARD_TOP_LEVEL_VARS": generic_vars}
+    # Append the extras to the top-level.
+    for extra in extras:
+        ret[extra["name"]] = extra["value"]
+
-    ret = {"BOARD": board_id, "BOARD_SUPPORT_LEVEL": (Path(board_file).suffix)[1:]}
-    if hw_desc_clean is not None:
-        ret["BOARD_FILE_HARDWARE_DESC"] = hw_desc_clean
-    if kernel_targets is not None:
-        ret["BOARD_POSSIBLE_BRANCHES"] = kernel_targets
     return ret
```
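Given a board file like the sketch earlier, the resulting inventory entry would look roughly like this (illustrative values; note the extras end up both at the top level and inside `BOARD_TOP_LEVEL_VARS`):

```json
{
  "BOARD_TOP_LEVEL_VARS": {
    "BOARD_NAME": "Odroid HC4",
    "BOARDFAMILY": "meson-sm1",
    "KERNEL_TARGET": "legacy,current,edge",
    "HAS_VIDEO_OUTPUT": "no",
    "BOARD": "odroidhc4",
    "BOARD_SUPPORT_LEVEL": "conf",
    "BOARD_FILE_HARDWARE_DESC": "Odroid HC4 quad-core SoC with SATA",
    "BOARD_POSSIBLE_BRANCHES": ["legacy", "current", "edge"],
    "BOARD_MAINTAINERS": ["rpardini"],
    "BOARD_HAS_VIDEO": false,
    "BOARD_CORE_OR_USERPATCHED": "core"
  },
  "BOARD": "odroidhc4",
  "BOARD_SUPPORT_LEVEL": "conf",
  "BOARD_FILE_HARDWARE_DESC": "Odroid HC4 quad-core SoC with SATA",
  "BOARD_POSSIBLE_BRANCHES": ["legacy", "current", "edge"],
  "BOARD_MAINTAINERS": ["rpardini"],
  "BOARD_HAS_VIDEO": false,
  "BOARD_CORE_OR_USERPATCHED": "core"
}
```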
```diff
@@ -178,8 +217,7 @@ def armbian_get_all_boards_inventory():
     # first, gather the board_info for every core board. if any fail, stop.
     info_for_board = {}
     for board in core_boards.keys():
-        board_info = armbian_parse_board_file_for_static_info(core_boards[board], board)
-        board_info["BOARD_CORE_OR_USERPATCHED"] = "core"
+        board_info = armbian_parse_board_file_for_static_info(core_boards[board], board, "core")
         # Core boards must have the KERNEL_TARGET defined.
         if "BOARD_POSSIBLE_BRANCHES" not in board_info:
             raise Exception(f"Core board '{board}' must have KERNEL_TARGET defined")
```
```diff
@@ -189,8 +227,7 @@ def armbian_get_all_boards_inventory():
     if armbian_paths["has_userpatches_path"]:
         userpatched_boards = armbian_get_all_boards_list(armbian_paths["userpatches_boards_path"])
         for uboard_name in userpatched_boards.keys():
-            uboard = armbian_parse_board_file_for_static_info(userpatched_boards[uboard_name], uboard_name)
-            uboard["BOARD_CORE_OR_USERPATCHED"] = "userpatched"
+            uboard = armbian_parse_board_file_for_static_info(userpatched_boards[uboard_name], uboard_name, "userpatched")
             is_new_board = not (uboard_name in info_for_board)
             if is_new_board:
                 log.debug(f"Userpatched Board {uboard_name} is new")
```
In `lib/tools/index-opensearch.py` (the OpenSearch indexer):

```diff
@@ -8,8 +8,8 @@
 # https://github.com/armbian/build/
 #
 import json
-import sys
 
+import sys
 from opensearchpy import OpenSearch # pip3 install opensearch-py
 
 
```
```diff
@@ -48,11 +48,17 @@ eprint('\nCreating index...')
 response_create = client.indices.create(index_name, body=index_body)
 # print(response_create)
 
-for obj in json_object:
-    # print(obj)
-    response = client.index(index=index_name, body=obj)
+counter = 0
 
-eprint("\nRefreshing index...")
+for obj in json_object:
+    # Skip desktop builds
+    if 'BUILD_DESKTOP' in obj['in']['vars']:
+        if obj['in']['vars']['BUILD_DESKTOP'] == 'yes':
+            continue
+    response = client.index(index=index_name, body=obj)
+    counter += 1
+
+eprint(f"\nRefreshing index after loading {counter}...")
 client.indices.refresh(index=index_name)
 
 eprint("\nDone.")
```
In the GHA runner-resolution helper (`resolve_gha_runner_tags_via_pipeline_gha_config`):

```diff
@@ -64,7 +64,7 @@ def resolve_gha_runner_tags_via_pipeline_gha_config(input: dict, artifact_name:
         ret = by_names_and_archs[artifact_name_and_arch]
         log.debug(f"Found 'by-name-and-arch' value '{artifact_name_and_arch}' config in input.pipeline.gha.runners, using '{ret}'")
 
-    log.info(f"Resolved GHA runs_on for name:'{artifact_name}' arch:'{artifact_arch}' to runs_on:'{ret}'")
+    log.debug(f"Resolved GHA runs_on for name:'{artifact_name}' arch:'{artifact_arch}' to runs_on:'{ret}'")
 
     return ret
 
```
In `targets-compositor.py`:

```diff
@@ -1,5 +1,5 @@
 #!/usr/bin/env python3
-
+import copy
 # ‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹‹
 # SPDX-License-Identifier: GPL-2.0
 # Copyright (c) 2023 Ricardo Pardini <ricardo@pardini.net>
```
```diff
@@ -36,15 +36,21 @@ with open(board_inventory_file, 'r') as f:
 all_boards_all_branches = []
 boards_by_support_level_and_branches = {}
 not_eos_boards_all_branches = []
+not_eos_with_video_boards_all_branches = []
 
 for board in board_inventory:
     for branch in board_inventory[board]["BOARD_POSSIBLE_BRANCHES"]:
-        all_boards_all_branches.append({"BOARD": board, "BRANCH": branch})
+        data_from_inventory = {"BOARD": board, "BRANCH": branch}
+        all_boards_all_branches.append(data_from_inventory)
+
         if board_inventory[board]["BOARD_SUPPORT_LEVEL"] not in boards_by_support_level_and_branches:
             boards_by_support_level_and_branches[board_inventory[board]["BOARD_SUPPORT_LEVEL"]] = []
-        boards_by_support_level_and_branches[board_inventory[board]["BOARD_SUPPORT_LEVEL"]].append({"BOARD": board, "BRANCH": branch})
+        boards_by_support_level_and_branches[board_inventory[board]["BOARD_SUPPORT_LEVEL"]].append(data_from_inventory)
+
         if board_inventory[board]["BOARD_SUPPORT_LEVEL"] != "eos":
-            not_eos_boards_all_branches.append({"BOARD": board, "BRANCH": branch})
+            not_eos_boards_all_branches.append(data_from_inventory)
+            if board_inventory[board]["BOARD_HAS_VIDEO"]:
+                not_eos_with_video_boards_all_branches.append(data_from_inventory)
 
 # get the third argv, which is the targets.yaml file.
 targets_yaml_file = sys.argv[3]
```
```diff
@@ -98,6 +104,8 @@ for target_name in targets["targets"]:
             to_add.extend(all_boards_all_branches)
         elif key == "not-eos":
             to_add.extend(not_eos_boards_all_branches)
+        elif key == "not-eos-with-video":
+            to_add.extend(not_eos_with_video_boards_all_branches)
         else:
             to_add.extend(boards_by_support_level_and_branches[key])
         log.info(f"Adding '{key}' from inventory to target '{target_name}': {len(to_add)} targets")
```
```diff
@@ -126,7 +134,64 @@ log.info(
 if len(invocations_dict) != len(invocations_unique):
     log.warning(f"Duplicate invocations found, de-duped from {len(invocations_dict)} to {len(invocations_unique)}")
 
+# A plain list
 all_invocations = list(invocations_unique.values())
 
+# Add information from inventory to each invocation, so it trickles down the pipeline.
+for invocation in all_invocations:
+    if invocation["vars"]["BOARD"] not in board_inventory:
+        log.error(f"Board '{invocation['vars']['BOARD']}' not found in inventory!")
+        sys.exit(3)
+    invocation["inventory"] = copy.deepcopy(board_inventory[invocation["vars"]["BOARD"]])  # deep copy, so we can modify it
+    # Add "virtual" BOARD_SLASH_BRANCH var, for easy filtering
+    invocation["inventory"]["BOARD_TOP_LEVEL_VARS"]['BOARD_SLASH_BRANCH'] = f"{invocation['vars']['BOARD']}/{invocation['vars']['BRANCH']}"
+
+# Allow filtering of invocations, using environment variable:
+# - TARGETS_FILTER_INCLUDE: only include invocations that match this query-string
+# For example: TARGETS_FILTER_INCLUDE="BOARD:xxx,BOARD:yyy"
+include_filter = os.environ.get("TARGETS_FILTER_INCLUDE", "").strip()
+if include_filter:
+    log.info(f"Filtering {len(all_invocations)} invocations to only include those matching: '{include_filter}'")
+    include_filter_list: list[dict[str, str]] = []
+    include_raw_split = include_filter.split(",")
+    for include_raw in include_raw_split:
+        include_split = include_raw.split(":")
+        if len(include_split) != 2:
+            log.error(f"Invalid include filter, wrong format: '{include_raw}'")
+            sys.exit(1)
+        if include_split[0].strip() == "" or include_split[1].strip() == "":
+            log.error(f"Invalid include filter, either key or value empty: '{include_raw}'")
+            sys.exit(1)
+        include_filter_list.append({"key": include_split[0].strip(), "value": include_split[1].strip()})
+
+    invocations_filtered = []
+    for invocation in all_invocations:
+        for include_filter in include_filter_list:
+            top_level_vars = invocation["inventory"]["BOARD_TOP_LEVEL_VARS"]
+            if include_filter["key"] not in top_level_vars:
+                log.warning(
+                    f"Problem with include filter, key '{include_filter['key']}' not found in inventory data for board '{invocation['vars']['BOARD']}'")
+                continue
+            filtered_key = top_level_vars[include_filter["key"]]
+            # If it is an array...
+            if isinstance(filtered_key, list):
+                if include_filter["value"] in filtered_key:
+                    invocations_filtered.append(invocation)
+                    break
+            else:
+                if filtered_key == include_filter["value"]:
+                    invocations_filtered.append(invocation)
+                    break
+
+    log.info(f"Filtered invocations to {len(invocations_filtered)} invocations after include filters.")
+    if len(invocations_filtered) == 0:
+        log.error(f"No invocations left after filtering '{include_filter}'!")
+        sys.exit(2)
+
+    all_invocations = invocations_filtered
+else:
+    log.info("No include filter set, not filtering invocations.")
+
 counter = 1
 for one_invocation in all_invocations:
     # target_id is the counter left-padded with zeros to 10 digits, plus the total number of invocations, left-padded with zeros to 10 digits.
```
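The filter semantics in isolation, as a self-contained sketch (it mirrors the loop above, minus the missing-key warning; the helper name and toy data are mine):

```python
def invocation_matches(filters: list[dict[str, str]], top_level_vars: dict) -> bool:
    # OR semantics: the first matching key:value pair includes the invocation
    for f in filters:
        value = top_level_vars.get(f["key"])
        if isinstance(value, list):  # list-valued keys, e.g. BOARD_MAINTAINERS, use membership
            if f["value"] in value:
                return True
        elif value == f["value"]:
            return True
    return False

filters = [{"key": "BOARD", "value": "odroidhc4"}, {"key": "BOARD_MAINTAINERS", "value": "rpardini"}]
print(invocation_matches(filters, {"BOARD": "odroidn2", "BOARD_MAINTAINERS": ["rpardini"]}))  # True: maintainer matches
print(invocation_matches(filters, {"BOARD": "odroidn2", "BOARD_MAINTAINERS": []}))            # False: neither matches
```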
File diff suppressed because one or more lines are too long