diff --git a/.wordlist.txt b/.wordlist.txt index 66bfde43e2..cd0fdcc8db 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -5261,4 +5261,29 @@ tcmalloc
tlsv
vLLM's
webp
-HugeTLB
\ No newline at end of file
+HugeTLB
+CIS
+CRDs
+ClusterIP
+CoreDNS
+DskipTests
+KinD
+Kubeconfig
+Kubelet
+Loopback
+NetworkPolicies
+StandaloneSessionClusterEntrypoint
+TaskManagerRunner
+Textfield
+aquasecurity
+dumpsys
+hiredis
+jemalloc
+kubeconfig
+kubelet
+leaderboards
+loopback
+menuconfig
+oss
+saas
+yq
\ No newline at end of file
diff --git a/content/install-guides/windows-perf-wpa-plugin.md b/content/install-guides/windows-perf-wpa-plugin.md index 15bc00ca61..02ad731658 100644 --- a/content/install-guides/windows-perf-wpa-plugin.md +++ b/content/install-guides/windows-perf-wpa-plugin.md @@ -117,7 +117,7 @@ wpa -addsearchdir %USERPROFILE%\Downloads\wpa-plugin-1.0.3
Set the `WPA_ADDITIONAL_SEARCH_DIRECTORIES` environment variable to the location of the `.dll` file.
-##### Option 3: Copy the `.dll` file to the `CustomDataSources` directory next to the WPA executable.
+##### Option 3: Copy the DLL to the CustomDataSources directory next to the WPA executable.
The default location is: `C:\\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\CustomDataSources`
diff --git a/content/learning-paths/cross-platform/adler32/summary-10.md b/content/learning-paths/cross-platform/adler32/summary-10.md index 0c87aef025..251238c601 100644 --- a/content/learning-paths/cross-platform/adler32/summary-10.md +++ b/content/learning-paths/cross-platform/adler32/summary-10.md @@ -47,13 +47,30 @@ The project includes:

### Implementations

-#### 1. Simple Implementation (`adler32-simple.c`)
+#### 1. Simple Implementation

-This is a straightforward C implementation following the standard Adler-32 algorithm definition. It processes the input data byte by byte, updating two 16-bit accumulators (`a` and `b`) modulo 65521 (the largest prime smaller than 2^16).
+The code in `adler32-simple.c` is a straightforward C implementation following the standard Adler-32 algorithm definition. It processes the input data byte by byte, updating two 16-bit accumulators (`a` and `b`) modulo 65521 (the largest prime smaller than 2^16).

-#### 2. NEON-Optimized Implementation (`adler32-neon.c`)
+#### 2. NEON-Optimized Implementation

-This implementation leverages ARM NEON SIMD (Single Instruction, Multiple Data) instructions to accelerate the checksum calculation. Key aspects include:
+The code in `adler32-neon.c` leverages ARM NEON SIMD (Single Instruction, Multiple Data) instructions to accelerate the checksum calculation. Key aspects include:
* Processing data in blocks (16 bytes at a time).
* Using NEON intrinsics (`vld1q_u8`, `vmovl_u8`, `vaddq_u16`, `vpaddlq_u16`, `vmulq_u16`, etc.) to perform parallel operations on data vectors.
* Calculating the sums `S1` (sum of bytes) and `S2` (weighted sum) for each block using vector operations.
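+
+To make the algorithm described above concrete, here is a minimal Python sketch of the scalar checksum (for illustration only; the project's actual implementations are the C files described above):
+
+```python
+def adler32(data: bytes) -> int:
+    """Reference Adler-32: two running sums reduced modulo 65521."""
+    MOD = 65521                # largest prime below 2^16
+    a, b = 1, 0
+    for byte in data:
+        a = (a + byte) % MOD   # S1: running sum of bytes (seeded with 1)
+        b = (b + a) % MOD      # S2: running sum of all intermediate S1 values
+    return (b << 16) | a       # pack b into the high 16 bits, a into the low
+
+assert adler32(b"Wikipedia") == 0x11E60398  # well-known test vector
+```
+
+The NEON version computes the same `a` and `b`, but consumes 16 bytes per iteration in vector lanes and folds the per-block partial sums `S1` and `S2` back into the two scalar accumulators.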
diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_images/ci-cd-new.png b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/ci-cd-new.png new file mode 100644 index 0000000000..17da0dab81 Binary files /dev/null and b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/ci-cd-new.png differ diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_images/editor-yml.png b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/editor-yml.png new file mode 100644 index 0000000000..3cef6e3a2d Binary files /dev/null and b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/editor-yml.png differ diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_images/gitlab-projects.png b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/gitlab-projects.png new file mode 100644 index 0000000000..c7bb727dc9 Binary files /dev/null and b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/gitlab-projects.png differ diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_images/new-pipeline.png b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/new-pipeline.png new file mode 100644 index 0000000000..5bba639d81 Binary files /dev/null and b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/new-pipeline.png differ diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_images/new-project.png b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/new-project.png new file mode 100644 index 0000000000..4445213a8c Binary files /dev/null and b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/new-project.png differ diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_images/new-yml.png b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/new-yml.png new file mode 100644 index 0000000000..d8e52dcc08 Binary files /dev/null and b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/new-yml.png differ diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_images/pipeline-execution.png b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/pipeline-execution.png new file mode 100644 index 0000000000..5e516c0fc4 Binary files /dev/null and b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/pipeline-execution.png differ diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_images/project-done.png b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/project-done.png new file mode 100644 index 0000000000..f7bc2c77fc Binary files /dev/null and b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/project-done.png differ diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_images/project-info.png b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/project-info.png new file mode 100644 index 0000000000..500ab8f3b8 Binary files /dev/null and b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/project-info.png differ diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_images/visual-pipeline.png b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/visual-pipeline.png new file mode 100644 index 0000000000..0a6728eb18 Binary files /dev/null and 
b/content/learning-paths/cross-platform/gitlab-managed-runners/_images/visual-pipeline.png differ
diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_index.md b/content/learning-paths/cross-platform/gitlab-managed-runners/_index.md new file mode 100644 index 0000000000..2f39932a2f --- /dev/null +++ b/content/learning-paths/cross-platform/gitlab-managed-runners/_index.md @@ -0,0 +1,57 @@
+---
+title: Build a Simple CI/CD Pipeline with GitLab-Hosted Runners
+
+draft: true
+cascade:
+  draft: true
+
+minutes_to_complete: 30
+
+who_is_this_for: This is an introductory topic for DevOps professionals who want to build a CI/CD pipeline with GitLab on Google Axion using GitLab-hosted runners.
+
+learning_objectives:
+  - Create a GitLab project
+  - Understand the basic structure of a pipeline script and how to use it
+  - Build and test a simple CI/CD pipeline using GitLab-hosted runners
+
+
+prerequisites:
+  - A valid GitLab account
+
+author: Mohamed Ismail
+
+### Tags
+skilllevels: Introductory
+subjects: CI-CD
+cloud_service_providers: Google Cloud
+
+armips:
+  - Neoverse
+
+tools_software_languages:
+  - GitLab
+
+operatingsystems:
+  - Linux
+
+### Cross-platform metadata only
+shared_path: true
+shared_between:
+  - servers-and-cloud-computing
+  - laptops-and-desktops
+  - embedded-and-microcontrollers
+
+further_reading:
+  - resource:
+      title: GitLab-hosted runners
+      link: https://docs.gitlab.com/ci/runners/hosted_runners/
+      type: documentation
+
+
+
+### FIXED, DO NOT MODIFY
+# ================================================================================
+weight: 1 # _index.md always has weight of 1 to order correctly
+layout: "learningpathall" # All files under learning paths have this same wrapper
+learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
+---
diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/_next-steps.md b/content/learning-paths/cross-platform/gitlab-managed-runners/_next-steps.md new file mode 100644 index 0000000000..05c53588f7 --- /dev/null +++ b/content/learning-paths/cross-platform/gitlab-managed-runners/_next-steps.md @@ -0,0 +1,8 @@
+---
+# ================================================================================
+# FIXED, DO NOT MODIFY THIS FILE
+# ================================================================================
+weight: 100 # Set to always be larger than the content in this path to be at the end of the navigation.
+title: "Next Steps" # Always the same, html page title.
+layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing.
+---
diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/info.md b/content/learning-paths/cross-platform/gitlab-managed-runners/info.md new file mode 100644 index 0000000000..cc40e6d597 --- /dev/null +++ b/content/learning-paths/cross-platform/gitlab-managed-runners/info.md @@ -0,0 +1,51 @@
+---
+title: "Important Information"
+weight: 5
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## What is a GitLab runner?
+A GitLab Runner works with GitLab CI/CD to run jobs in a pipeline. It acts as an agent that executes the jobs you define in your GitLab CI/CD configuration. Some key points to note about GitLab Runner:
+
+1. GitLab offers multiple types of runners - you can use GitLab-hosted runners, self-managed runners, or a combination of both. GitLab manages GitLab-hosted runners, while you install and manage self-managed runners on your own infrastructure.
+
+2. Each runner is configured with an executor - when you register a runner, you choose an executor, which determines the environment in which the job runs. Executors include Docker, Shell, and Kubernetes.
+
+3. Multi-architecture support - GitLab runners support multiple architectures, including **`x86/amd64`** and **`arm64`**.
+
+## What is Google Axion?
+Axion is Google's first Arm-based server processor, built using the Armv9 Neoverse V2 CPU. The VM instances are part of the **`C4A`** family of compute instances. To learn more about Google Axion, refer to the [Google Axion page](http://cloud.google.com/products/axion/).
+
+{{% notice Note %}}
+All the information in the next section comes from the official GitLab pages; it is provided here for convenience and can be changed by GitLab at any time. Refer to the [GitLab documentation](https://docs.gitlab.com/ci/runners/hosted_runners/) for more details and the latest updates.
+{{% /notice %}}
+
+## GitLab-Hosted Runner Facts
+
+1. Each of your jobs runs in a newly provisioned VM, which is dedicated to the specific job.
+
+2. The storage is shared by the operating system, the container image with pre-installed software, and a copy of your cloned repository. This means that the available free disk space for your jobs to use is reduced.
+
+3. Untagged jobs run on the **`small`** Linux x86-64 runner.
+
+4. Jobs handled by hosted runners on GitLab.com time out after 3 hours, regardless of the timeout configured in a project.
+
+5. The virtual machine where your job runs has **`sudo`** access with no password.
+
+6. Firewall rules only allow outbound communication from the ephemeral VM to the public internet.
+
+7. Inbound communication from the public internet to the ephemeral VM is not allowed.
+
+8. Firewall rules do not permit communication between VMs.
+
+9. The only internal communication allowed to the ephemeral VMs is from the runner manager.
+
+10. Ephemeral runner VMs serve a single job and are deleted right after the job execution.
+
+11. This isolation applies at the job level as well as the network level: if a project's pipeline executes three jobs, each job runs in its own dedicated ephemeral VM.
+
+12. GitLab sends the command to remove the ephemeral runner VM to the Google Compute API immediately after the CI job completes. The [Google Compute Engine hypervisor](https://cloud.google.com/blog/products/gcp/7-ways-we-harden-our-kvm-hypervisor-at-google-cloud-security-in-plaintext) takes over the task of securely deleting the virtual machine and associated data.
+
+13. The hosted runners share a distributed cache stored in a Google Cloud Storage (GCS) bucket. Cache contents not updated in the last 14 days are automatically removed, based on the [object lifecycle management policy](https://cloud.google.com/storage/docs/lifecycle). The maximum size of an uploaded cache artifact can be 5 GB after the cache becomes a compressed archive.
\ No newline at end of file
diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/pipeline.md b/content/learning-paths/cross-platform/gitlab-managed-runners/pipeline.md new file mode 100644 index 0000000000..2a218102a7 --- /dev/null +++ b/content/learning-paths/cross-platform/gitlab-managed-runners/pipeline.md @@ -0,0 +1,90 @@
+---
+title: "Create and Test a New Simple CI/CD Pipeline"
+weight: 15
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## How Do You Create a CI/CD Pipeline with GitLab-Hosted Runners?
+
+To create the pipeline, we only need to create a new **`.gitlab-ci.yml`** file in our project and define its stages. Nothing else is needed, since GitLab-hosted runners are readily available to every project and don't need to be created or instantiated by GitLab users.
+
+Once we run our pipeline with the correct **`tags`**, GitLab creates everything we need. It is as simple as that.
+
+## How Do You Create a .gitlab-ci.yml File in a GitLab Project?
+
+1. Start by going to the main page of the project where we will create the CI/CD pipeline.
+
+2. We can create the **`.gitlab-ci.yml`** file by using one of the two options circled in red in the image below.
+![CI-CD-New #center](_images/ci-cd-new.png)
+
+Option 1: Click on the **`Set up CI/CD`** button and follow the wizard to create an empty **`.gitlab-ci.yml`** file.
+
+Option 2: Click on the "+" button. From the popup menu, click on the **`New File`** option, name the file **`.gitlab-ci.yml`**, and then click on the **`Commit Changes`** button on the top right-hand side, as in the image below (add any message as your commit message).
+![New-YML #center](_images/new-yml.png)
+
+3. A page like the one in the image below will be visible with our **`.gitlab-ci.yml`** file. From here, click on the **`Edit`** button. A menu will pop up; click on **`Edit in pipeline editor`**, which allows us to add our CI/CD script.
+![Editor-YML #center](_images/editor-yml.png)
+
+4. In the pipeline editor, copy and paste the following YAML script and click on **`Commit changes`** (add any relevant message as your commit message).
+```yaml
+# This file is a template, and might need editing before it works on your project.
+# This is a sample GitLab CI/CD configuration file that should run without any modifications.
+# It demonstrates a basic 3 stage CI/CD pipeline. Instead of real tests or scripts,
+# it uses echo commands to simulate the pipeline execution.
+#
+# A pipeline is composed of independent jobs that run scripts, grouped into stages.
+# Stages run in sequential order, but jobs within stages run in parallel.
+#
+# For more information, see: https://docs.gitlab.com/ee/ci/yaml/#stages
+#
+# You can copy and paste this template into a new `.gitlab-ci.yml` file.
+# You should not add this template to an existing `.gitlab-ci.yml` file by using the `include:` keyword.
+#
+# To contribute improvements to CI/CD templates, please follow the Development guide at:
+# https://docs.gitlab.com/development/cicd/templates/
+# This specific template is located at:
+# https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Getting-Started.gitlab-ci.yml
+
+stages: # List of stages for jobs, and their order of execution
+  - build
+  - test
+  - deploy
+
+build-job: # This job runs in the build stage, which runs first.
+  stage: build
+  tags:
+    - saas-linux-small-arm64 # Instruct GitLab to use its own hosted runners on a small Linux Arm64 instance.
+  script:
+    - echo "Compiling the code..."
+    - echo "Compile complete."
+
+unit-test-job: # This job runs in the test stage.
+  stage: test # It only starts when the job in the build stage completes successfully.
+  tags:
+    - saas-linux-small-arm64 # Instruct GitLab to use its own hosted runners on a small Linux Arm64 instance.
+  script:
+    - echo "Running unit tests... This will take about 60 seconds."
+    - sleep 60
+    - echo "Code coverage is 90%"
+
+lint-test-job: # This job also runs in the test stage.
+  stage: test # It can run at the same time as unit-test-job (in parallel).
+  tags:
+    - saas-linux-small-arm64 # Instruct GitLab to use its own hosted runners on a small Linux Arm64 instance.
+  script:
+    - echo "Linting code... This will take about 10 seconds."
+    - sleep 10
+    - echo "No lint issues found."
+
+deploy-job: # This job runs in the deploy stage.
+  stage: deploy # It only runs when *both* jobs in the test stage complete successfully.
+  tags:
+    - saas-linux-small-arm64 # Instruct GitLab to use its own hosted runners on a small Linux Arm64 instance.
+  environment: production
+  script:
+    - echo "Deploying application..."
+    - echo "Application successfully deployed."
+```
+5. Once you commit the file updates, GitLab checks the script for errors and tries to execute the pipeline directly.
diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/project.md b/content/learning-paths/cross-platform/gitlab-managed-runners/project.md new file mode 100644 index 0000000000..1ad8bea1a2 --- /dev/null +++ b/content/learning-paths/cross-platform/gitlab-managed-runners/project.md @@ -0,0 +1,37 @@
+---
+title: "Create a New Project/Repo in GitLab"
+weight: 10
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Where Should We Start?
+
+Start by logging into your GitLab account, or create a new one on the [GitLab](https://gitlab.com/) main page.
+
+We will need to create a new project/repo that will contain all our project files, including our **`CI/CD`** pipeline configuration file.
+
+We can also choose to use any previously created project in our GitLab account. If that is the case, simply open the project, skip the rest of the steps on the current page, and move to the next steps in this tutorial.
+
+## Create a New Project in GitLab
+
+1. From the Home page, click on the **`Projects`** icon in the left-hand side panel, as shown in the image below.
+
+2. Click on the **`New Project`** button on the top right-hand side, as shown in the image below.
+![Gitlab-Projects #center](_images/gitlab-projects.png)
+
+3. We will get a new screen like the image below with multiple options. You can choose either of the two options highlighted in red in the image below.
+
+{{% notice Note %}}
+If we choose option 2, we will need to choose the **`GitLab CI/CD components`** option from the list of templates.
+{{% /notice %}}
+
+![New-Project #center](_images/new-project.png)
+
+4. Regardless of which option we choose, we will get a screen like the image below where we need to fill in the fields highlighted in red. The first field is the **`Project Name`**, which we will set to **`CI-CD Runner`**. In the second field, choose any option from the **`Project Url`** list, then click on the **`Create Project`** button at the end of the page.
+![Project-Info #center](_images/project-info.png)
+
+If we did everything correctly, we should get a screen like the one in the image below.
+![Project-Done #center](_images/project-done.png)
+
diff --git a/content/learning-paths/cross-platform/gitlab-managed-runners/results.md b/content/learning-paths/cross-platform/gitlab-managed-runners/results.md new file mode 100644 index 0000000000..3507c8bad9 --- /dev/null +++ b/content/learning-paths/cross-platform/gitlab-managed-runners/results.md @@ -0,0 +1,56 @@
+---
+title: "Pipeline Script Explanation and Test Results"
+weight: 20
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Pipeline Script Explanation
+
+The pipeline script has multiple sections, where each section instructs the pipeline operator on what to do or use and what each stage looks like.
+
+### First Section: Stages
+
+In this section, we describe how many sequential stages our pipeline has and what their names are (for example, **`Build, Test and Deploy`**). If we would like all the stages or jobs to run simultaneously, we simply don't define this section.
+
+### Second Section: The Build Job in the Build Stage
+
+In this section, we define the build job as part of the Build stage. This job runs on a GitLab-hosted runner that uses Linux on a small Arm64 instance.
+
+{{% notice Important Note %}}
+GitLab offers three Arm64-based instance sizes that use Linux as their OS:
+
+- saas-linux-small-arm64
+- saas-linux-medium-arm64
+- saas-linux-large-arm64
+
+For more information about all Arm and other available GitLab-hosted runners, check the [GitLab-Hosted Runners](https://docs.gitlab.com/ci/runners/hosted_runners/linux/) page.
+
+{{% /notice %}}
+
+### Other Sections
+
+The rest of the sections follow the same pattern. You will notice that the **`Test`** stage, for example, has two jobs in it (unit-test-job and lint-test-job). The **`Deploy`** stage here has only one job, called **`deploy-job`**.
+As you learn more about YAML scripting, you will be able to add much more complex functionality to your pipelines.
+
+{{% notice Note %}}
+GitLab offers a lot of documentation on how to create pipelines that fit different needs, and also offers common templates for them. You can access them from the [Use CI/CD to build your application](https://docs.gitlab.com/topics/build_your_application/) page.
+{{% /notice %}}
+
+## How to Run Your Pipeline and Check the Results
+
+From the left-hand side panel, navigate to **`Build`**, then **`Pipeline`**, then click on the **`New pipeline`** button on the top right-hand side, just like the image below. In the new window, click on the **`New pipeline`** button again and your pipeline will start to execute.
+![New-Pipeline #center](_images/new-pipeline.png)
+
+To check the status of your pipeline and the output of any of its jobs, simply click on any of the **`Jobs`**, as in the image below (with red rectangles around them).
+![pipeline-execution #center](_images/pipeline-execution.png)
+
+## Helpful GitLab Tools
+
+If you navigate to your pipeline editor from before, you will notice that the page has more tabs than just the **`Edit`** tab. ![visual-pipeline #center](_images/visual-pipeline.png)
+
+### The other tabs are:
+
+1. Visualize: visualizes your pipeline as you edit its components, which can be very helpful, especially for complex pipelines.
+2. Validate: validates your pipeline script as you edit and save it, so that you can catch any issues with your code early on.
diff --git a/content/learning-paths/embedded-and-microcontrollers/_index.md b/content/learning-paths/embedded-and-microcontrollers/_index.md index 8a55dda130..0604985236 100644 --- a/content/learning-paths/embedded-and-microcontrollers/_index.md +++ b/content/learning-paths/embedded-and-microcontrollers/_index.md @@ -9,18 +9,18 @@ key_ip:
- Ethos-U
maintopic: true
operatingsystems_filter:
-- Android: 1
+- Android: 2
- Baremetal: 30
-- Linux: 32
+- Linux: 34
- macOS: 7
- RTOS: 10
-- Windows: 4
+- Windows: 5
subjects_filter:
-- CI-CD: 5
+- CI-CD: 6
- Containers and Virtualization: 6
- Embedded Linux: 4
- Libraries: 3
-- ML: 17
+- ML: 18
- Performance and Architecture: 22
- RTOS Fundamentals: 5
- Security: 2
@@ -54,15 +54,16 @@ tools_software_languages_filter:
- DSTREAM: 2
- Edge AI: 3
- Edge Impulse: 2
-- ExecuTorch: 4
+- ExecuTorch: 5
- FastAPI: 1
- FPGA: 1
- Fusion 360: 1
- FVP: 10
- GCC: 9
+- GenAI: 1
- Generative AI: 2
- GitHub: 3
-- GitLab: 1
+- GitLab: 2
- gpiozero: 1
- Himax SDK: 1
- Hugging Face: 3
@@ -75,7 +76,7 @@
- Kubernetes: 1
- lgpio: 1
- Linux kernel: 1
-- LLM: 2
+- LLM: 3
- MCP: 1
- MPS3: 1
- MXNet: 1
@@ -85,8 +86,8 @@
- Paddle: 1
- Performance analysis: 1
- Porcupine: 1
-- Python: 8
-- PyTorch: 3
+- Python: 9
+- PyTorch: 4
- QEMU: 1
- Raspberry Pi: 7
- Remote.It: 1
diff --git a/content/learning-paths/embedded-and-microcontrollers/cmsis_rtx_vs/initialize.md b/content/learning-paths/embedded-and-microcontrollers/cmsis_rtx_vs/initialize.md index 4b15b001ec..7933581070 100644 --- a/content/learning-paths/embedded-and-microcontrollers/cmsis_rtx_vs/initialize.md +++ b/content/learning-paths/embedded-and-microcontrollers/cmsis_rtx_vs/initialize.md @@ -13,7 +13,7 @@ When setting up the project's Run-Time Environment, ensure you add the appropria
Once this is done, the `RTX5` initialization code is typically the same. It involves setting up the `SysTick` timer with the [SystemCoreClockUpdate()](https://www.keil.com/pack/doc/CMSIS/Core/html/group__system__init__gr.html#gae0c36a9591fe6e9c45ecb21a794f0f0f) function, then initializing and starting the RTOS.
-## Create `main()`
+## Create the main() function
Return to the `CMSIS` view.
diff --git a/content/learning-paths/embedded-and-microcontrollers/cmsis_rtx_vs/threads.md b/content/learning-paths/embedded-and-microcontrollers/cmsis_rtx_vs/threads.md index 15ebb7ba52..dca0d0f75a 100644 --- a/content/learning-paths/embedded-and-microcontrollers/cmsis_rtx_vs/threads.md +++ b/content/learning-paths/embedded-and-microcontrollers/cmsis_rtx_vs/threads.md @@ -11,7 +11,7 @@ In this step, you will implement the main RTOS thread (`app_main`), which is pri
You will create three threads. The number and naming of the threads are flexible, so feel free to adjust as needed.
-## Create `app_main`
+## Create the app_main() function
Click on the `+` icon within the `Source Files` Group, and add a new file `app_main.c`. Populate with the below.
@@ -28,6 +28,7 @@ void app_main (void *argument) {
  osThreadNew(thread3, NULL, NULL); // Create thread3
}
```
+
## Create Threads
Now you can implement the functionality of the threads themselves. Start with a simple example. Each thread will say hello, and then pause for a period, forever.
diff --git a/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/_index.md b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/_index.md new file mode 100644 index 0000000000..05c9989cbb --- /dev/null +++ b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/_index.md @@ -0,0 +1,70 @@
+---
+title: Customer Support Chatbot with Llama and ExecuTorch on Arm-Based Mobile Devices (with Agentic AI Capabilities)
+
+draft: true
+cascade:
+  draft: true
+
+minutes_to_complete: 60
+
+who_is_this_for: This Learning Path is designed for developers with basic knowledge of Python, mobile development, and machine learning concepts. It guides you through creating an on-device customer support chatbot using Meta's Llama models deployed via PyTorch's ExecuTorch runtime. The focus is on Arm-based Android devices. The chatbot will handle common customer queries (for example, product information and troubleshooting) with low latency, privacy (no cloud dependency), and optimized performance. It also incorporates agentic AI capabilities, transforming the chatbot from reactive (simple Q&A) to proactive and autonomous. Agentic AI enables the bot to plan multi-step actions, use external tools, reason over user intent, and adapt responses dynamically. This is achieved by extending the core LLM with tool-calling mechanisms and multi-agent orchestration.
+
+learning_objectives:
+  - Explain the architecture and capabilities of Llama models (e.g., Llama 3.2 1B/3B) for mobile use.
+  - Master the process of quantizing LLMs (e.g., 4-bit PTQ) to reduce model size and enable efficient inference on resource-constrained mobile devices.
+  - Gain proficiency in using ExecuTorch to export PyTorch models to .pte format for on-device deployment.
+  - Learn to leverage Arm-specific optimizations (e.g., XNNPACK, KleidiAI) to achieve 2-3x faster inference on Arm-based Android devices.
+  - Implement real-time inference with Llama models, enabling seamless customer support interactions (e.g., handling FAQs, troubleshooting).
+
+prerequisites:
+  - Basic understanding of machine learning and deep learning, including familiarity with concepts like supervised learning, neural networks, and transfer learning, and an understanding of model training, validation, and overfitting.
+  - Familiarity with deep learning frameworks, including experience with PyTorch for building and training neural networks, and knowledge of Hugging Face Transformers for working with pre-trained LLMs.
+  - An Arm-powered smartphone with the i8mm feature running Android, with 16GB of RAM.
+  - A USB cable to connect your smartphone to your development machine.
+  - An AWS Graviton4 r8g.16xlarge instance to test Arm performance optimizations, or any [Arm based instance](/learning-paths/servers-and-cloud-computing/csp/) from a cloud service provider, or an on-premises Arm server or Arm-based laptop.
+  - Android Debug Bridge (adb) installed on your device. Follow the steps in [adb](https://developer.android.com/tools/adb) to install Android SDK Platform Tools. The adb tool is included in this package.
+  - Java 17 JDK. Follow the steps in [Java 17 JDK](https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html) to download and install the JDK on your host machine.
+  - Android Studio. Follow the steps in [Android Studio](https://developer.android.com/studio) to download and install Android Studio on your host machine.
+  - Python 3.10.
+
+author: Parichay Das
+
+### Tags
+skilllevels: Introductory
+subjects: ML
+armips:
+  - Neoverse
+
+tools_software_languages:
+  - LLM
+  - GenAI
+  - Python
+  - PyTorch
+  - ExecuTorch
+operatingsystems:
+  - Linux
+  - Windows
+  - Android
+
+
+further_reading:
+  - resource:
+      title: Hugging Face Documentation
+      link: https://huggingface.co/docs
+      type: documentation
+  - resource:
+      title: PyTorch Documentation
+      link: https://pytorch.org/docs/stable/index.html
+      type: documentation
+  - resource:
+      title: Android
+      link: https://www.android.com/
+      type: website
+
+
+### FIXED, DO NOT MODIFY
+# ================================================================================
+weight: 1 # _index.md always has weight of 1 to order correctly
+layout: "learningpathall" # All files under learning paths have this same wrapper
+learning_path_main_page: "yes" # This should be surfaced when looking for related content. Only set for _index.md of learning path content.
+---
diff --git a/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/_next-steps.md b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/_next-steps.md new file mode 100644 index 0000000000..c3db0de5a2 --- /dev/null +++ b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/_next-steps.md @@ -0,0 +1,8 @@
+---
+# ================================================================================
+# FIXED, DO NOT MODIFY THIS FILE
+# ================================================================================
+weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation.
+title: "Next Steps" # Always the same, html page title.
+layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing.
+---
diff --git a/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/example-picture.png b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/example-picture.png new file mode 100644 index 0000000000..c69844bed4 Binary files /dev/null and b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/example-picture.png differ
diff --git a/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-1.md b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-1.md new file mode 100644 index 0000000000..224fa4013e --- /dev/null +++ b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-1.md @@ -0,0 +1,32 @@
+---
+title: Overview
+weight: 2
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Understanding Llama: Meta's Large Language Model
+Llama is a family of large language models trained using publicly available datasets. These models demonstrate strong performance across a range of natural language processing (NLP) tasks, including language translation, question answering, and text summarization.
+
+In addition to their analytical capabilities, Llama models can generate human-like, coherent, and contextually relevant text, making them highly effective for applications that rely on natural language generation. Consequently, they serve as powerful tools in areas such as chatbots, virtual assistants, and language translation, as well as in creative and content-driven domains where producing natural and engaging text is essential.
+
+Please note that the models are subject to the [acceptable use policy](https://github.com/meta-llama/llama/blob/main/USE_POLICY.md) and this [responsible use guide](https://github.com/meta-llama/llama/blob/main/RESPONSIBLE_USE_GUIDE.md).
+
+
+
+## Quantization
+A practical approach to make models fit within smartphone memory constraints is through 4-bit groupwise per-token dynamic quantization of all linear layers. In this technique, dynamic quantization is applied to activations—meaning the quantization parameters are computed at runtime based on the observed minimum and maximum activation values. Meanwhile, the model weights are statically quantized, where each channel is quantized in groups using 4-bit signed integers. This method significantly reduces memory usage while maintaining model performance for on-device inference.
+
+For further information, refer to [torchao: PyTorch Architecture Optimization](https://github.com/pytorch-labs/ao/).
+
+The table below evaluates WikiText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness).
+
+The results are for two different group sizes, with max_seq_len 2048, and 1000 samples:
+
+|Model | Baseline (FP32) | Groupwise 4-bit (128) | Groupwise 4-bit (256)
+|--------|-----------------| ---------------------- | ---------------
+|Llama 2 7B | 9.2 | 10.2 | 10.7
+|Llama 3 8B | 7.9 | 9.4 | 9.7
+
+Note that group sizes less than 128 were not enabled in this example, since the model was still too large. This is because current efforts have focused on enabling FP32, and support for FP16 is under way.
diff --git a/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-2.md b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-2.md new file mode 100644 index 0000000000..195ad5f4cc --- /dev/null +++ b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-2.md @@ -0,0 +1,79 @@
+---
+title: Environment Setup
+weight: 3
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Android NDK and Android Studio - Environment Setup
+
+#### Required Platform
+- An AWS Graviton4 r8g.16xlarge instance to test Arm performance optimizations, or any [Arm based instance](/learning-paths/servers-and-cloud-computing/csp/) from a cloud service provider, or an on-premises Arm server or Arm-based laptop.
+- An Arm-powered smartphone with the i8mm feature running Android, with 16GB of RAM.
+- A USB cable to connect your smartphone to your development machine.
+
+The installation and configuration of Android Studio can be accomplished through the following steps:
+1. Download and install the latest version of [Android Studio](https://developer.android.com/studio).
+2. Launch Android Studio and access the Settings dialog.
+3. Navigate to Languages & Frameworks → Android SDK.
+4. Under the SDK Platforms tab, ensure that Android 14.0 ("UpsideDownCake") is selected.
+
+Next, proceed to install the required version of the Android NDK by first setting up the Android Command Line Tools.
+Linux:
+```bash
+curl https://dl.google.com/android/repository/commandlinetools-linux-11076708_latest.zip -o commandlinetools.zip
+unzip commandlinetools.zip
+./commandlinetools/bin/sdkmanager --install "ndk;28.0.12433566"
+```
+Install the NDK in the same directory where Android Studio has installed the SDK, which is typically located at ~/Library/Android/sdk by default. Then, configure the necessary environment variables as follows:
+```bash
+export ANDROID_HOME="$(realpath ~/Library/Android/sdk)"
+export PATH=$ANDROID_HOME/cmdline-tools/bin/:$PATH
+sdkmanager --sdk_root="${ANDROID_HOME}" --install "ndk;28.0.12433566"
+export ANDROID_NDK=$ANDROID_HOME/ndk/28.0.12433566/
+```
+
+#### Install Java 17 JDK
+1. Open the Java SE 17 Archive [Downloads](https://www.oracle.com/java/technologies/javase/jdk17-archive-downloads.html) page in your browser.
+2. Select an appropriate download for your development machine operating system.
+
+#### Install Git and CMake
+```bash
+sudo apt-get install git cmake
+```
+
+#### Install Python 3.10
+```bash
+sudo apt-get install python3.10
+```
+
+#### Set up ExecuTorch
+ExecuTorch is an end-to-end framework designed to facilitate on-device inference across a wide range of mobile and edge platforms, including wearables, embedded systems, and microcontrollers. As a component of the PyTorch Edge ecosystem, it streamlines the efficient deployment of PyTorch models on edge devices. For further details, refer to the [ExecuTorch Overview](https://pytorch.org/executorch/stable/overview/).
+
+It is recommended to create an isolated Python environment to install the ExecuTorch dependencies. Instructions are available for setting up either a Python virtual environment or a Conda virtual environment—you only need to choose one of these options.
+
+##### Install Required Tools (Python environment setup)
+```bash
+python3 -m venv exec_env
+source exec_env/bin/activate
+pip install torch torchvision torchaudio
+pip install executorch
+```
+##### Clone Required Repositories
+```bash
+git clone https://github.com/pytorch/executorch.git
+git clone https://github.com/pytorch/text.git
+```
+##### Download Pretrained Model (Llama 3.1 Instruct)
+Download the quantized model weights optimized for mobile deployment from either the Meta AI Hub or Hugging Face.
+```bash
+git lfs install
+git clone https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct
+```
+
+##### Verify Arm SDK Path
+```bash
+ANDROID_SDK_ROOT=/Users/<username>/Library/Android/sdk
+ANDROID_NDK_HOME=$ANDROID_SDK_ROOT/ndk/28.0.12433566
+```
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-3.md b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-3.md new file mode 100644 index 0000000000..c05ddda728 --- /dev/null +++ b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-3.md @@ -0,0 +1,187 @@
+---
+title: Model Preparation and Conversion
+weight: 4
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+To begin working with Llama 3, the pre-trained model parameters can be accessed through Meta's Llama Downloads page. Users are required to request access by submitting their details and reviewing and accepting the Responsible Use Guide. Upon approval, a license and a download link—valid for 24 hours—are provided. For this exercise, the Llama 3.2 1B Instruct model is utilized; however, the same procedures can be applied to other available variants with only minor modifications.
+
+Convert the model into an ExecuTorch-compatible format optimized for Arm devices.
+## Script the Model
+
+```python
+import torch
+from transformers import AutoModelForCausalLM
+
+model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.float16)
+scripted_model = torch.jit.script(model)
+scripted_model.save("llama_exec.pt")
+
+```
+
+Install the llama-stack package from pip.
+```bash
+pip install llama-stack
+```
+
+Run the command to download, and paste the download link from the email when prompted.
+```bash
+llama model download --source meta --model-id Llama3.2-1B-Instruct
+```
+
+When the download is finished, the installation path is printed as output.
+```
+Successfully downloaded model to $HOME/.llama/checkpoints/Llama3.2-1B-Instruct
+```
+Verify by viewing the downloaded files under this path:
+```
+ls $HOME/.llama/checkpoints/Llama3.2-1B-Instruct
+checklist.chk consolidated.00.pth params.json tokenizer.model
+
+```
+
+Export the model and generate a .pte file by running the appropriate command. This command will export the model and save the resulting file in your current working directory.
+```bash
+python3 -m examples.models.llama.export_llama \
+--checkpoint $HOME/.llama/checkpoints/Llama3.2-1B-Instruct/consolidated.00.pth \
+--params $HOME/.llama/checkpoints/Llama3.2-1B-Instruct/params.json \
+-kv --use_sdpa_with_kv_cache -X --xnnpack-extended-ops -qmode 8da4w \
+--group_size 64 -d fp32 \
+--metadata '{"get_bos_id":128000, "get_eos_ids":[128009, 128001, 128006, 128007]}' \
+--embedding-quantize 4,32 \
+--output_name="llama3_1B_kv_sdpa_xnn_qe_4_64_1024_embedding_4bit.pte" \
+--max_seq_length 1024 \
+--max_context_length 1024
+```
+
+Because Llama 3 has a larger vocabulary size, it is recommended to quantize the embeddings using the parameter --embedding-quantize 4,32. This helps to further optimize memory usage and reduce the overall model size.
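+
+To build intuition for what `-qmode 8da4w` with `--group_size 64` does on the weight side, here is a small, self-contained PyTorch sketch of groupwise 4-bit weight quantization (an illustration of the idea only, not the ExecuTorch or torchao implementation):
+
+```python
+import torch
+
+def quantize_groupwise_4bit(w: torch.Tensor, group_size: int = 64):
+    """Quantize each row of `w` in groups, with one scale per group of weights."""
+    out_features, in_features = w.shape
+    groups = w.reshape(out_features, in_features // group_size, group_size)
+    # Pick one scale per group so the largest magnitude maps onto the int4 range.
+    scales = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
+    # int4 values are stored in int8 containers; PyTorch has no native 4-bit dtype.
+    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)
+    return q, scales
+
+def dequantize(q: torch.Tensor, scales: torch.Tensor) -> torch.Tensor:
+    return (q.float() * scales).reshape(q.shape[0], -1)
+
+w = torch.randn(32, 256)  # a toy linear-layer weight matrix
+q, s = quantize_groupwise_4bit(w)
+err = (w - dequantize(q, s)).abs().mean()
+print(f"mean absolute rounding error: {err.item():.4f}")
+```
+
+The "8da" half of the mode name refers to 8-bit dynamic quantization of the activations at runtime; the weight-side change (roughly 4 bits plus one shared scale per group of 64 weights, instead of 32 bits per weight) is where most of the size reduction comes from.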
+
+
+###### Load a pre-fine-tuned model (from Hugging Face)
+- Example: meta-llama/Llama-3-8B-Instruct or a customer-support fine-tuned variant
+
+###### Model Optimization for Arm (Understanding Quantization)
+- Reduces model precision (for example, 32-bit → 8-bit)
+- Decreases memory footprint (~4x reduction)
+- Speeds up inference on CPU
+- Minimal accuracy loss for most tasks
+
+###### Apply Dynamic Quantization
+- Create optimize_model.py
+
+```python
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from torch.quantization import quantize_dynamic
+import time
+import os
+
+def load_base_model(model_name):
+    """Load the base model"""
+    print(f"Loading base model: {model_name}")
+
+    tokenizer = AutoTokenizer.from_pretrained(model_name)
+    tokenizer.pad_token = tokenizer.eos_token
+
+    model = AutoModelForCausalLM.from_pretrained(
+        model_name,
+        torch_dtype=torch.float32,
+        device_map=None,
+        low_cpu_mem_usage=True
+    )
+    model.eval()
+
+    return model, tokenizer
+
+def apply_quantization(model):
+    """Apply dynamic quantization"""
+    print("Applying dynamic quantization...")
+
+    quantized_model = quantize_dynamic(
+        model,
+        {torch.nn.Linear},  # Quantize linear layers
+        dtype=torch.qint8
+    )
+
+    return quantized_model
+
+def test_model(model, tokenizer, prompt):
+    """Test model with a sample prompt"""
+    inputs = tokenizer(prompt, return_tensors="pt")
+
+    start_time = time.time()
+    with torch.no_grad():
+        outputs = model.generate(
+            inputs.input_ids,
+            max_new_tokens=100,
+            do_sample=False,
+            pad_token_id=tokenizer.eos_token_id
+        )
+    inference_time = time.time() - start_time
+
+    response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+
+    return response, inference_time
+
+def main():
+    model_name = "meta-llama/Meta-Llama-3-8B-Instruct"
+
+    # Load base model
+    base_model, tokenizer = load_base_model(model_name)
+
+    # Test base model
+    test_prompt = "How do I track my order?"
+    print("\nTesting base model...")
+    response, base_time = test_model(base_model, tokenizer, test_prompt)
+    print(f"Base model inference time: {base_time:.2f}s")
+
+    # Apply quantization
+    quantized_model = apply_quantization(base_model)
+
+    # Test quantized model
+    print("\nTesting quantized model...")
+    response, quant_time = test_model(quantized_model, tokenizer, test_prompt)
+    print(f"Quantized model inference time: {quant_time:.2f}s")
+    print(f"Speedup: {base_time / quant_time:.2f}x")
+
+    # Save quantized model
+    save_dir = "./models/quantized_llama3"
+    os.makedirs(save_dir, exist_ok=True)
+
+    torch.save(quantized_model.state_dict(), f"{save_dir}/model.pt")
+    tokenizer.save_pretrained(save_dir)
+
+    print(f"\nQuantized model saved to: {save_dir}")
+
+if __name__ == "__main__":
+    main()
+
+```
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-4.md b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-4.md new file mode 100644 index 0000000000..e2291f4795 --- /dev/null +++ b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-4.md @@ -0,0 +1,35 @@
+---
+title: Building the Chatbot Logic
+weight: 5
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Conversation Framework (Python prototype)
+```python
+from transformers import AutoTokenizer
+tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
+
+def generate_response(model, query, context):
+    prompt = f"### Context:\n{context}\n### User Query:\n{query}\n### Assistant Response:"
+    inputs = tokenizer(prompt, return_tensors="pt")
+    outputs = model.generate(**inputs, max_new_tokens=200)
+    return tokenizer.decode(outputs[0], skip_special_tokens=True)
+```
+
+###### Context Memory (Simple JSON Store)
+
+```python
+import json
+
+def update_memory(user_id, query, response):
+    # Read the existing history, append the new turn, and write it back.
+    with open("chat_memory.json", "r") as f:
+        memory = json.load(f)
+    memory.setdefault(user_id, []).append({"query": query, "response": response})
+    with open("chat_memory.json", "w") as f:
+        json.dump(memory, f)
+
+```
+
diff --git a/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-5.md b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-5.md new file mode 100644 index 0000000000..b155e4245b --- /dev/null +++ b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-5.md @@ -0,0 +1,44 @@
+---
+title: Adding Agentic AI Capabilities
+weight: 6
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+Enable the chatbot to perform reasoning, make decisions, and execute actions autonomously.
+
+## Define Agentic Loop
+```python
+class AgenticChatbot:
+    def __init__(self, model):
+        self.model = model
+
+    def observe(self, input):
+        return f"User said: {input}"
+
+    def think(self, observation):
+        # Keep the observation in the decision so act() can match intent keywords.
+        return f"Decide best next step based on intent: {observation}"
+
+    def act(self, decision):
+        if "refund" in decision:
+            return "Processing refund..."
+        elif "troubleshoot" in decision:
+            return "Let's check your device settings."
+        else:
+            return "Connecting you with an agent."
+
+    def respond(self, query):
+        obs = self.observe(query)
+        thought = self.think(obs)
+        action = self.act(thought)
+        return f"Reasoning: {thought}\nAction: {action}"
+```
+## Integrate Llama with Reasoning Loop
+```python
+# Assumes agent = AgenticChatbot(model) and generate_response() from the previous page.
+def generate_agentic_response(query, context):
+    reasoning = agent.respond(query)
+    model_response = generate_response(model, query, context)
+    return reasoning + "\n\n" + model_response
+```
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-6.md b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-6.md new file mode 100644 index 0000000000..dbd78eff20 --- /dev/null +++ b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-6.md @@ -0,0 +1,107 @@
+---
+title: Android Integration
+weight: 7
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+#### Integration
+Build the Llama runner binary for Android.
+Cross-compile the Llama runner to enable execution on Android by following the steps outlined below.
+
+#### Android NDK
+Configure the environment variable to reference the Android NDK:
+```
+export ANDROID_NDK=$ANDROID_HOME/ndk/28.0.12433566/
+```
+
+Ensure that $ANDROID_NDK/build/cmake/android.toolchain.cmake is accessible so CMake can perform cross-compilation.
+
+#### Use KleidiAI to build ExecuTorch and the required libraries for Android deployment
+Build ExecuTorch for Android, leveraging the performance optimizations offered by [KleidiAI](https://gitlab.arm.com/kleidi/kleidiai) kernels.
+
+Use CMake to cross-compile ExecuTorch:
+```
+cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
+    -DANDROID_ABI=arm64-v8a \
+    -DANDROID_PLATFORM=android-23 \
+    -DCMAKE_INSTALL_PREFIX=cmake-out-android \
+    -DEXECUTORCH_ENABLE_LOGGING=1 \
+    -DCMAKE_BUILD_TYPE=Release \
+    -DEXECUTORCH_BUILD_EXTENSION_DATA_LOADER=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_FLAT_TENSOR=ON \
+    -DEXECUTORCH_BUILD_XNNPACK=ON \
+    -DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
+    -DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \
+    -DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
+    -DEXECUTORCH_BUILD_KERNELS_LLM=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_LLM_RUNNER=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_LLM=ON \
+    -DEXECUTORCH_BUILD_EXTENSION_RUNNER_UTIL=ON \
+    -DEXECUTORCH_XNNPACK_ENABLE_KLEIDI=ON \
+    -DXNNPACK_ENABLE_ARM_BF16=OFF \
+    -DBUILD_TESTING=OFF \
+    -Bcmake-out-android .
+
+cmake --build cmake-out-android -j7 --target install --config Release
+```
+Beginning with ExecuTorch version 0.7 beta, KleidiAI is enabled by default. The option -DEXECUTORCH_XNNPACK_ENABLE_KLEIDI=ON is active, providing built-in support for KleidiAI kernels within ExecuTorch when using XNNPACK.
+
+#### Build the Llama runner for Android
+```
+cmake -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
+    -DANDROID_ABI=arm64-v8a \
+    -DANDROID_PLATFORM=android-23 \
+    -DCMAKE_INSTALL_PREFIX=cmake-out-android \
+    -DCMAKE_BUILD_TYPE=Release \
+    -DPYTHON_EXECUTABLE=python \
+    -DEXECUTORCH_BUILD_XNNPACK=ON \
+    -DEXECUTORCH_BUILD_KERNELS_OPTIMIZED=ON \
+    -DEXECUTORCH_BUILD_KERNELS_QUANTIZED=ON \
+    -DEXECUTORCH_BUILD_KERNELS_CUSTOM=ON \
+    -DSUPPORT_REGEX_LOOKAHEAD=ON \
+    -DBUILD_TESTING=OFF \
+    -Bcmake-out-android/examples/models/llama \
+    examples/models/llama
+
+cmake --build cmake-out-android/examples/models/llama -j16 --config Release
+
+```
+Execute on Android using adb shell.
+You will need an Arm-based Android smartphone with the i8mm feature and at least 16GB of RAM. The steps below were validated on a Google Pixel 8 Pro.
+#### Create a New Android Project
+Open Android Studio → New Project → Empty Activity
+
+#### Add the ExecuTorch Runtime to build.gradle
+```
+dependencies {
+    implementation files('libs/executorch.aar')
+}
+```
+
+#### Connect your Android phone
+Connect your Android device to your computer using a USB cable.
+
+Ensure that USB debugging is enabled on your device. You can follow the Configure on-device developer options guide to enable it.
+
+After enabling USB debugging and connecting the device via USB, run the following command:
+```
+adb devices
+```
+
+#### Push the model, tokenizer, and Llama runner to the device
+```
+adb shell mkdir -p /data/local/tmp/llama
+adb push llama3_1B_kv_sdpa_xnn_qe_4_64_1024_embedding_4bit.pte /data/local/tmp/llama/
+adb push $HOME/.llama/checkpoints/Llama3.2-1B-Instruct/tokenizer.model /data/local/tmp/llama/
+adb push cmake-out-android/examples/models/llama/llama_main /data/local/tmp/llama/
+
+```
+
+#### Run the model
+```
+adb shell "cd /data/local/tmp/llama && ./llama_main --model_path llama3_1B_kv_sdpa_xnn_qe_4_64_1024_embedding_4bit.pte --tokenizer_path tokenizer.model --prompt '<|start_header_id|>system<|end_header_id|>\nYour name is Cookie. You are helpful, polite, precise, concise, honest, good at writing. You always give precise and brief answers up to 32 words<|eot_id|><|start_header_id|>user<|end_header_id|>\nHey Cookie! How are you today?<|eot_id|><|start_header_id|>assistant<|end_header_id|>' --warmup=1 --cpu_threads=5"
+```
\ No newline at end of file
diff --git a/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-7.md b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-7.md new file mode 100644 index 0000000000..422f8c8d78 --- /dev/null +++ b/content/learning-paths/embedded-and-microcontrollers/customer-support-chatbot-with-llama-and-executorch-on-arm-based-mobile-devices/how-to-7.md @@ -0,0 +1,69 @@
+---
+title: Running, Testing, and Benchmarking
+weight: 8
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+#### Build the Android Library (AAR)
+You can use the Android demo application included in the ExecuTorch repository, [LlamaDemo](https://github.com/pytorch/executorch/tree/main/examples/android/LlamaDemo), to showcase local inference with ExecuTorch.
+
+Open a terminal and navigate to the root directory of the ExecuTorch repository. Then, set the following environment variables:
+
+```bash
+export ANDROID_NDK=$ANDROID_HOME/ndk/28.0.12433566/
+export ANDROID_ABI=arm64-v8a
+```
+Run the following commands to set up the required JNI library:
+```bash
+pushd extension/android
+./gradlew build
+popd
+pushd examples/demo-apps/android/LlamaDemo
+./gradlew :app:setup
+popd
+```
+Check if the files are available on the phone:
+```bash
+adb shell "ls -la /data/local/tmp/llama/"
+```
+If not, copy them:
+```
+adb shell mkdir -p /data/local/tmp/llama
+adb push <model.pte> /data/local/tmp/llama/
+adb push <tokenizer.model> /data/local/tmp/llama/
+```
+
+#### Build the Android Package Kit using Android Studio
+- Open Android Studio and choose Open an existing Android Studio project.
+- Navigate to examples/demo-apps/android/LlamaDemo and open it.
+- Run the app (^R) to build and launch it on your connected Android device.
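+
+Before using the in-app measurements below, you can also time the command-line runner end to end from your host machine. The following is a hypothetical helper script (the binary, model, and tokenizer paths assume the adb push steps from the previous section):
+
+```python
+import subprocess
+import time
+
+RUNNER_DIR = "/data/local/tmp/llama"  # assumed location from the earlier adb push steps
+CMD = (
+    f"cd {RUNNER_DIR} && ./llama_main "
+    "--model_path llama3_1B_kv_sdpa_xnn_qe_4_64_1024_embedding_4bit.pte "
+    "--tokenizer_path tokenizer.model "
+    "--prompt 'Hello' --warmup=1 --cpu_threads=5"
+)
+
+def run_once() -> float:
+    """Run one on-device generation via adb and return wall-clock seconds."""
+    start = time.perf_counter()
+    subprocess.run(["adb", "shell", CMD], check=True, capture_output=True)
+    return time.perf_counter() - start
+
+times = [run_once() for _ in range(3)]
+print("runs:", ", ".join(f"{t:.1f}s" for t in times), "| best:", f"{min(times):.1f}s")
+```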
+
+#### Measure Inference Latency and Memory Usage
+- Launch the app: `adb shell am start -n com.example.chatbot/.MainActivity`
+- Inspect the app's memory usage: `adb shell dumpsys meminfo com.example.chatbot`
diff --git a/content/learning-paths/embedded-and-microcontrollers/streamline-kernel-module/3_oot_module.md b/content/learning-paths/embedded-and-microcontrollers/streamline-kernel-module/3_oot_module.md index 05fbd33a58..9bfc522b7c 100644 --- a/content/learning-paths/embedded-and-microcontrollers/streamline-kernel-module/3_oot_module.md +++ b/content/learning-paths/embedded-and-microcontrollers/streamline-kernel-module/3_oot_module.md @@ -223,7 +223,8 @@ SSH onto your target device:
```
ssh root@<target-ip>
```
-Execute the following commads on the target to run the module:
+Execute the following commands on the target to run the module:
+
```bash
insmod /root/mychardrv.ko
mknod /dev/mychardrv c 42 0
diff --git a/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/1_installation.md b/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/1_installation.md index 47e65d53fb..0f15cfac7d 100644 --- a/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/1_installation.md +++ b/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/1_installation.md @@ -1,111 +1,108 @@
---
-title: Install and configure Workbench for Zephyr in VS Code
+title: Set up your environment
weight: 2

### FIXED, DO NOT MODIFY
layout: learningpathall
---

-## Set up your Zephyr development environment
+## The benefits of using the Workbench for Zephyr Visual Studio Code extension

-Setting up a [Zephyr](https://zephyrproject.org/) RTOS development environment from scratch can be challenging, requiring you to manually install SDKs, configure toolchains, and initialize workspace directories. These steps often vary across operating systems and board vendors, leading to a fragmented and error-prone setup process.
+Getting started with [Zephyr](https://zephyrproject.org/) RTOS development can be challenging. You often need to install SDKs, set up toolchains, and organize workspace directories by hand. The process is different for each operating system and board vendor, which can make setup confusing and lead to errors.

-[Workbench for Zephyr](https://zephyr-workbench.com/) is an open-source Visual Studio Code [extension](https://marketplace.visualstudio.com/items?itemName=Ac6.zephyr-workbench) that transforms Zephyr RTOS development into a streamlined IDE experience. Created by [Ac6](https://www.ac6.fr/en/), it automates toolchain setup, project management, and debugging, making Zephyr projects faster to start and easier to scale.
+[Workbench for Zephyr](https://zephyr-workbench.com/) is an open-source [Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=Ac6.zephyr-workbench) that transforms Zephyr RTOS development into a streamlined IDE experience. Created by [Ac6](https://www.ac6.fr/en/), it automates toolchain setup, project management, and debugging, making Zephyr projects faster to start and easier to scale.

-In this Learning Path, you'll learn the essential steps to install Workbench for Zephyr and configure a complete development environment on your local machine. Once complete, you'll be ready to create, build, and debug applications for Arm Cortex-M platforms using Zephyr RTOS.
+In this Learning Path, you'll set up Workbench for Zephyr and configure a complete development environment on your computer. By the end, you'll be able to create, build, and debug applications for Arm Cortex-M boards using Zephyr RTOS.
-Workbench for Zephyr provides one-click environment setup that automatically installs the required tools including Python, CMake, Ninja, and Git. It supports importing and managing Zephyr SDKs with version and architecture selection, while initializing west workspaces and creating board-specific applications from samples. The extension builds Zephyr applications and flashes hardware directly from the VS Code interface. It also provides breakpoint debugging and memory usage insights with hardware probe support. +Workbench for Zephyr makes it easy to set up your development environment with a single click. It automatically installs all the tools you need, such as Python, CMake, Ninja, and Git. You can import and manage different versions of the Zephyr SDK, choose the right architecture, and quickly initialize West workspaces. The extension lets you create board-specific applications from sample projects, build and flash them to your hardware, and debug your code, all within Visual Studio Code. You also get features like breakpoint debugging and memory usage insights when using a supported hardware probe. -## What you need before installing Workbench for Zephyr +## Install dependencies -To get started with Workbench for Zephyr you need to have Visual Studio Code downloaded, installed, and running on your computer. +To get started with Workbench for Zephyr, you need to have Visual Studio Code downloaded, installed, and running on your computer: -**Windows OS:** + +### Windows For Windows, you need version 10 or later (64-bit x64), along with administrator privileges for installing runners and drivers. -**MacOS:** -On MacOS, the Homebrew package manager is required. To install Homebrew, run the following command: +### macOS +On macOS, the Homebrew package manager is required. To install Homebrew, run the following command: ```bash /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" ``` -**Linux:** -- A recent 64-bit X64 distribution such as Ubuntu 20.04 or later, Fedora, Clear Linux OS, or Arch Linux -- Other distributions might work, but may require manual configuration of system packages -- After installation, use the Workbench host tools manager to verify that all required tools were installed correctly +### Linux +To use Workbench for Zephyr on Linux, install a recent 64-bit distribution such as Ubuntu 20.04 or later, Fedora, Clear Linux OS, or Arch Linux. Other distributions can work, but you might need to manually configure some system packages. After installing your operating system, use the Workbench host tools manager to check that all required tools are installed correctly. Zephyr Workbench supports STM32 development boards (STM32 Discovery, Nucleo series), Nordic Semiconductor boards (nRF52, nRF53, nRF91 series), NXP development boards (FRDM, LPCXpresso series), Espressif boards (ESP32-based boards), and many other Zephyr-supported platforms like Renesas, Silabs or Infineon. You need a development board to try out the code examples. -## Configure the Workbench for Zephyr extension in Visual Studio Code - -This section covers installing the Zephyr Workbench extension and configuring your Arm development environment. - -### Install the extension +## Install and configure the Zephyr Workbench extension -To install the Workbench for Zephyr extension, open Visual Studio Code and navigate to the Extensions view by selecting the Extensions icon in the Activity Bar. 
+This section covers installing the Workbench for Zephyr extension and configuring your Arm development environment. -You can also use the keyboard shortcut `Ctrl+Shift+X` (Windows/Linux) or `Cmd+Shift+X` (macOS). +To install the Workbench for Zephyr extension, open **Visual Studio Code**. In the **Activity Bar**, select the **Extensions** icon to open the **Extensions view**. -In the search box, type "Workbench for Zephyr" and locate the official "Workbench for Workbench" extension by Ac6. Select **Install** to add the extension to VS Code. +You can also use the keyboard shortcut **Ctrl+Shift+X** on Windows or Linux, or **Cmd+Shift+X** on macOS. -The extension icon appears in the Activity Bar, and a welcome message may appear confirming successful installation. +In the search box, enter `Workbench for Zephyr`. Locate the official extension by **Ac6** and select **Install**. -Once installed, the Workbench for Zephyr icon appears in the sidebar with a welcome screen. +After installation, the Workbench for Zephyr icon appears in the **Activity Bar**. A welcome screen confirms that the extension is ready to use. -### Install the required host tools +## Install the required host tools In the Workbench for Zephyr panel, select **Install Host Tools** to automatically install the required dependencies. This process installs Python 3.x, CMake, the Ninja build system, Git, Device Tree Compiler (DTC), and the West meta-tool. -![Install Host Tools #center](images/install_host_tools.png) +![Workbench for Zephyr extension panel in Visual Studio Code showing the Install Host Tools button highlighted. The panel lists required tools such as Python, CMake, Ninja, Git, and Device Tree Compiler. The environment is a modern code editor interface with a sidebar and clear labels. The tone is instructional and welcoming. Visible text includes Install Host Tools and a checklist of dependencies to be installed. alt-text#center](images/install_host_tools.png "Workbench for Zephyr extension panel") {{% notice Note %}} -On Windows, you may be prompted for permission when tools are executed. Select "Allow" when requested. -{{% /notice %}} +On Windows, you might see permission prompts when Workbench for Zephyr installs or runs tools. Select **Allow** to continue with the setup. {{% /notice %}} + +When the installation completes, select **Verify Host Tools** to confirm that each required package is installed and up to date. The panel displays the version and status for Python, CMake, Ninja, Git, and Device Tree Compiler. If any tool is missing or out of date, follow the prompts to resolve the issue before continuing. +## Import and configure the Zephyr toolchain + +To build and debug Zephyr applications for Arm Cortex-M boards, you need to import and configure the Zephyr toolchain using Workbench for Zephyr. -When the installation completes, select **Verify Host Tools** to check the version of each installed package. +In the Workbench for Zephyr panel, select **Import Toolchain**. This opens a guided setup panel. -### Import and configure the toolchain +In the **Import Toolchain** panel, configure the following options to set up your Zephyr toolchain for Arm development: -Next, download and configure the toolchain by selecting **Import Toolchain** in the Workbench for Zephyr panel. Select the toolchain family (*Zephyr SDK*) and configure the SDK Type by choosing *Minimal* for basic functionality. +- **Toolchain Family**: select *Zephyr SDK* to use the official Zephyr toolchain.
+- **SDK Type**: select *Minimal* to install only the essential components needed for development. +- **Version**: select the Zephyr SDK release you want to use, such as v0.17.0 or v0.17.3. +- **Target Architectures**: select *arm* to target Arm-based boards. -Select your desired version (such as v0.17.4... your version may vary a little) and choose the target architectures. For this Learning Path, you only need to select *arm*. +These settings ensure your environment is optimized for Arm Cortex-M development. After configuring these options, continue with the import process to download and install the selected SDK. -Specify the parent directory for SDK installation and select **Import** to download and install the SDK. +Next, specify the directory where you want to install the SDK. Select **Import** to start the download and installation process. When the import completes, the panel displays a confirmation that the toolchain is ready. -![Import Toolchain #center](images/import_toolchain.png) +If you see errors during import, check your internet connection and confirm you have at least 2 GB of free disk space. For more troubleshooting tips, review the extension's documentation or check the Visual Studio Code output panel. -### Initialize the Zephyr project workspace +![Workbench for Zephyr Import Toolchain panel in Visual Studio Code. The panel displays options for selecting the toolchain family, SDK type, version, and target architectures. Visible text includes Import Toolchain, Zephyr SDK, Minimal, v0.17.0, v0.17.3, and arm. The interface is organized and user-friendly, with clearly labeled dropdown menus and buttons. The overall tone is instructional and welcoming, set within a modern code editor workspace. alt-text#center](images/import_toolchain.png "Workbench for Zephyr Import Toolchain panel") + + +## Initialize the Zephyr project workspace Zephyr uses a Git-based workspace manager called West to organize its source code, modules, and samples. Use Workbench for Zephyr to initialize your first West workspace. -In the Workbench for Zephyr panel, select **Initialize Workspace** to set up your project environment. Configure the workspace settings by selecting "Minimal from template" for the source location and using the default path `https://github.com/zephyrproject-rtos/zephyr`. +In the Workbench for Zephyr panel, select **Initialize Workspace** to set up your project environment. Configure the workspace settings by selecting **Minimal from template** for the source location and using the default path `https://github.com/zephyrproject-rtos/zephyr`. Choose a target-specific template (such as STM32 or NXP) and select your Zephyr version (such as v4.3.0... your version may vary a bit). Specify the directory for your workspace, keeping in mind that initialization takes approximately 10 minutes to complete. Select **Import** to create and update the workspace. -![Initialize West Workspace #center](images/initialize_workspace.png) +![Workbench for Zephyr Initialize Workspace panel in Visual Studio Code. The panel displays options for setting up a new West workspace, including fields for source location, template selection, Zephyr version, and workspace directory. Visible text includes Initialize Workspace, Minimal from template, https://github.com/zephyrproject-rtos/zephyr, STM32, NXP, v3.7.0, v4.1.0, and Import. The interface is organized and user-friendly, with dropdown menus and buttons clearly labeled.
The overall tone is instructional and welcoming, set within a modern code editor workspace. alt-text#center](images/initialize_workspace.png "Workbench for Zephyr Initialize Workspace panel in Visual Studio Code.") {{% notice Note %}} -The workspace initialization downloads the Zephyr source code and dependencies. This process may take several minutes depending on your internet connection speed. - -Additionally, the selected revision you select may be a bit different from the one shown above. +The workspace initialization downloads the Zephyr source code and dependencies. This process can take several minutes depending on your internet connection speed. {{% /notice %}} -### Verify setup +## Verify setup Test your setup by confirming that the Workbench for Zephyr panel shows all components as installed successfully. Verify the host tools are installed, the SDK is imported and detected, and the West workspace is initialized. Ensure no error messages appear in the VS Code output panel. +{{% notice Troubleshooting Tips %}} If you have trouble installing host tools on Windows, try running Visual Studio Code as an administrator. Make sure your firewall allows internet access so dependencies can download. Before importing the SDK, confirm you have at least 2 GB of free disk space.{{% /notice %}} -{{% notice Note %}} -**Troubleshooting tips:** -- Run VS Code as Administrator if host tool installation fails on Windows -- Ensure internet access is allowed through your firewall -- Check for minimum 2 GB free disk space before importing SDK -{{% /notice %}} -You're ready to create and build your first Zephyr application targeting an Arm Cortex-M board. +You're now ready to create and build your first Zephyr application targeting an Arm Cortex-M board. diff --git a/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/2_development.md b/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/2_development.md index ca17c005c1..1d4b4b494b 100644 --- a/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/2_development.md +++ b/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/2_development.md @@ -1,42 +1,45 @@ --- -title: Build Zephyr applications in VS Code +title: Build a Zephyr application with Workbench for Zephyr weight: 3 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Create and build your first Zephyr application +## Set up your Zephyr development board and environment -In this session, you'll learn how to create and build your first Zephyr application using Workbench for Zephyr. This step prepares you to customize, test, and expand real firmware projects on Arm Cortex-M boards. +In this section, you'll learn how to create and build your first Zephyr application using Workbench for Zephyr. This step prepares you to customize, test, and expand real firmware projects on Arm Cortex-M boards. +For this demonstration, you'll use an [NXP FRDM-MCXN947](https://www.nxp.com/design/design-center/development-boards-and-designs/FRDM-MCXN947) development board as your target device. The same process works for any Zephyr-supported Arm Cortex-M board. -For demonstration, you'll use an [NXP FRDM-MCXN947](https://www.nxp.com/design/design-center/development-boards-and-designs/FRDM-MCXN947) development board as the target device. However, the same steps apply to any Zephyr-supported Arm Cortex-M board. -You can find the full list of supported boards in the [Supported Boards](https://docs.zephyrproject.org/latest/boards/#).
+To see all compatible boards, visit the [Zephyr Supported Boards list](https://docs.zephyrproject.org/latest/boards/#). -Depending on your board, you might need to install a different debug tool aka `runner`. The next module covers this setup. +Depending on your board, you might need to install a different debug tool, also known as a `runner`. The next section covers this setup. -### Create application +## Create the application In the Zephyr Workbench panel: -1. Select **Create New Application** -2. Configure your project: - - Select workspace and SDK - - Choose your target board (for example, NXP FRDM-MCXN947) - - Select a sample app (for example, `hello_world`) - - Provide a project name +Select **Create New Application**. -![Create App](images/create_app.png) +Configure your project: + - Select the workspace and SDK version. + - Choose your target board (for example, NXP FRDM-MCXN947). + - Select a sample application (for example, `hello_world`). + - Enter a project name. -### Build the application +After you complete these steps, Workbench for Zephyr creates the project and prepares it for building. + +![Zephyr Workbench Create New Application panel in VS Code showing workspace selection, SDK version, target board dropdown, sample application selection, and project name fields. The interface is clean and organized with clear labels and buttons. The wider VS Code environment is visible in the background with a neutral and professional tone. All text in the panel is legible and guides the user through creating a new Zephyr application. alt-text#center](images/create_app.png "Zephyr Workbench Create New Application panel") + +## Build the application Select the **Build** button in Workbench for Zephyr or press `Ctrl+Shift+B`. The build system compiles your application and links it against the Zephyr kernel and board-specific drivers. -![Build Application](images/build_application.png) +![VS Code Zephyr Workbench build application panel showing workspace selection, SDK version, target board dropdown, sample application selection, and project name fields. The primary subject is the Zephyr Workbench interface guiding users through building a Zephyr application. Visible text includes labels such as Workspace, SDK, Target Board, Sample Application, and Project Name, with buttons for Create New Application and Build. The wider VS Code environment is visible in the background, presenting a clean and organized workspace with a neutral, professional tone. alt-text#center](images/build_application.png "Build the application") -### Install board-specific debug utilities +## Install board-specific debug utilities To enable debugging on your target hardware, you might need to install additional tools based on the board vendor. @@ -46,12 +49,10 @@ For the NXP FRDM-MCXN947, download and install the LinkServer debug utility: Once installed, Workbench for Zephyr attempts to detect it automatically during a debug session. If you're using a different board, see your vendor's documentation to install the appropriate debug utility. -{{% notice Note %}} -If Workbench for Zephyr doesn't automatically detect the installed debug runner, you can manually configure it. -Open the **Debug Manager** from the Zephyr sidebar, and enter the full path to the runner executable. -{{% /notice %}} +{{% notice Note %}} If Workbench for Zephyr doesn't automatically detect the installed debug runner, you can manually configure it.
+Open the **Debug Manager** from the Zephyr sidebar, and enter the full path to the runner executable.{{% /notice %}} -### Review output +## Review the output Check the build output at the bottom panel of VS Code. Make sure there are no errors or warnings. A successful build displays: @@ -62,7 +63,7 @@ Memory region Used Size Region Size % Used SRAM: 4048 B 256 KB 1.5% ``` -### Code walkthrough: hello_world +## Code walkthrough: hello_world The following code shows a basic Zephyr application that prints a message to the console: @@ -79,8 +80,8 @@ int main(void) `CONFIG_BOARD` expands to your target board name. You'll modify this app in the next module! -### Try this: modify and rebuild +## Try this: modify and rebuild Now that the app works, try editing the message in `printk()` or changing the board target in the application settings. Then rebuild and observe the output. This helps verify that your toolchain and workspace respond correctly to code and config changes. -With your first Zephyr application successfully built, you're ready to take the next step—debugging. In the next module, you'll launch a debug session, set breakpoints, and perform memory analysis using Workbench for Zephyr. These skills help you validate and optimize applications running on real Arm Cortex-M hardware. +With your first Zephyr application successfully built, you're ready to take the next step, which is debugging. In the next module, you'll launch a debug session, set breakpoints, and perform memory analysis using Workbench for Zephyr. These skills help you validate and optimize applications running on real Arm Cortex-M hardware. diff --git a/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/3_debug.md b/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/3_debug.md index cb5ac8f61f..bf7ffc97eb 100644 --- a/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/3_debug.md +++ b/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/3_debug.md @@ -1,35 +1,37 @@ --- -title: Analyze and debug Zephyr applications in VS Code +title: Analyze and debug a Zephyr application weight: 4 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Analyze and debug Zephyr applications in VS Code +## Overview -In this module, you'll learn how to inspect memory usage and perform live debugging on your Zephyr applications using Workbench for Zephyr. These capabilities are essential for diagnosing bugs and optimizing embedded firmware performance on Arm Cortex-M platforms. +In this section, you'll learn how to inspect memory usage and perform live debugging on your Zephyr applications using Workbench for Zephyr. These capabilities are essential for diagnosing bugs and optimizing embedded firmware performance on Arm Cortex-M platforms. ## Analyze memory usage Understanding how your application uses memory is crucial for optimizing embedded firmware on resource-constrained Arm Cortex-M systems. Workbench for Zephyr provides built-in tools to generate detailed memory usage reports after a successful build, helping you identify ROM and RAM consumption hotspots early in development. -### Generate memory reports +## Generate memory reports -After building your Zephyr application, analyze how memory is allocated and used. Workbench for Zephyr offers built-in memory reporting tools that help you visualize RAM and ROM usage, identify inefficient memory patterns, and guide optimization efforts. 
These insights are especially useful when working with constrained Arm Cortex-M platforms. +After building your Zephyr application, you can analyze how memory is allocated and used. Workbench for Zephyr offers built-in memory reporting tools that help you visualize RAM and ROM usage, identify inefficient memory patterns, and guide optimization efforts. These insights are especially useful when working with constrained Arm Cortex-M platforms. -To generate memory reports, open the **Zephyr Workbench** panel and select **Memory Analysis** after a successful build. The tool generates detailed reports showing RAM usage (stack, heap, static variables), ROM usage (code size, constants), and **Puncover** analysis for binary analysis including function size, call graphs, and timing on Arm Cortex-M processors. +To generate memory reports, open the **Workbench for Zephyr** panel and select **Memory Analysis** after building your application. This tool provides detailed insights into RAM usage (including stack, heap, and static variables), ROM usage (such as code size and constants), and integrates **Puncover** for advanced binary analysis. With Puncover, you can visualize function sizes, call graphs, and timing information specific to Arm Cortex-M processors. -The following steps show how to generate and review memory reports: +Follow these steps to generate and review memory reports: -- Open the **Workbench for Zephyr** panel -- Select **Memory Analysis** after a successful build -- Review detailed memory reports: - - **RAM usage**: stack, heap, static variables - - **ROM usage**: code size, constants - - **Puncover**: binary analysis for function size, call graphs, and timing on Arm Cortex-M +- Open the **Workbench for Zephyr** panel. +- Select **Memory Analysis** after a successful build. +- Review the generated reports: + - **RAM usage**: View stack, heap, and static variable allocation. + - **ROM usage**: Examine code size and constant data. + - **Puncover analysis**: Explore function sizes, call graphs, and timing metrics for your Arm Cortex-M application. -![Memory Analysis](images/memory_analysis.png) +These insights help you identify memory bottlenecks and optimize your embedded firmware for Arm platforms. + +![Workbench for Zephyr Memory Analysis panel displaying a detailed memory usage report. The main subject is a table listing memory sections, allocation sizes, percentages, addresses, and section names for an embedded Zephyr application targeting Arm Cortex-M. The environment is a technical workspace within Visual Studio Code, with the Zephyr Workbench sidebar open. Visible text includes Memory Analysis, RAM usage, ROM usage, and Puncover analysis. The tone is neutral and focused on software development and debugging tasks. alt-text#center](images/memory_analysis.png "Workbench for Zephyr Memory Analysis panel") The RAM Report displays detailed memory allocation information and should look like this: @@ -159,7 +161,7 @@ Root ``` -## Install and configure debug Runners +## Install and configure debug runners Depending on your board, different debug utilities may be required. Workbench for Zephyr integrates and discovers several common runners: @@ -170,13 +172,13 @@ Depending on your board, different debug utilities may be required. Workbench fo Workbench for Zephyr will automatically detect these tools when they are installed in their default locations and available on your system `PATH`. 
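+To see which of these runners are already discoverable, you can check your `PATH` directly. This is a minimal sketch; the executable names shown are common defaults and vary by platform and tool version:
+```bash
+# Report which debug runner executables are visible on PATH
+for tool in openocd pyocd JLinkGDBServerCLExe LinkServer; do
+  command -v "$tool" >/dev/null 2>&1 && echo "$tool: found" || echo "$tool: not found"
+done
+```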
If a tool is installed in a custom location, you can either update your `PATH` or configure your environment so that Workbench for Zephyr can find it. -### Install Runners Utilities +## Install runner utilities To install debug tools for your specific board, go to **Host Tools > Install Debug Tools** in the Zephyr Workbench panel and select the tools applicable to your board. You may need to press the refresh symbol to get the latest installation state for the selected/installed runners: -![Debug Runners](images/install_runners.png) +![Install Debug Runners panel in Workbench for Zephyr showing a list of available debug runner tools with checkboxes for selection. The main subject is the installation interface, which displays options such as OpenOCD, J-Link, LinkServer, and STM32CubeProgrammer, each with status indicators and install buttons. The environment is a Visual Studio Code window with the Zephyr Workbench sidebar open. Visible text includes Install Debug Tools, OpenOCD, J-Link, LinkServer, STM32CubeProgrammer, and status labels for each tool. The tone is neutral and focused on technical setup tasks. alt-text#center](images/install_runners.png "Install Debug Runners panel in Workbench for Zephyr showing a list of available debug runner tools with checkboxes for selection") -## Configure debug settings +### Configure debug settings Before starting a debug session, make sure your settings match your application and board configuration. @@ -189,27 +191,21 @@ The ELF executable path is auto-filled after build. You can optionally add a **C ### Debug server Choose the runner from OpenOCD, J-Link, LinkServer, or PyOCD. If the system doesn't detect your runner automatically, enter the runner path manually. Select **Apply** to save your settings or launch debug directly. -![Debug Manager](images/debug_manager.png) +![Debug Manager panel in Workbench for Zephyr showing a list of connected debug runners and configuration options. The interface displays fields for selecting the runner executable path, board profile, and status indicators for each tool. The environment is a VS Code window with the Zephyr Workbench sidebar open. The tone is technical and neutral. Visible text includes Debug Manager, Runner Path, Board Profile, and status labels for detected runners. alt-text#center](images/debug_manager.png "Debug Manager panel in Workbench for Zephyr") -### Manual debug runner configuration +## Configure the debug runner manually If Workbench for Zephyr doesn't automatically detect the installed debug runner, open the **Debug Manager** from the sidebar and locate your board profile to enter the path to the runner executable manually. -{{% notice Note %}} -Manual configuration might be required on first-time setups or if using custom runner versions. -{{% /notice %}} +{{% notice Note %}}Manual configuration might be required on first-time setups or if using custom runner versions.{{% /notice %}} ## Launch and use the debugger You can start debugging from Workbench for Zephyr by selecting **Debug**, or from VS Code by going to **Run and Debug** (`Ctrl+Shift+D`), selecting the debug config, and selecting **Run**. -![Debug Application](images/debug_app.png) - -{{% notice Note %}} -Depending on whether you are running on Windows or a Mac, the selection of the serial monitor port may be different from what is shown above. The above picture shows a serial port from the development board being connected to a Mac. -{{% /notice %}} +![VS Code window displaying the Zephyr Workbench Debug Application panel.
The main subject is the debug interface showing active debugging controls, breakpoints, and variable watch windows. The environment is a technical workspace with the Zephyr Workbench sidebar open in Visual Studio Code. Visible text includes Debug Application, Breakpoints, Variables, and Call Stack. The tone is neutral and focused on software development tasks. alt-text #center](images/debug_app.png "VS Code window displaying the Zephyr Workbench Debug Application panel.") -### Debug toolbar controls +## Debug toolbar controls The debug toolbar provides the following controls for stepping through your code: @@ -220,10 +216,12 @@ The debug toolbar provides the following controls for stepping through your code - **Restart (Ctrl+Shift+F5)** - **Stop (Shift+F5)** -### Debug features +## Debug features The debugger provides comprehensive inspection capabilities including breakpoints and variable watches, **Register view** for Arm CPU states, **Call stack navigation**, and **Memory view** of address space. If using `pyocd`, target support might take a few seconds to initialize. +## What you accomplished + In this Learning Path, you explored how to analyze memory usage and debug Zephyr applications using Workbench for Zephyr. You learned to generate memory reports, install and configure debug tools, and launch interactive debug sessions. These steps help you troubleshoot and optimize embedded applications for Arm Cortex-M boards. diff --git a/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/_index.md b/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/_index.md index 57f42d741b..4bf8375523 100644 --- a/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/_index.md +++ b/content/learning-paths/embedded-and-microcontrollers/zephyr_vsworkbench/_index.md @@ -1,10 +1,6 @@ --- title: Build Zephyr projects with Workbench for Zephyr in VS Code -draft: true -cascade: - draft: true - minutes_to_complete: 30 who_is_this_for: This is an introductory topic for embedded developers targeting Arm-based platforms with the Zephyr RTOS using the Workbench for Zephyr extension for VS Code. 
@@ -18,7 +14,7 @@ learning_objectives: prerequisites: - Basic familiarity with embedded C programming - - Visual Studio Code installed and running + - Visual Studio Code - A Cortex-M development board - Windows 10+ (64-bit), macOS with Homebrew, or Linux (preferably Ubuntu 20.04+) diff --git a/content/learning-paths/laptops-and-desktops/_index.md b/content/learning-paths/laptops-and-desktops/_index.md index b3a0e010bc..ada046b622 100644 --- a/content/learning-paths/laptops-and-desktops/_index.md +++ b/content/learning-paths/laptops-and-desktops/_index.md @@ -9,11 +9,11 @@ maintopic: true operatingsystems_filter: - Android: 2 - ChromeOS: 2 -- Linux: 36 +- Linux: 37 - macOS: 10 - Windows: 46 subjects_filter: -- CI-CD: 5 +- CI-CD: 6 - Containers and Virtualization: 7 - Migration to Arm: 30 - ML: 4 @@ -42,7 +42,7 @@ tools_software_languages_filter: - GCC: 12 - Git: 1 - GitHub: 3 -- GitLab: 1 +- GitLab: 2 - Google Test: 1 - HTML: 2 - Hugging Face: 1 diff --git a/content/learning-paths/laptops-and-desktops/windowsperf_sampling_cpython/windowsperf_sampling_cpython_example_2.md b/content/learning-paths/laptops-and-desktops/windowsperf_sampling_cpython/windowsperf_sampling_cpython_example_2.md index 94d70c7aec..02e45df99e 100644 --- a/content/learning-paths/laptops-and-desktops/windowsperf_sampling_cpython/windowsperf_sampling_cpython_example_2.md +++ b/content/learning-paths/laptops-and-desktops/windowsperf_sampling_cpython/windowsperf_sampling_cpython_example_2.md @@ -4,7 +4,7 @@ title: WindowsPerf record example weight: 4 --- -## Example 2: Using the `record` command to simplify things +## Example 2: Simplify the steps using the record command The `record` command spawns the process and pins it to the core specified by the `-c` option. You can either use `--pe_file` to let WindowsPerf know which process to spawn or simply add the process to spawn at the very end of the `wperf` command. diff --git a/content/learning-paths/mobile-graphics-and-gaming/android_opencv_kleidicv/process-images.md b/content/learning-paths/mobile-graphics-and-gaming/android_opencv_kleidicv/process-images.md index 49b32dc032..1c638b5e18 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/android_opencv_kleidicv/process-images.md +++ b/content/learning-paths/mobile-graphics-and-gaming/android_opencv_kleidicv/process-images.md @@ -18,7 +18,8 @@ This Learning Path uses a [cameraman image](https://github.com/antimatter15/came For easier navigation between files in Android Studio, use the **Project** menu option from the project browser pane. -## `ImageOperation` +## Create the ImageOperation class + You will now create an enum class, which is an enumeration, for a set of image processing operations in an application that uses the OpenCV library. In the `src/main/java/com/arm/arm64kleidicvdemo` file directory, add the `ImageOperation.kt` file, and modify it as follows: @@ -89,7 +90,8 @@ Generally, only single-channel images are supported; with Gaussian blur being an There is also the companion object that provides a utility method `fromDisplayName`. This function maps the string `displayName` to its corresponding enum constant by iterating through the list of all enum values, and returns null if no match is found. 
-## `ImageProcessor` +## Create the ImageProcessor class + Now add the `ImageProcessor.kt`: ```Kotlin @@ -110,7 +112,7 @@ The `ImageProcessor` class acts as a simple orchestrator for image processing ta This design is clean and modular, allowing developers to easily add new processing operations or reuse the `ImageProcessor` in different parts of an application. It aligns with object-oriented principles by promoting encapsulation and reducing processing logic complexity. -## `PerformanceMetrics` +## Create the PerformanceMetrics class Now supplement the project with the `PerformanceMetrics.kt` file: ```Kotlin @@ -150,7 +152,8 @@ The `PerformanceMetrics` class analyzes and summarizes performance measurements, By encapsulating the raw data `durationsNano` and exposing only meaningful metrics through computed properties, the class ensures clear separation of data and functionality. The overridden `toString` method makes it easy to generate a human-readable summary for reporting or debugging purposes. You can use this method to report the performance metrics to the user. -## `MainActivity` +## Create the MainActivity + You can now move on to modify `MainActivity.kt` as follows: ```Kotlin @@ -332,7 +335,8 @@ The activity also implements several helper methods: 7. `measureOperationTime` - Measures the execution time of an operation in nanoseconds using System.nanoTime(). 8. `displayProcessedImage`. This method converts the processed Mat back to a Bitmap for display and updates the ImageView with the processed image. -## `Databinding` +## Add Databinding + Finally, modify `build.gradle.kts` by adding the databinding under build features: ```JSON diff --git a/content/learning-paths/mobile-graphics-and-gaming/using-neon-intrinsics-to-optimize-unity-on-android/10-appendix.md b/content/learning-paths/mobile-graphics-and-gaming/using-neon-intrinsics-to-optimize-unity-on-android/10-appendix.md index e621683354..399084bf49 100644 --- a/content/learning-paths/mobile-graphics-and-gaming/using-neon-intrinsics-to-optimize-unity-on-android/10-appendix.md +++ b/content/learning-paths/mobile-graphics-and-gaming/using-neon-intrinsics-to-optimize-unity-on-android/10-appendix.md @@ -5,60 +5,60 @@ weight: 11 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## The Neon intrinsics we used +## The Neon intrinsics used Here is a breakdown of some of the Neon intrinsics that were used to optimize the AABB collision detection in the function called `NeonAABBObjCollisionDetectionUnrolled`. It can be found at line **718** in _CollisionCalculationScript.cs_. `NeonAABBObjCollisionDetectionUnrolled` performs the collision detection between the characters and the walls. The outer loop iterates through all of the characters, while the inner loop iterates through the walls. The result is an array of boolean values (**true** denotes a collision has occurred) which tells us which characters have collided with which walls. -### `Unity.Burst.Intrinsics.v64` (loading data into a vector register) +`Unity.Burst.Intrinsics.v64` (loading data into a vector register) ``` Line 721: var tblindex1 = new Unity.Burst.Intrinsics.v64((byte)0, 4, 8, 12, 255, 255, 255, 255) ``` Create a 64-bit vector with 8 8-bit elements with values (0, 4, 8, 12, 255, 255, 255, 255). This is used as a lookup table on _line 741_. -### `vdupq_n_f32` +`vdupq_n_f32` ``` Line 728: charMaxXs = vdupq_n_f32(*(characters + c)) ``` Duplicate floating point values into all 4 lanes of the 128-bit returned vector. 
The returned vector will contain 4 copies of a single _Max X_ value (of character bounds). -### `vld1q_f32` +`vld1q_f32` ``` Line 736: wallMinXs = vld1q_f32(walls + w) ``` Load multiple floating point values from memory into a single vector register. The returned vector will contain _Min X_ values from 4 different walls. -### `vcgeq_f32` +`vcgeq_f32` ``` Line 741: vcgeq_f32(wallMinXs, charMaxXs) ``` Floating point comparisons (greater-than or equal). It compares 4 walls at once with a character's _Max X_. Each of the four results will either be all ones (true) or all zeros (false). -### `vorrq_u32` +`vorrq_u32` ``` Line 741: vorrq_u32(vcgeq_f32(wallMinXs, charMaxXs), vcgeq_f32(wallMinYs, charMaxYs)) ``` Bitwise inclusive OR. The nested calls to `vcgeq\_f32` are comparing the walls (Min X and Min Y) against the characters' Max X and Max Y. The four comparison results are combined with a bitwise OR. -### `vqtbl1_u8` +`vqtbl1_u8` ``` Line 741: results = vqtbl1_u8(_result of ORs_, tblindex1) ``` Table lookup function that selects elements from an array based on the indices provided. The result of the OR operations will be treated as an array of 8-bit values. The values from _tblindex1_ (0, 4, 8 and 12) ensure that we select the most significant bytes from each u32 OR result. So 4 character-wall comparisons are being merged into one 128-bit vector along with 4 dummies (because of the _out of range_ values in tblindex1) that will be replaced later with the 64-bit value of the next 4 wall comparisons (from _wmvn_u8_). -### `vqtbx1_u8` +`vqtbx1_u8` ``` Line 751: vqtbx1_u8(results, …) ``` Table lookup function except when an index is out of range as it leaves the existing data alone. This has the effect of selecting 4 bytes (using indices from _tblindex2_) from the results of the last 4 comparisons and combines them with the previous 4 results. _results_ will now contain the results of 8 wall-character comparisons. -### `vmvn_u8` +`vmvn_u8` ``` Line 751: results = vmvn_u8(...) ``` Bitwise NOT operation. This negates each of the 8 character-wall comparisons. It is effectively the _!_ (NOT) in our [AABB intersection function](/learning-paths/mobile-graphics-and-gaming/using-neon-intrinsics-to-optimize-unity-on-android/5-the-optimizations#the-aabb-intersection-function) except that it is working on 8 results instead of 1. 
-### `Unity.Burst.Intrinsics.v64` (storing to memory) +`Unity.Burst.Intrinsics.v64` (storing to memory) ``` Line 755: *(Unity.Burst.Intrinsics.v64*)(collisions + (c * numWalls + w - 4)) = results; ``` diff --git a/content/learning-paths/servers-and-cloud-computing/_index.md b/content/learning-paths/servers-and-cloud-computing/_index.md index 28a0501e3a..b3566a1353 100644 --- a/content/learning-paths/servers-and-cloud-computing/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/_index.md @@ -8,7 +8,7 @@ key_ip: maintopic: true operatingsystems_filter: - Android: 3 -- Linux: 197 +- Linux: 200 - macOS: 13 - Windows: 14 pinned_modules: @@ -18,9 +18,9 @@ pinned_modules: - providers - migration subjects_filter: -- CI-CD: 10 -- Containers and Virtualization: 34 -- Databases: 21 +- CI-CD: 11 +- Containers and Virtualization: 35 +- Databases: 22 - Libraries: 9 - ML: 34 - Performance and Architecture: 74 @@ -94,7 +94,7 @@ tools_software_languages_filter: - Daytona: 1 - Demo: 3 - Django: 2 -- Docker: 25 +- Docker: 26 - Docker Buildx: 1 - Envoy: 3 - ExecuTorch: 1 @@ -105,6 +105,7 @@ tools_software_languages_filter: - Fortran: 1 - FunASR: 1 - FVP: 7 +- Gardener: 1 - GCC: 25 - gdb: 1 - Geekbench: 1 @@ -113,7 +114,7 @@ tools_software_languages_filter: - GitHub: 6 - GitHub Actions: 1 - GitHub CLI: 1 -- GitLab: 1 +- GitLab: 2 - GKE: 1 - glibc: 1 - Go: 4 @@ -124,7 +125,7 @@ tools_software_languages_filter: - Google Test: 1 - Gunicorn: 1 - HammerDB: 1 -- Helm: 1 +- Helm: 2 - Herd7: 1 - Hiera: 1 - Hugging Face: 12 @@ -139,7 +140,9 @@ tools_software_languages_filter: - KEDA: 1 - Kedify: 1 - Keras: 2 -- Kubernetes: 13 +- KinD: 1 +- kube-bench: 1 +- Kubernetes: 14 - Libamath: 1 - libbpf: 1 - Linaro Forge: 1 @@ -188,7 +191,8 @@ tools_software_languages_filter: - QEMU: 1 - RAG: 1 - Rails: 1 -- Redis: 3 +- Redis: 4 +- redis-benchmark: 1 - Remote.It: 2 - RME: 8 - Ruby: 2 @@ -233,8 +237,8 @@ tools_software_languages_filter: - ZooKeeper: 1 weight: 1 cloud_service_providers_filter: -- AWS: 18 -- Google Cloud: 31 +- AWS: 19 +- Google Cloud: 34 - Microsoft Azure: 19 - Oracle: 2 --- diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/_index.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/_index.md index cdec577f5b..1b1dc2aceb 100644 --- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/_index.md @@ -1,17 +1,13 @@ --- -title: CircleCI Arm Native Workflows on AWS Graviton2 (EC2) +title: Deploy CircleCI Arm Native Workflows on AWS EC2 Graviton2 minutes_to_complete: 45 -draft: true -cascade: - draft: true - -who_is_this_for: This learning path is intended for software developers and DevOps engineers looking to set up and run CircleCI Arm native workflows on Linux Arm64 VMs, specifically on AWS EC2 Graviton2 instances (Neoverse N1), using self-hosted runners. +who_is_this_for: This is an introductory topic for developers and DevOps engineers who want to set up and run CircleCI Arm native workflows on Linux Arm64 virtual machines. You'll use AWS EC2 Graviton2 instances (Neoverse N1) and self-hosted runners. 
learning_objectives: - Provision an AWS EC2 Graviton2 Arm64 virtual machine - - Install and configure CircleCI self-hosted machine runners on Arm64 + - Install and configure a CircleCI self-hosted machine runner on Arm64 - Verify the runner by running a simple workflow and test computation - Define and execute CircleCI job using a machine executor - Check CPU architecture and execute a basic script to confirm if the runner is operational @@ -19,6 +15,7 @@ prerequisites: - An [AWS account](https://aws.amazon.com/free/) with billing enabled + - A [CircleCI account](https://circleci.com/) - Basic familiarity with Linux command line - Basic understanding of CircleCI concepts such as [workflows](https://circleci.com/docs/guides/orchestrate/workflows/), diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/background.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/background.md index e4836e896c..a731b786f6 100644 --- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/background.md +++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/background.md @@ -1,27 +1,28 @@ --- -title: Getting Started with CircleCI on AWS Graviton2 (Arm Neoverse-N1) +title: Get Started with CircleCI on AWS Graviton2 weight: 2 layout: "learningpathall" --- +## AWS Graviton2 Arm instances on Amazon EC2 -## AWS Graviton2 Arm Instances on Amazon EC2 - -**AWS Graviton2** is a family of Arm-based processors designed by AWS and built on **Arm Neoverse-N1 cores**. These instances deliver exceptional price-to-performance efficiency, making them ideal for compute-intensive workloads such as CI/CD pipelines, microservices, containerized applications, and data processing tasks. +AWS Graviton2 is a family of Arm-based processors designed by AWS and built on Arm Neoverse-N1 cores. These instances deliver exceptional price-to-performance efficiency, making them ideal for compute-intensive workloads such as CI/CD pipelines, microservices, containerized applications, and data processing tasks. Graviton2-powered EC2 instances provide high performance and energy efficiency compared to traditional x86-based instances while maintaining compatibility with popular Linux distributions and open-source software stacks. -To learn more about AWS Graviton processors, refer to the [AWS Graviton2 Processor Overview](https://aws.amazon.com/ec2/graviton/). +To learn more about AWS Graviton processors, see the [AWS Graviton2 Processor Overview](https://aws.amazon.com/ec2/graviton/). ## CircleCI -**CircleCI** is a leading cloud-based **Continuous Integration and Continuous Delivery (CI/CD)** platform that automates the **building, testing, and deployment** of software projects. +CircleCI is a leading cloud-based Continuous Integration and Continuous Delivery (CI/CD) platform that automates the building, testing, and deployment of software projects. + +It seamlessly integrates with popular version control systems such as GitHub, Bitbucket, and GitLab, allowing developers to define automation workflows through a `.circleci/config.yml` file written in YAML syntax. + +CircleCI supports multiple execution environments, including Docker, Linux, macOS, and Windows, while providing advanced capabilities like parallel job execution, build caching, and matrix builds for optimized performance.
-It seamlessly integrates with popular version control systems such as **GitHub**, **Bitbucket**, and **GitLab**, allowing developers to define automation workflows through a `.circleci/config.yml` file written in **YAML syntax**. +It is widely adopted by development teams to accelerate build cycles, enforce code quality, automate testing, and streamline application delivery. -CircleCI supports multiple execution environments, including **Docker**, **Linux**, **macOS**, and **Windows**, while providing advanced capabilities like **parallel job execution**, **build caching**, and **matrix builds** for optimized performance. +To learn more, visit the [CircleCI website](https://circleci.com/) and the [CircleCI documentation](https://circleci.com/docs/). -It is widely adopted by development teams to **accelerate build cycles, enforce code quality, automate testing, and streamline application delivery**. -To learn more, visit the [official CircleCI website](https://circleci.com/) and explore its [documentation](https://circleci.com/docs/). diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circleci-runner-installation.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circleci-runner-installation.md index ad04b62737..a84e0dc5ac 100644 --- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circleci-runner-installation.md +++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circleci-runner-installation.md @@ -1,20 +1,18 @@ --- -title: Install CircleCI Machine Runner on AWS Graviton2 +title: Install CircleCI machine runner on AWS Graviton2 weight: 6 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Install CircleCI Machine Runner on AWS Graviton2 +## Install CircleCI machine runner on AWS Graviton2 -This guide provides step-by-step instructions to install and configure the **CircleCI Machine Runner** on an **AWS Graviton2 (Neoverse N1) instance**. -With this setup, your self-hosted **Arm64 environment** can efficiently execute CircleCI jobs directly on the Graviton2 architecture, enabling faster builds and improved performance for ARM-based workloads. +This Learning Path shows you how to install and configure the CircleCI Machine Runner on an AWS Graviton2 (Neoverse N1) instance. With this setup, your self-hosted Arm64 environment can efficiently execute CircleCI jobs directly on the Graviton2 architecture, enabling faster builds and improved performance for Arm-based workloads. -### Add CircleCI Package Repository -For **Debian/Ubuntu-based systems** running on **AWS Graviton2 (Arm64)**, first add the official CircleCI repository. -This ensures you can install the CircleCI Runner package directly using `apt`. +## Add the CircleCI package repository +For Debian/Ubuntu-based systems running on AWS Graviton2 (Arm64), first add the official CircleCI repository. This ensures you can install the CircleCI Runner package directly using `apt`: ```console curl -s https://packagecloud.io/install/repositories/circleci/runner/script.deb.sh?any=true | sudo bash @@ -24,8 +22,8 @@ curl -s https://packagecloud.io/install/repositories/circleci/runner/script.deb. - It configures the repository on your system, allowing `apt` to fetch and install the CircleCI runner package. - After successful execution, the CircleCI repository will be added under `/etc/apt/sources.list.d/`. 
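+Before moving on, you can optionally confirm that the repository was registered and that `apt` can see the runner package. A quick check, assuming a Debian or Ubuntu system:
+```console
+ls /etc/apt/sources.list.d/ | grep -i circleci
+apt-cache policy circleci-runner
+```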
-### Configure the Runner Token -- Each self-hosted runner requires a unique authentication token generated from your Resource Class in the CircleCI Dashboard. +## Configure the runner token +- Each self-hosted runner requires a unique authentication token generated from your resource class in the CircleCI dashboard. - Copy the token from the CircleCI web interface. - Export the token as an environment variable and update the runner configuration file as shown: @@ -34,25 +32,24 @@ export RUNNER_AUTH_TOKEN="YOUR_AUTH_TOKEN" sudo sed -i "s/<< AUTH_TOKEN >>/$RUNNER_AUTH_TOKEN/g" /etc/circleci-runner/circleci-runner-config.yaml ``` -### Install the CircleCI Runner -Install the pre-built CircleCI runner package: +## Install the CircleCI runner +To install the CircleCI runner, use the following command: ```console sudo apt-get install -y circleci-runner ``` - -- Installs the latest CircleCI Machine Runner compatible with your Arm64 instance. -- Runner binary and configuration files are located in `/usr/bin/` and `/etc/circleci-runner/`. -### Configure the Runner Authentication Token -Update the CircleCI runner configuration with your authentication token. This token is generated from the Resource Class you created in the CircleCI Dashboard. +This command installs the latest CircleCI Machine Runner for your Arm64 system. The runner program is placed in `/usr/bin/`, and its configuration files are stored in `/etc/circleci-runner/`. + +## Configure the runner authentication token +Update the CircleCI runner configuration with your authentication token. This token is generated from the resource class you created in the CircleCI Dashboard. ```console export RUNNER_AUTH_TOKEN="YOUR_AUTH_TOKEN" sudo sed -i "s/<< AUTH_TOKEN >>/$RUNNER_AUTH_TOKEN/g" /etc/circleci-runner/circleci-runner-config.yaml ``` -### Enable and Start the CircleCI Runner +## Enable and start the CircleCI runner Set the CircleCI runner service to start automatically and verify it is running: ```console @@ -88,6 +85,6 @@ Oct 17 06:19:13 ip-172-31-34-224 circleci-runner[2226]: 06:19:13 c34c1 22.514ms This confirms that the CircleCI Runner is actively connected to your CircleCI account and ready to accept jobs. -Also, you can verify it from the dashboard: +You can also verify it from the dashboard: -![Self-Hosted Runners alt-text#center](images/runner.png "Figure 1: Self-Hosted Runners ") +![Diagram showing the CircleCI self-hosted runner architecture. The main subject is an AWS Graviton2 server labeled as a self-hosted runner, connected to the CircleCI cloud platform. Arrows indicate job requests flowing from CircleCI to the runner and job results returning to CircleCI. The environment includes icons for cloud infrastructure and developer workstations. The tone is technical and informative. Any visible text in the image is transcribed as: Self-Hosted Runners. 
alt-text#center](images/runner.png "Self-Hosted Runners ") diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circlecli-installation.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circlecli-installation.md index 2e1234c548..755ed4e4a7 100644 --- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circlecli-installation.md +++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circlecli-installation.md @@ -6,29 +6,29 @@ weight: 4 layout: learningpathall --- -## Install CircleCI CLI on AWS Graviton2 (Neoverse N1) Instance -This guide explains installing the **CircleCI Command Line Interface (CLI)** on an **AWS Graviton2 (Neoverse N1) Arm64 EC2 instance**. -The CLI enables you to interact with CircleCI directly from your terminal — for validating configuration files, managing pipelines, and operating self-hosted runners on your EC2 instance. +## Install CircleCI CLI on AWS Graviton2 (Neoverse N1) instance +This section walks you through how to install the CircleCI command line interface (CLI) on an AWS Graviton2 (Neoverse N1) Arm64 EC2 instance. +With the CLI, you can work with CircleCI from your terminal to check configuration files, manage pipelines, and run self-hosted runners on your EC2 instance. -### Install Required Packages -Before installing the CircleCI CLI, ensure your system has the necessary tools for downloading and extracting files. +## Install the required packages +Before installing the CircleCI CLI, ensure your system has the necessary tools for downloading and extracting files: ```console sudo apt update && sudo apt install -y curl tar gzip coreutils gpg git ``` -### Download and Extract the CircleCI CLI +## Download and extract the CircleCI CLI -Next, download the CircleCI CLI binary for **Linux arm64** and extract it. +Next, download the CircleCI CLI binary for Linux arm64 and extract it: ```console curl -fLSs https://github.com/CircleCI-Public/circleci-cli/releases/download/v0.1.33494/circleci-cli_0.1.33494_linux_arm64.tar.gz | tar xz sudo mv circleci-cli_0.1.33494_linux_arm64/circleci /usr/local/bin/ ``` -- The `curl` command fetches the official **CircleCI CLI archive** from GitHub. +- The `curl` command fetches the official CircleCI CLI archive from GitHub. - The `| tar xz` command extracts the compressed binary in a single step. - After extraction, a new folder named **`circleci-cli_0.1.33494_linux_arm64`** appears in your current directory. -### Verify the Installation +## Verify the installation To ensure that the CLI is installed successfully, check its version: diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/instance.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/instance.md index cf6bf669d9..f715182c8c 100644 --- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/instance.md +++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/instance.md @@ -8,31 +8,30 @@ layout: learningpathall ## Overview -In this section, you will learn how to provision an **AWS Graviton2 Arm64 EC2 instance** on **Amazon Web Services (AWS)** using the **m6g.xlarge** instance type (2 vCPUs, 8 GB memory) in the **AWS Management Console**. +In this section, you'll learn how to provision an AWS Graviton2 Arm64 EC2 instance on Amazon Web Services (AWS) using the m6g.xlarge instance type (2 vCPUs, 8 GB memory) in the AWS Management Console. 
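+If you prefer working from a terminal, the AWS CLI can provision an equivalent instance. This is a minimal sketch rather than a replacement for the console steps below; the AMI ID and key pair name are placeholders you must replace, and the command assumes your default VPC and security group:
+```console
+aws ec2 run-instances \
+  --image-id <ubuntu-24.04-arm64-ami-id> \
+  --instance-type m6g.xlarge \
+  --key-name <your-key-pair> \
+  --count 1
+```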
{{% notice Note %}} -For support on AWS setup, see the Learning Path [Getting started with AWS](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/aws/). +For support on AWS setup, see the Learning Path [Getting started with AWS](/learning-paths/servers-and-cloud-computing/csp/aws/). {{% /notice %}} -## Provision an AWS EC2 Arm64 Graviton2 Instance in the AWS Management Console +## Provision the instance in the AWS Management Console -To create a virtual machine based on the AWS Graviton2 Instance type: +To create a virtual machine based on the AWS Graviton2 instance type, follow these steps: - Navigate to the [AWS Management Console](https://aws.amazon.com/console/). - Go to **EC2 > Instances** and select **Launch Instance**. - Under **Instance configuration**: - - Enter an appropriate **Instance name**. - - Choose an **Amazon Machine Image (AMI)** such as **Ubuntu 24.04 ARM64**. + - Enter an appropriate **Instance name**. + - Choose an **Amazon Machine Image (AMI)** such as **Ubuntu 24.04 ARM64**. - ![AWS Management Console alt-text#center](images/aws1.png "Figure 1: Amazon Machine Image (AMI)") - + ![AWS Management Console showing the Amazon Machine Image selection screen with Ubuntu 24.04 ARM64 highlighted. The interface displays a list of available AMIs, each with details such as name, architecture, and description. The wider environment includes navigation menus on the left and a search bar at the top. The mood is neutral and instructional, focused on guiding users through selecting an appropriate AMI. Visible text includes Amazon Machine Image, Ubuntu 24.04 ARM64, and related AMI details. alt-text#center](images/aws1.png "Amazon Machine Image (AMI)") - Under **Instance type**, select a Graviton2-based type `m6g.xlarge`. - ![AWS Management Console alt-text#center](images/aws2.png "Figure 2: Instance type") + ![AWS Management Console displaying the instance type selection screen with m6g.xlarge highlighted. The primary subject is the list of available EC2 instance types, each showing details such as name, vCPUs, memory, and architecture. The m6g.xlarge row is selected, indicating 2 vCPUs and 8 GB memory, with Arm64 architecture. The wider environment includes navigation menus on the left and a search bar at the top. Visible text includes Instance type, m6g.xlarge, vCPUs, Memory, and Arm64. The tone is neutral and instructional, guiding users to select the correct instance type. alt-text#center](images/aws2.png "Instance type") - Configure your **Key pair (login)** by either creating a new key pair or selecting an existing one to securely access your instance. - In **Network settings**, ensure that **Allow HTTP traffic from the internet** and **Allow HTTPS traffic from the internet** are checked. - ![AWS Management Console alt-text#center](images/aws3.png "Figure 3: Network settings") + ![AWS Management Console showing the Network settings configuration screen for launching an EC2 instance. The primary subject is the Network settings panel, where the options Allow HTTP traffic from the internet and Allow HTTPS traffic from the internet are both checked. The wider environment includes navigation menus on the left and a summary of instance configuration steps at the top. Visible text includes Network settings, Allow HTTP traffic from the internet, and Allow HTTPS traffic from the internet. The tone is neutral and instructional, guiding users to enable the correct network access for their instance.
alt-text#center](images/aws3.png "Network settings")

-  - Adjust **Storage** settings as needed — for most setups, 30 GB of gp3 (SSD) storage is sufficient.
-  - Click **Launch Instance** to create your EC2 virtual machine.
+  - Adjust the Storage settings. For most use cases, 30 GB of gp3 (SSD) storage is enough.
+  - Select **Launch Instance** to create your EC2 virtual machine.
diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/resource-class.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/resource-class.md
index 3e02ea50f4..ecd3df14e5 100644
--- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/resource-class.md
+++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/resource-class.md
@@ -1,40 +1,31 @@
---
-title: Create Resource Class in CircleCI
+title: Create a resource class in CircleCI
weight: 5

### FIXED, DO NOT MODIFY
layout: learningpathall
---

-## Create a Resource Class for Self-Hosted Runner in CircleCI
-This guide describes creating a **Resource Class** in the **CircleCI Web Dashboard** for a **self-hosted runner**.
-A Resource Class uniquely identifies the runner and links it to your CircleCI namespace, enabling jobs to run on your custom machine environment.
+## Overview

-### Steps
+This section describes how to create a resource class in the CircleCI Web Dashboard for a self-hosted runner. A resource class uniquely identifies the runner and links it to your CircleCI namespace, enabling jobs to run on your custom machine environment.

-1. **Go to the CircleCI Web Dashboard**
-   - From the left sidebar, navigate to **Self-Hosted Runners**.
-   - You’ll see a screen asking you to accept the **terms of use**.
-   - **Check the box** that says **“Yes, I agree to the terms”** to enable runners.
-   - Then click **Self-Hosted Runners** to continue setup.
+## Register a resource class for your CircleCI self-hosted runner

+To register a resource class for your CircleCI self-hosted runner, start by navigating to **Self-Hosted Runners** in the left sidebar of the CircleCI dashboard. You’ll be prompted to accept the terms of use; check the box labeled “Yes, I agree to the terms” to enable runners. Once you’ve agreed, select **Self-Hosted Runners** to continue with the setup process.

-![Self-Hosted Runners alt-text#center](images/shrunner0.png "Figure 1: Self-Hosted Runners ")
+![CircleCI dashboard showing the Self-Hosted Runners section. The main subject is the Self-Hosted Runners setup screen with a checkbox labeled Yes I agree to the terms and a button to enable runners. The wider environment includes the CircleCI dashboard interface with navigation sidebar and setup instructions. Visible text: Self-Hosted Runners, Yes I agree to the terms. The emotional tone is neutral and instructional. alt-text#center](images/shrunner0.png "Self-Hosted Runners section")

-2. **Create a New Resource Class**
-   - Click **Create Resource Class**.
+To create a new resource class, select **Create Resource Class**.

-![Self-Hosted Runners alt-text#center](images/shrunner1.png "Figure 2: Create Resource Class ")

-3. **Fill in the Details**
-   - **Namespace:** Your CircleCI username or organization (e.g., `circleci`)
-   - **Resource Class Name:** A descriptive name for your runner, such as `arm64`
+![CircleCI dashboard showing the Create Resource Class button. The main subject is the Self-Hosted Runners setup screen with a prominent button labeled Create Resource Class.
The wider environment includes the CircleCI dashboard interface with navigation sidebar and setup instructions. Visible text: Create Resource Class. The emotional tone is neutral and instructional. alt-text#center](images/shrunner1.png "Create Resource Class")

-![Self-Hosted Runners alt-text#center](images/shrunner2.png "Figure 3: Details Resource Class & Namespace")
+Fill in the details for your new resource class by entering your CircleCI username or organization in the **Namespace** field (for example, `circleci`). In the **Resource Class Name** field, provide a descriptive name for your runner, such as `arm64`, to clearly identify its purpose or architecture.

-4. **Save and Copy the Token**
-   - Once created, CircleCI will generate a **Resource Class Token**.
-   - Copy this token and store it securely — you will need it to register your runner on the AWS Arm VM.
+![CircleCI dashboard showing the form to create a resource class. The main subject is the Details section with fields for Namespace and resource class Name. The wider environment includes the CircleCI dashboard interface with navigation sidebar and setup instructions. Visible text: Namespace, resource class Name, Create resource class. The emotional tone is neutral and instructional. alt-text#center](images/shrunner2.png "Create a resource class")

-![Self-Hosted Runners alt-text#center](images/shrunner3.png "Figure 4: Resource Class Token")
-
-With your Resource Class and token ready, proceed to the next section to set up the CircleCI self-hosted runner.
+After creation, CircleCI generates a **Resource Class Token**. Copy this token and store it securely; you need it to register your runner on the AWS Arm VM.
+
+![CircleCI dashboard showing resource Class Token field and copy button. The main subject is the resource Class Token displayed in a text box, with a button labeled Copy next to it. The wider environment includes the CircleCI dashboard interface with navigation sidebar and setup instructions. The emotional tone is neutral and instructional. Visible text: resource class Token, Copy. alt-text#center](images/shrunner3.png "Resource Class Token field and copy button")
+
+With your resource class and token ready, proceed to the next section to set up the CircleCI self-hosted runner.
diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/validation.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/validation.md
index 385df18f9f..ed4a4d321e 100644
--- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/validation.md
+++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/validation.md
@@ -1,5 +1,5 @@
---
-title: Verify CircleCI Arm64 Self-Hosted Runner
+title: Verify the CircleCI Arm64 self-hosted runner
weight: 7

### FIXED, DO NOT MODIFY
@@ -8,9 +8,9 @@ layout: learningpathall

## Verify CircleCI Arm64 Self-Hosted Runner

-This guide demonstrates validating your **self-hosted CircleCI runner** on an **Arm64 machine** by executing a simple workflow and a test computation. This ensures your runner is correctly configured and ready to process jobs.
+This section walks you through validating your self-hosted CircleCI runner on an Arm64 machine by executing a simple workflow and a test computation. This ensures your runner is correctly configured and ready to process jobs.
-### Create a Test Repository
+## Create a test repository
Start by creating a GitHub repository dedicated to verifying your Arm64 runner:

```console
@@ -19,16 +19,23 @@ cd aws-circleci
```
This repository serves as a sandbox to confirm that your CircleCI runner can pick up and run jobs for Arm64 workflows.

-### Add a Sample Script
-Create a minimal shell script that will be used to confirm the runner executes commands correctly:
+## Add a sample script
+Create a minimal shell script to confirm your runner can execute commands:
+
+```bash
+echo 'echo "Hello from CircleCI Arm64 Runner!"' > hello.sh
+chmod +x hello.sh
+```
+
+This script prints a message when run, helping you verify that your self-hosted runner is working as expected.
-```console
-echo 'echo "Hello from CircleCI Arm64 Runner!"' > hello.sh
-chmod +x hello.sh
-```

-### Define the CircleCI Configuration
-Create a `.circleci/config.yml` file to define the workflow that will run on your Arm64 runner:
+## Define the CircleCI configuration
+Now create a `.circleci/config.yml` file to define the workflow that runs on your Arm64 runner:

```yaml
version: 2.1
@@ -59,13 +66,17 @@ workflows:
 jobs:
 - test-Arm64
```
-- Defines a single job `test-Arm64` using a machine executor on a self-hosted Arm64 runner.
-- Checks CPU architecture with `uname -m` and `lscpu` to verify the runner.
-- Executes a simple script `hello.sh` to confirm the runner can run commands.
-- Runs a sample computation step to display CPU info and print.
+This configuration does the following:
+
+- Defines a single job called `test-Arm64` that uses a machine executor on your self-hosted Arm64 runner
+- Verifies the runner's architecture by running `uname -m` and checking the output of `lscpu`
+- Runs the `hello.sh` script to confirm the runner can execute commands
+- Performs a sample computation step that displays CPU information and prints a success message

-### Commit and Push to GitHub
-Once all files you created (`hello.sh`, `.circleci/config.yml`) are ready, push your project to GitHub so CircleCI can build and verify the Arm64 runner automatically.
+Each step helps you confirm that your CircleCI Arm64 runner is set up correctly and ready to process jobs.
+
+## Commit and push to GitHub
+After you create `hello.sh` and `.circleci/config.yml`, push your project to GitHub so CircleCI can build and verify your Arm64 runner:

```console
git add .
@@ -74,36 +85,38 @@ git branch -M main
git push -u origin main
```

-- **Add Changes**: Stage all modified and new files using `git add .`.
-- **Commit Changes**: Commit the staged files with a descriptive message.
-- **Set Main Branch**: Rename the current branch to `main`.
-- **Add Remote Repository**: Link your local repository to GitHub.
-- **Push Changes**: Push the committed changes to the `main` branch on GitHub.
+Here's what each command does:
+- `git add .` stages all your files for commit
+- `git commit -m` saves your changes with a descriptive message
+- `git branch -M main` sets your branch to main (if it's not already)
+- `git push -u origin main` pushes your code to GitHub
+
+Once your code is on GitHub, CircleCI can start running your workflow automatically.
+## Start the CircleCI runner and run your job

-### Start CircleCI Runner and Execute Job
-Ensure that your CircleCI runner is enabled and started. This will allow your self-hosted runner to pick up jobs from CircleCI.
+Before you test your workflow, make sure your CircleCI runner is enabled and running.
This lets your self-hosted runner pick up jobs from CircleCI:

```console
sudo systemctl enable circleci-runner
sudo systemctl start circleci-runner
sudo systemctl status circleci-runner
```
-- **Enable CircleCI Runner**: Ensure the CircleCI runner is set to start automatically on boot.
-- **Start and Check Status**: Start the CircleCI runner and verify it is running.
+- Enable the runner so it starts automatically when your machine boots
+- Start the runner and check its status to confirm it is running

-After pushing your code to GitHub, open your **CircleCI Dashboard → Projects**, and confirm that your **test-Arm64 workflow** starts running using your **self-hosted runner**.
+After you push your code to GitHub, go to your CircleCI Dashboard and select Projects. Look for your test-Arm64 workflow and check that it is running on your self-hosted runner.

-If the setup is correct, you’ll see your job running under the resource class you created.
+If everything is set up correctly, you’ll see your job running under the resource class you created.

-### Output
-Once the job starts running, CircleCI will:
+## Output
+Once the job starts running, CircleCI does the following:

-- Verify Arm64 Runner:
+- It verifies the Arm64 Runner:

-  ![Self-Hosted Runners alt-text#center](images/runnerv1.png "Figure 1: Self-Hosted Runners ")
+  ![CircleCI self-hosted runner dashboard showing a successful Arm64 job execution. The main panel displays job status as successful with green check marks. The sidebar lists workflow steps including checkout, verify Arm64 runner, and run sample computation. The environment is a web interface with a clean, professional layout. The overall tone is positive and confirms successful validation of the self-hosted runner. alt-text#center](images/runnerv1.png "Verify the Arm64 runner")

-- Run sample computation:
+- It runs a sample computation:

-  ![Self-Hosted Runners alt-text#center](images/computation.png "Figure 1: Self-Hosted Runners ")
+  ![CircleCI dashboard displaying the results of a sample computation job on a self-hosted Arm64 runner. The main panel shows the job status as successful with green check marks. Workflow steps listed in the sidebar include checkout, verify Arm64 runner, and run sample computation. The environment is a modern web interface with a clean, organized layout. On-screen text includes Success and CPU Info. The overall tone is positive, confirming the successful execution of the computation step on the Arm64 runner. alt-text#center](images/computation.png "Sample computation")

All CircleCI jobs have run successfully, the sample computation completed, and all outputs are visible in the CircleCI Dashboard.
diff --git a/content/learning-paths/servers-and-cloud-computing/gardener-gcp/_index.md b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/_index.md
new file mode 100644
index 0000000000..fc1f9a0288
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/_index.md
@@ -0,0 +1,72 @@
+---
+title: Deploy Gardener on Google Cloud C4A (Arm-based Axion VMs)
+
+draft: true
+cascade:
+  draft: true
+
+minutes_to_complete: 50
+
+who_is_this_for: This learning path is intended for software developers deploying and optimizing Gardener workloads on Linux/Arm64 environments, specifically using Google Cloud C4A virtual machines powered by Axion processors.
+ +learning_objectives: + - Provision an Arm-based SUSE SLES virtual machine on Google Cloud (C4A with Axion processors) + - Install and configure Gardener on a SUSE Arm64 (C4A) instance + - Deploy Garden, Seed, and Shoot clusters locally using KinD + - Validate Gardener functionality by deploying workloads into a Shoot cluster + - Perform baseline security benchmarking of Gardener-managed Kubernetes clusters using kube-bench on Arm64 + +prerequisites: + - A [Google Cloud Platform (GCP)](https://cloud.google.com/free) account with billing enabled + - Basic familiarity with [Kubernetes](https://kubernetes.io/) + - Familiarity with container concepts ([Docker](https://www.docker.com/)) + +author: Pareena Verma + +##### Tags +skilllevels: Introductory +subjects: Containers and Virtualization +cloud_service_providers: Google Cloud + +armips: + - Neoverse + +tools_software_languages: + - Gardener + - Kubernetes + - Docker + - KinD + - Helm + - kube-bench + +operatingsystems: + - Linux + +# ================================================================================ +# FIXED, DO NOT MODIFY +# ================================================================================ +further_reading: + - resource: + title: Gardener documentation + link: https://gardener.cloud/ + type: documentation + + - resource: + title: Gardener GitHub repository + link: https://github.com/gardener/gardener + type: documentation + + - resource: + title: Kubernetes documentation + link: https://kubernetes.io/docs/ + type: documentation + + - resource: + title: kube-bench documentation + link: https://github.com/aquasecurity/kube-bench + type: documentation + +weight: 1 +layout: "learningpathall" +learning_path_main_page: "yes" +--- diff --git a/content/learning-paths/servers-and-cloud-computing/gardener-gcp/_next-steps.md b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/_next-steps.md new file mode 100644 index 0000000000..c3db0de5a2 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/_next-steps.md @@ -0,0 +1,8 @@ +--- +# ================================================================================ +# FIXED, DO NOT MODIFY THIS FILE +# ================================================================================ +weight: 21 # Set to always be larger than the content in this path to be at the end of the navigation. +title: "Next Steps" # Always the same, html page title. +layout: "learningpathall" # All files under learning paths have this same wrapper for Hugo processing. +--- diff --git a/content/learning-paths/servers-and-cloud-computing/gardener-gcp/background.md b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/background.md new file mode 100644 index 0000000000..83255edbf4 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/background.md @@ -0,0 +1,25 @@ +--- +title: Getting started with Gardener on Google Axion C4A (Arm Neoverse-V2) + +weight: 2 + +layout: "learningpathall" +--- + +## Google Axion C4A Arm instances in Google Cloud + +Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse-V2 cores. Designed for high-performance and energy-efficient computing, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications. 
+
+The C4A series provides a cost-effective alternative to x86 virtual machines while leveraging the scalability and performance benefits of the Arm architecture in Google Cloud.
+
+To learn more about Google Axion, refer to the [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu) blog.
+
+## Gardener
+
+Gardener is an open-source, Kubernetes-native system for managing and operating Kubernetes clusters at scale. It enables automated creation, update, healing, and deletion of clusters across multiple cloud and on-prem providers.
+
+Gardener uses Kubernetes APIs and CRDs to declaratively manage clusters in a cloud-agnostic way. It follows a **Garden–Seed–Shoot** architecture to separate control planes from workload clusters.
+
+Gardener is widely used to build reliable internal developer platforms and operate thousands of Kubernetes clusters.
+
+To learn more, visit the Gardener [official website](https://gardener.cloud/) and explore the [documentation](https://gardener.cloud/docs/).
diff --git a/content/learning-paths/servers-and-cloud-computing/gardener-gcp/baseline.md b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/baseline.md
new file mode 100644
index 0000000000..f02c532a9f
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/baseline.md
@@ -0,0 +1,306 @@
+---
+title: Gardener Baseline Testing on Google Axion C4A Arm Virtual Machine
+weight: 5
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Gardener Baseline Testing on GCP SUSE VMs
+This section checks whether your Gardener Local setup is working correctly on an Arm-based GCP Axion (C4A) VM before running real workloads.
+
+### Set Kubeconfig
+This tells Kubernetes commands (`kubectl`) which cluster to talk to. Without this, `kubectl` won’t know where your Gardener cluster is.
+``` console
+export KUBECONFIG=$PWD/example/gardener-local/kind/local/kubeconfig
+```
+
+### Check Cluster Health
+Before testing any workload, verify that the Gardener-local Kubernetes cluster is healthy. This ensures the control plane and node are functional.
+
+``` console
+kubectl get nodes -o wide
+kubectl get pods -A
+```
+You should see an output similar to:
+
+```output
+NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
+gardener-local-control-plane Ready control-plane 148m v1.32.5 172.18.0.2 <none> Debian GNU/Linux 12 (bookworm) 5.14.21-150500.55.124-default containerd://2.1.1
+extension-networking-calico-8z7jw gardener-extension-networking-calico-94bcb44bf-kmmpj 1/1 Running 0 102m
+extension-networking-calico-8z7jw gardener-extension-networking-calico-94bcb44bf-whgtn 1/1 Running 0 135m
+extension-provider-local-m7d79 gardener-extension-provider-local-fc75c4494-47szg 1/1 Running 0 137m
+extension-provider-local-m7d79 gardener-extension-provider-local-fc75c4494-hkksz 1/1 Running 0 137m
+garden dependency-watchdog-prober-d47b5899f-ml6x9 1/1 Running 0 61m
+garden dependency-watchdog-prober-d47b5899f-xmzh2 1/1 Running 0 60m
+garden dependency-watchdog-weeder-66f8bffd8b-lgx7f 1/1 Running 0 60m
+garden dependency-watchdog-weeder-66f8bffd8b-vd9md 1/1 Running 0 61m
+garden etcd-0 1/1 Running 0 141m
+garden etcd-druid-65d56db866-bstcm 1/1 Running 0 139m
+garden etcd-druid-65d56db866-zkfjb 1/1 Running 0 139m
+garden fluent-bit-8259c-s5wnv 1/1 Running 0 139m
+garden fluent-operator-5b9ff5bfb7-6ffvc 1/1 Running 0 137m
+garden fluent-operator-5b9ff5bfb7-cw67l 1/1 Running 0 137m
+garden gardener-admission-controller-899c585bf-2mp9g 1/1 Running 2 (141m ago) 141m
+garden gardener-admission-controller-899c585bf-xp2f4 1/1 Running 2 (141m ago) 141m
+garden gardener-apiserver-54fcdfcd97-5zkgr 1/1 Running 0 141m
+garden gardener-controller-manager-77bf4b686f-zxgsh 1/1 Running 3 (140m ago) 141m
+garden gardener-extension-admission-local-57d674d98f-6qbcv 1/1 Running 0 136m
+garden gardener-extension-admission-local-57d674d98f-zlgpd 1/1 Running 0 135m
+garden gardener-resource-manager-cfd685fc5-n9mp7 1/1 Running 0 133m
+garden gardener-resource-manager-cfd685fc5-spbn7 1/1 Running 0 134m
+garden gardener-scheduler-6599d654c9-vw2q5 1/1 Running 0 141m
+garden gardenlet-59cb4b6956-hsmdp 1/1 Running 0 96m
+garden kube-state-metrics-seed-f89d48b49-94l46 1/1 Running 0 121m
+garden kube-state-metrics-seed-f89d48b49-q95kr 1/1 Running 0 130m
+garden nginx-ingress-controller-5bb9b58c44-ck2q7 1/1 Running 0 139m
+garden nginx-ingress-controller-5bb9b58c44-r8wwd 1/1 Running 0 139m
+garden nginx-ingress-k8s-backend-5547dddffd-fqsfm 1/1 Running 0 139m
+garden perses-operator-9f9694dcd-wvl5z 1/1 Running 0 139m
+garden plutono-776964667b-225r7 2/2 Running 0 139m
+garden prometheus-aggregate-0 2/2 Running 0 87m
+garden prometheus-cache-0 2/2 Running 0 22m
+garden prometheus-operator-8447dc86f9-6mb25 1/1 Running 0 139m
+garden prometheus-seed-0 2/2 Running 0 87m
+garden vali-0 2/2 Running 0 139m
+garden vpa-admission-controller-76b4c99684-lkf27 1/1 Running 0 30m
+garden vpa-admission-controller-76b4c99684-tkg7n 1/1 Running 0 81m
+garden vpa-recommender-5b668455db-fctrs 1/1 Running 0 139m
+garden vpa-recommender-5b668455db-sdpv6 1/1 Running 0 139m
+garden vpa-updater-7dd7dccc6d-dgg7r 1/1 Running 0 131m
+garden vpa-updater-7dd7dccc6d-whlqx 1/1 Running 0 133m
+gardener-extension-provider-local-coredns coredns-69d964db7f-mrmq9 1/1 Running 0 139m
+istio-ingress istio-ingressgateway-5b48596bf9-4pzsw 1/1 Running 0 139m
+istio-ingress istio-ingressgateway-5b48596bf9-ff4zp 1/1 Running 0 139m
+istio-system istiod-769565bbdb-2hnzz 1/1 Running 0 76m
+istio-system istiod-769565bbdb-wlbts 1/1 Running 0 77m
+kube-system calico-kube-controllers-bfc8cf74c-pj9hh 1/1 Running 0 148m
+kube-system calico-node-88sdt 1/1 Running 0 148m
+kube-system coredns-54bf7d48d5-j6zbg 1/1 Running 0 148m
+kube-system coredns-54bf7d48d5-zrqqc 1/1 Running 0 148m
+kube-system etcd-gardener-local-control-plane 1/1 Running 0 148m
+kube-system kube-apiserver-gardener-local-control-plane 1/1 Running 0 148m
+kube-system kube-controller-manager-gardener-local-control-plane 1/1 Running 0 148m
+kube-system kube-proxy-fxxzc 1/1 Running 0 148m
+kube-system kube-scheduler-gardener-local-control-plane 1/1 Running 0 148m
+kube-system metrics-server-78b7d676c8-cjwrs 1/1 Running 0 148m
+local-path-storage local-path-provisioner-7dc846544d-m825q 1/1 Running 0 148m
+registry registry-c85bbb98c-lqtcj 1/1 Running 0 148m
+registry registry-europe-docker-pkg-dev-7956694cfb-hbg69 1/1 Running 0 148m
+registry registry-gcr-6d4b454594-b9plv 1/1 Running 0 148m
+registry registry-k8s-5bf5795799-t44xd 1/1 Running 0 148m
+registry registry-quay-84dbcd78b4-dw2pn 1/1 Running 0 148m
+shoot--local--local blackbox-exporter-58c4f64c97-l96ct 1/1 Running 0 104m
+shoot--local--local blackbox-exporter-58c4f64c97-nlhjj 1/1 Running 0 105m
+shoot--local--local cluster-autoscaler-b894888d6-qwrpp 1/1 Running 0 116m
+shoot--local--local etcd-events-0 2/2 Running 0 136m
+shoot--local--local etcd-main-0 2/2 Running 0 136m
+shoot--local--local event-logger-777b7b7c7c-77h9n 1/1 Running 0 133m
+shoot--local--local gardener-resource-manager-764b5d4f97-bdd8n 1/1 Running 0 118m
+shoot--local--local gardener-resource-manager-764b5d4f97-z48b5 1/1 Running 0 129m
+shoot--local--local kube-apiserver-6545887cc9-26h5w 1/1 Running 0 124m
+shoot--local--local kube-apiserver-6545887cc9-gf92k 1/1 Running 0 98m
+shoot--local--local kube-controller-manager-555b598dbf-45n8v 1/1 Running 0 122m
+shoot--local--local kube-scheduler-695d49b6c5-xr7hp 1/1 Running 0 125m
+shoot--local--local kube-state-metrics-76cc7bb4f9-xq4g2 1/1 Running 0 130m
+shoot--local--local machine-controller-manager-775dc6d574-mntqt 2/2 Running 0 111m
+shoot--local--local machine-shoot--local--local-local-68499-nhvjl 1/1 Running 0 131m
+shoot--local--local plutono-869d676bb9-jjwcx 2/2 Running 0 133m
+shoot--local--local prometheus-shoot-0 2/2 Running 0 95m
+shoot--local--local vali-0 4/4 Running 0 133m
+shoot--local--local vpa-admission-controller-bcc4c968c-8ndg8 1/1 Running 0 133m
+shoot--local--local vpa-admission-controller-bcc4c968c-r6lnt 1/1 Running 0 72m
+shoot--local--local vpa-recommender-b49f4dd7c-mk9sx 1/1 Running 0 107m
+shoot--local--local vpa-updater-6cc999b5bc-jcrbg 1/1 Running 0 123m
+shoot--local--local vpn-seed-server-7497c89db-b5p5c 2/2 Running 0 15m
+```
+
+### Deploy a Test Nginx Pod
+This step deploys a simple web server (nginx) to confirm that workloads can run.
+- Creates one nginx pod
+- Confirms Kubernetes can pull images and start containers
+
+When the pod status becomes Running, workload deployment works.
+
+``` console
+kubectl run test-nginx --image=nginx --restart=Never
+kubectl get pod test-nginx -w
+```
+- `kubectl run test-nginx` → Creates a single nginx pod.
+- `kubectl get pod test-nginx -w` → Watches pod status in real time.
+
+You should see an output similar to:
+
+```output
+>pod/test-nginx created
+> kubectl get pod test-nginx -w
+NAME READY STATUS RESTARTS AGE
+test-nginx 0/1 ContainerCreating 0 0s
+test-nginx 0/1 ContainerCreating 0 1s
+test-nginx 1/1 Running 0 4s
+```
+
+Now press Ctrl+C in the SSH shell to stop the watch command.
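+
+If you prefer a non-interactive check, you can block until the pod reports Ready instead of watching it. This is an optional, minimal sketch using `kubectl wait` with the same pod name as above:
+
+``` console
+kubectl wait --for=condition=Ready pod/test-nginx --timeout=120s
+```
+
+On success the command prints `pod/test-nginx condition met` and exits, confirming the same thing as the interactive watch.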
+
+### Expose the Pod (ClusterIP Service)
+Pods cannot be accessed directly by other pods reliably.
+So we create a Kubernetes Service.
+- The service gives nginx a stable internal IP
+- It allows other pods to reach nginx using a name
+
+This confirms Kubernetes service networking is working.
+
+``` console
+kubectl expose pod test-nginx --port=80 --name=test-nginx-svc
+kubectl get svc test-nginx-svc
+```
+- `kubectl expose pod` → Creates a ClusterIP service on port 80.
+- `kubectl get svc` → Shows the service details.
+
+You should see an output similar to:
+
+```output
+>service/test-nginx-svc exposed
+> kubectl get svc test-nginx-svc
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+test-nginx-svc ClusterIP 10.2.194.17 <none> 80/TCP 9s
+```
+A ClusterIP is assigned (example: 10.2.194.17). This confirms that Kubernetes services are functioning.
+
+### Test Service-to-Pod Connectivity
+Now we verify that one pod can talk to another pod through a service.
+
+- Start a temporary curl pod
+- Send an HTTP request to the nginx service
+
+**Start a curl pod:** Create a temporary curl pod.
+
+``` console
+kubectl run curl --image=curlimages/curl -i --tty -- sh
+```
+
+Inside pod shell:
+
+``` console
+curl http://test-nginx-svc
+```
+Exit shell:
+
+``` console
+exit
+```
+
+You should see an output similar to:
+
+```output
+All commands and output from this session will be recorded in container logs, including credentials and sensitive information passed through the command prompt.
+If you don't see a command prompt, try pressing enter.
+~ $ curl http://test-nginx-svc
+<!DOCTYPE html>
+<html>
+<head>
+<title>Welcome to nginx!</title>
+</head>
+<body>
+<h1>Welcome to nginx!</h1>
+<p>If you see this page, the nginx web server is successfully installed and
+working. Further configuration is required.</p>
+
+<p>For online documentation and support please refer to
+<a href="http://nginx.org/">nginx.org</a>.<br/>
+Commercial support is available at
+<a href="http://nginx.com/">nginx.com</a>.</p>
+
+<p><em>Thank you for using nginx.</em></p>
+</body>
+</html>
+~ $ exit
+Session ended, resume using 'kubectl attach curl -c curl -i -t' command when the pod is running
+```
+This confirms pod-to-service networking.
+
+- Creates a curl container with an interactive shell.
+- Uses curl to send an HTTP request to the nginx service.
+
+### Test DNS Resolution
+Ensures CoreDNS is functioning and services resolve properly. Run `nslookup` inside the curl pod to check DNS service discovery.
+
+- `nslookup test-nginx-svc` checks if DNS can resolve the service name
+- `CoreDNS` is responsible for this
+
+If DNS resolves correctly, service discovery is healthy.
+
+``` console
+kubectl exec curl -- nslookup test-nginx-svc.default.svc.cluster.local
+```
+
+You should see an output similar to:
+
+```output
+Server: 10.2.0.10
+Address: 10.2.0.10:53
+
+Name: test-nginx-svc.default.svc.cluster.local
+Address: 10.2.194.17
+```
+If DNS fails, networking or CoreDNS is broken.
+
+### Test Logs and Exec
+This confirms two important Kubernetes features:
+- Logs – you can debug applications
+- Exec – you can run commands inside containers
+
+If logs show nginx startup and exec returns version info, pod access works.
+
+``` console
+kubectl logs test-nginx | head
+kubectl exec test-nginx -- nginx -v
+```
+- `kubectl logs` → Shows nginx pod logs.
+- `kubectl exec` → Runs nginx -v inside the pod.
+
+You should see an output similar to:
+
+```output
+/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
+/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
+/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
+10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
+10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
+/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
+/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
+/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
+/docker-entrypoint.sh: Configuration complete; ready for start up
+2025/11/25 07:50:33 [notice] 1#1: using the "epoll" event method
+
+> kubectl exec test-nginx -- nginx -v
+nginx version: nginx/1.29.3
+```
+- Logs show nginx starting.
+- Exec shows nginx version (e.g., `nginx version: nginx/1.29.3`).
+
+### Delete Test Resources
+Once testing is complete, temporary resources should be removed.
+- Deletes nginx and curl pods
+- Deletes the service
+
+``` console
+kubectl delete pod test-nginx curl
+kubectl delete svc test-nginx-svc
+```
+Confirms cleanup works and keeps the cluster clean.
+
+You should see an output similar to:
+
+```output
+pod "test-nginx" deleted from default namespace
+pod "curl" deleted from default namespace
+> kubectl delete svc test-nginx-svc
+service "test-nginx-svc" deleted from default namespace
+```
+After completing these steps, you have confirmed that the Kubernetes cluster and Gardener setup are healthy, core components are functioning correctly, pods start successfully, networking and services operate as expected, DNS resolution works, and the cluster is ready to run real workloads.
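+
+If you expect to repeat this validation, you can chain a few of the same checks into one short script. The following is an optional sketch that only combines commands already shown in this section and assumes the same kubeconfig path:
+
+```bash
+#!/usr/bin/env bash
+# Optional smoke test for the Gardener local setup (sketch).
+set -euo pipefail
+
+export KUBECONFIG=$PWD/example/gardener-local/kind/local/kubeconfig
+
+kubectl get nodes -o wide                              # control plane and node healthy?
+kubectl run test-nginx --image=nginx --restart=Never   # workloads start?
+kubectl wait --for=condition=Ready pod/test-nginx --timeout=120s
+kubectl expose pod test-nginx --port=80 --name=test-nginx-svc
+kubectl exec test-nginx -- nginx -v                    # exec into the pod works?
+kubectl logs test-nginx | head                         # logs are accessible?
+kubectl delete pod test-nginx                          # clean up
+kubectl delete svc test-nginx-svc
+```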
diff --git a/content/learning-paths/servers-and-cloud-computing/gardener-gcp/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/benchmarking.md
new file mode 100644
index 0000000000..8923067e7b
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/benchmarking.md
@@ -0,0 +1,165 @@
+---
+title: Gardener Benchmarking
+weight: 6
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+
+## Gardener Benchmark on GCP SUSE Arm64 VM
+This guide shows how to check the security health of a Gardener Kubernetes cluster running on a GCP SUSE Arm64 VM. We use a tool called **kube-bench**, which checks the cluster against CIS security standards.
+
+### Prerequisites
+Before starting, make sure:
+- Gardener Local is successfully installed
+- The Garden cluster and Shoot cluster are in the Ready state
+- Docker is running
+- You have admin access on the VM
+
+**Why this matters:**
+If the cluster is not running or you don’t have admin access, security checks won’t work.
+
+**Verify cluster:**
+
+```console
+cd ~/gardener
+export KUBECONFIG=$PWD/example/gardener-local/kind/local/kubeconfig
+kubectl apply -f example/provider-local/shoot.yaml
+kubectl -n garden-local get shoots
+kubectl get nodes
+```
+If the cluster is not ready, wait until it is before running the benchmark.
+
+### Download kube-bench
+Download the Arm64-compatible kube-bench binary from the official GitHub release. This tool will be used to check your Kubernetes cluster against CIS security benchmarks.
+
+```console
+cd ~
+curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.10.3/kube-bench_0.10.3_linux_arm64.tar.gz --output ./kube-bench_0.10.3_linux_arm64.tar.gz
+```
+
+### Extract and Install kube-bench Configuration
+Extract the downloaded file and place the kube-bench binary and configuration files in standard system locations so the tool can run correctly.
+
+```console
+tar -xvf kube-bench_0.10.3_linux_arm64.tar.gz
+sudo mkdir -p /etc/kube-bench
+sudo cp -r cfg /etc/kube-bench/
+sudo mv kube-bench /usr/local/bin/
+```
+
+**Verify the binary:**
+
+Confirm that the kube-bench binary is installed in the correct path:
+
+```console
+ls -l /usr/local/bin/kube-bench
+```
+
+**Make it executable:**
+
+Make the binary executable so the system can run the benchmarking tool:
+
+```console
+sudo chmod +x /usr/local/bin/kube-bench
+```
+
+### Run kube-bench Benchmark
+Execute kube-bench to scan the Kubernetes cluster and evaluate its security based on industry-standard CIS checks.
+
+```console
+sudo /usr/local/bin/kube-bench --config-dir /etc/kube-bench/cfg
+```
+You should see an output similar to:
+
+```output
+5.2.11 Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers that have `.securityContext.windowsOptions.hostProcess` set to `true`.
+
+5.2.12 Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers with `hostPath` volumes.
+
+5.2.13 Add policies to each namespace in the cluster which has user workloads to restrict the
+admission of containers which use `hostPort` sections.
+
+5.3.1 If the CNI plugin in use does not support network policies, consideration should be given to
+making use of a different plugin, or finding an alternate mechanism for restricting traffic
+in the Kubernetes cluster.
+ +5.3.2 Follow the documentation and create NetworkPolicy objects as you need them. + +5.4.1 If possible, rewrite application code to read Secrets from mounted secret files, rather than +from environment variables. + +5.4.2 Refer to the Secrets management options offered by your cloud provider or a third-party +secrets management solution. + +5.5.1 Follow the Kubernetes documentation and setup image provenance. + +5.7.1 Follow the documentation and create namespaces for objects in your deployment as you need +them. + +5.7.2 Use `securityContext` to enable the docker/default seccomp profile in your pod definitions. +An example is as below: + securityContext: + seccompProfile: + type: RuntimeDefault + +5.7.3 Follow the Kubernetes documentation and apply SecurityContexts to your Pods. For a +suggested list of SecurityContexts, you may refer to the CIS Security Benchmark for Docker +Containers. + +5.7.4 Ensure that namespaces are created to allow for appropriate segregation of Kubernetes +resources and that all new resources are created in a specific namespace. + + +== Summary policies == +4 checks PASS +4 checks FAIL +27 checks WARN +0 checks INFO + +== Summary total == +43 checks PASS +38 checks FAIL +49 checks WARN +0 checks INFO +``` + +### Benchmark summary +Results from the earlier run on the `c4a-standard-4` (4 vCPU, 16 GB memory) Arm64 VM in GCP (SUSE) will be similar to: + +| Category / Subsection | PASS | FAIL | WARN | +|-------------------------------------------------|:----:|:----:|:----:| +| **1. Control Plane Security Configuration** | | | | +| └─ 1.1 Control Plane Node Configuration Files | 0 | 18 | 3 | +| └─ 1.2 API Server | 9 | 7 | 5 | +| └─ 1.3 Controller Manager | 5 | 0 | 1 | +| └─ 1.4 Scheduler | 1 | 1 | 0 | +| **2. Etcd Node Configuration** | | | | +| └─ 2.1-2.7 Etcd Node Config | 7 | 0 | 0 | +| **3. Control Plane Configuration** | | | | +| └─ 3.1 Authentication and Authorization | 0 | 0 | 3 | +| └─ 3.2 Logging | 1 | 0 | 1 | +| **4. Worker Node Security Configuration** | | | | +| └─ 4.1 Worker Node Configuration Files | 2 | 5 | 4 | +| └─ 4.2 Kubelet | 5 | 3 | 3 | +| └─ 4.3 kube-proxy | 1 | 0 | 0 | +| **5. Kubernetes Policies** | | | | +| └─ 5.1 RBAC and Service Accounts | 2 | 4 | 6 | +| └─ 5.2 Pod Security Standards | 2 | 0 | 9 | +| └─ 5.3 Network Policies and CNI | 0 | 0 | 2 | +| └─ 5.4 Secrets Management | 0 | 0 | 2 | +| └─ 5.5 Extensible Admission Control | 0 | 0 | 1 | +| └─ 5.7 General Policies | 0 | 0 | 4 | +| **Total** | 43 | 38 | 49 | + +### Conclusions +- **Strong Baseline Security:** The cluster passed **43 CIS checks**, indicating a solid foundational security posture out of the box on **Arm64 (C4A)** infrastructure. +- **Control Plane Hardening Gaps:** A significant number of **FAIL results (38)** are concentrated in **control plane file permissions and API server settings**, which are commonly unmet in development and local/KinD-based setups. +- **Healthy Etcd Configuration:** All **Etcd-related checks passed (7/7)**, demonstrating correct encryption, access controls, and secure peer/client configuration on **Arm64**. +- **Worker Node Improvements Needed:** Worker node checks show mixed results, with failures mainly around **kubelet configuration and file permissions**, highlighting clear opportunities for security hardening. +- **Policy-Level Defaults:** Most **Kubernetes policy checks surfaced as WARN**, reflecting features such as **Pod Security Standards, NetworkPolicies, and admission controls** being optional or not strictly enforced by default. 
+- **Arm64 Parity with x86_64:** The overall benchmark profile aligns with typical results seen on **x86_64 clusters**, confirming that **Arm64 introduces no architecture-specific security limitations**. +- **Production Readiness Signal:** With targeted remediation—especially for **control plane and kubelet configurations**—the **Arm64-based cluster** can achieve **full CIS compliance** while benefiting from **Arm’s cost and energy efficiency**. diff --git a/content/learning-paths/servers-and-cloud-computing/gardener-gcp/images/gcp-shell.png b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/images/gcp-shell.png new file mode 100644 index 0000000000..7e2fc3d1b5 Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/images/gcp-shell.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/gardener-gcp/images/gcp-ssh.png b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/images/gcp-ssh.png new file mode 100644 index 0000000000..597ccd7fea Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/images/gcp-ssh.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/gardener-gcp/images/gcp-vm.png b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/images/gcp-vm.png new file mode 100644 index 0000000000..0d1072e20d Binary files /dev/null and b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/images/gcp-vm.png differ diff --git a/content/learning-paths/servers-and-cloud-computing/gardener-gcp/installation.md b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/installation.md new file mode 100644 index 0000000000..6172e82a70 --- /dev/null +++ b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/installation.md @@ -0,0 +1,401 @@ +--- +title: Install Gardener +weight: 4 + +### FIXED, DO NOT MODIFY +layout: learningpathall +--- + +## Install Gardener on GCP SUSE VM +This guide walks you through a clean, corrected, and fully working setup of Gardener Local on a GCP SUSE Arm64 VM, including installation of Go 1.24, the Arm64 yq binary, all required Gardener CLI tools, KinD cluster setup, shoot creation, and kubeconfig retrieval. + +### Update System +This step updates the operating system packages to the latest versions to avoid bugs and compatibility issues. + +``` console +sudo zypper refresh +sudo zypper update -y +``` +### Enable SUSE Containers Module +This enables SUSE’s official container support, so Docker and container tools can work properly. + +``` console +sudo SUSEConnect -p sle-module-containers/15.5/arm64 +sudo SUSEConnect --list-extensions | grep Containers +``` + +You should see "Activated" as part of the output from the above commands. + +### Install Docker +Docker is required to run KinD and Kubernetes components as containers. This step installs Docker, starts it, and allows your user to run Docker without sudo. +``` console +sudo zypper refresh +sudo zypper install -y docker +sudo systemctl enable --now docker +sudo usermod -aG docker $USER +exit +``` + +Next, re-open a new shell into your VM and type the following: + +```console +docker ps +``` + +You should see the following output: + +```output +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +``` + +### Install Go 1.24 (Manual) +Gardener requires a newer Go version than what SUSE provides by default. Here, Go 1.24 is downloaded and installed manually. 
+
+``` console
+cd /tmp
+curl -LO https://go.dev/dl/go1.24.0.linux-arm64.tar.gz
+sudo rm -rf /usr/local/go
+sudo tar -C /usr/local -xzf go1.24.0.linux-arm64.tar.gz
+echo 'export PATH=$PATH:/usr/local/go/bin' >> ~/.bashrc
+source ~/.bashrc
+go version
+```
+You should see an output similar to:
+
+```output
+go version go1.24.0 linux/arm64
+```
+
+### Install Git and Build Tools
+These tools are needed to download the Gardener source code and compile components during setup.
+
+``` console
+sudo zypper install -y git curl tar gzip make gcc
+```
+
+### Install kubectl
+`kubectl` is the command-line tool for interacting with Kubernetes clusters. It lets you check nodes, pods, and cluster status.
+
+``` console
+curl -LO https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl
+chmod +x kubectl
+sudo mv kubectl /usr/local/bin/
+kubectl version --client
+```
+
+You should see an output similar to:
+
+```output
+Client Version: v1.34.0
+Kustomize Version: v5.7.1
+```
+
+### Install Helm
+Helm is used to install and manage Kubernetes applications. Gardener uses Helm internally to deploy its components.
+
+``` console
+curl -sSfL https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
+chmod 755 ./get_helm.sh
+./get_helm.sh
+helm version
+```
+
+You should see an output similar to:
+
+```output
+version.BuildInfo{Version:"v3.19.2", GitCommit:"8766e718a0119851f10ddbe4577593a45fadf544", GitTreeState:"clean", GoVersion:"go1.24.9"}
+```
+
+### Install yq
+`yq` is a YAML processing tool used by Gardener scripts to read and modify configuration files.
+
+``` console
+sudo curl -L -o /usr/local/bin/yq https://github.com/mikefarah/yq/releases/download/v4.43.1/yq_linux_arm64
+sudo chmod +x /usr/local/bin/yq
+yq --version
+```
+
+You should see an output similar to:
+
+```output
+yq (https://github.com/mikefarah/yq/) version v4.43.1
+```
+
+### Install Kustomize
+Kustomize helps customize Kubernetes YAML files without changing the original manifests.
+
+``` console
+curl -LO https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize/v5.3.0/kustomize_v5.3.0_linux_arm64.tar.gz
+tar -xvf kustomize_v5.3.0_linux_arm64.tar.gz
+sudo mv kustomize /usr/local/bin/
+kustomize version
+```
+
+You should see an output (Kustomize version) that is similar to:
+
+```output
+v5.3.0
+```
+
+### Install Kind
+Kind (Kubernetes in Docker) creates a local Kubernetes cluster inside Docker. Gardener Local runs entirely on this KinD cluster.
+
+``` console
+curl -Lo kind https://kind.sigs.k8s.io/dl/v0.30.0/kind-linux-arm64
+chmod +x kind
+sudo mv kind /usr/local/bin/
+kind version
+```
+
+You should see an output similar to:
+
+```output
+kind v0.30.0 go1.24.6 linux/arm64
+```
+
+### Add Required Loopback IPs
+These special loopback IPs are needed so Gardener services and the local API endpoints work correctly.
+
+``` console
+sudo ip addr add 172.18.255.1/32 dev lo
+sudo ip addr add 172.18.255.22/32 dev lo
+ip addr show lo
+```
+
+### Add Hosts Entry
+This step maps a Gardener domain name to the local machine so services can be accessed by name.
+
+``` console
+echo "127.0.0.1 garden.local.gardener.cloud" | sudo tee -a /etc/hosts
+```
+
+You should see an output similar to:
+
+```output
+127.0.0.1 garden.local.gardener.cloud
+```
+
+### Clone Gardener Repo
+Here you download the Gardener source code and switch to a known, stable release version.
+
+``` console
+cd ~
+git clone https://github.com/gardener/gardener.git
+cd gardener
+git fetch --all --tags
+git checkout v1.122.0
+```
+
+### Clean Old KinD Network
+This removes any leftover KinD network from previous runs to avoid IP or port conflicts.
+
+``` console
+docker network rm kind
+```
+
+You should see the following output, which is expected when no leftover network exists:
+
+```output
+Error response from daemon: network kind not found
+exit status 1
+```
+
+You can confirm this by typing:
+
+```console
+docker network ls
+```
+
+Your output should look something like this (note that the "kind" network is absent from the network config):
+
+```output
+NETWORK ID NAME DRIVER SCOPE
+bb9f7955c11b bridge bridge local
+aec64365a860 host host local
+d60c34b45e0a none null local
+```
+
+### Create Gardener KinD Cluster
+This step creates the Kubernetes cluster using KinD and prepares it to run Gardener.
+
+``` console
+make kind-up
+```
+
+You should see an output similar to:
+
+```output
+clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server serverside-applied
+service/metrics-server serverside-applied
+deployment.apps/metrics-server serverside-applied
+apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io serverside-applied
+[gardener-local-control-plane] Setting up containerd registry mirror for host gcr.io.
+[gardener-local-control-plane] Setting up containerd registry mirror for host registry.k8s.io.
+[gardener-local-control-plane] Setting up containerd registry mirror for host quay.io.
+[gardener-local-control-plane] Setting up containerd registry mirror for host europe-docker.pkg.dev.
+[gardener-local-control-plane] Setting up containerd registry mirror for host garden.local.gardener.cloud:5001.
+Waiting for FelixConfiguration to be created...
+FelixConfiguration 'default' successfully updated.
+Approving Kubelet Serving Certificate Signing Requests...
+certificatesigningrequest.certificates.k8s.io/csr-nbmbj approved
+certificatesigningrequest.certificates.k8s.io/csr-rnvdk approved
+Kubelet Serving Certificate Signing Requests approved.
+```
+
+### If the above "make" command fails...
+
+If the make command fails, reset the loopback interfaces as follows and retry it:
+
+``` console
+sudo ip addr del 172.18.255.1/32 dev lo
+sudo ip addr del 172.18.255.22/32 dev lo
+sudo ip addr add 172.18.255.1/32 dev lo
+sudo ip addr add 172.18.255.22/32 dev lo
+ip addr show lo
+make kind-up
+```
+
+
+### Export kubeconfig
+This config file allows `kubectl` to connect to the newly created Gardener local cluster.
+
+``` console
+export KUBECONFIG=$PWD/example/gardener-local/kind/local/kubeconfig
+kubectl get nodes
+```
+You should see an output similar to:
+
+```output
+NAME STATUS ROLES AGE VERSION
+gardener-local-control-plane Ready control-plane 41s v1.32.5
+```
+
+### Deploy Gardener Components
+This installs all Gardener control-plane services, including the API server, controller, scheduler, and monitoring tools.
+ +``` console +make gardener-up +kubectl get pods -n garden +``` + +You should see an output similar to: + +```output +NAME READY STATUS RESTARTS AGE +dependency-watchdog-prober-d47b5899f-9dltz 1/1 Running 0 118s +dependency-watchdog-prober-d47b5899f-gn7fh 1/1 Running 0 118s +dependency-watchdog-weeder-66f8bffd8b-skb64 1/1 Running 0 118s +dependency-watchdog-weeder-66f8bffd8b-th59c 1/1 Running 0 118s +etcd-0 1/1 Running 0 3m56s +etcd-druid-65d56db866-bstcm 1/1 Running 0 118s +etcd-druid-65d56db866-zkfjb 1/1 Running 0 117s +fluent-bit-8259c-s5wnv 1/1 Running 0 98s +fluent-operator-5b9ff5bfb7-6tz2w 1/1 Running 0 118s +fluent-operator-5b9ff5bfb7-6xqfx 1/1 Running 0 118s +gardener-admission-controller-899c585bf-2mp9g 1/1 Running 2 (3m41s ago) 3m44s +gardener-admission-controller-899c585bf-xp2f4 1/1 Running 2 (3m41s ago) 3m44s +gardener-apiserver-54fcdfcd97-5zkgr 1/1 Running 0 3m44s +gardener-controller-manager-77bf4b686f-zxgsh 1/1 Running 3 (3m20s ago) 3m44s +gardener-extension-admission-local-57d674d98f-fswsz 1/1 Running 0 25s +gardener-extension-admission-local-57d674d98f-fwg49 1/1 Running 0 25s +gardener-resource-manager-cfd685fc5-jdv8v 1/1 Running 0 2m34s +gardener-resource-manager-cfd685fc5-spmrr 1/1 Running 0 2m34s +gardener-scheduler-6599d654c9-vw2q5 1/1 Running 0 3m44s +gardenlet-59cb4b6956-9htmc 1/1 Running 0 2m45s +kube-state-metrics-seed-f89d48b49-hbddp 1/1 Running 0 118s +kube-state-metrics-seed-f89d48b49-sc66l 1/1 Running 0 118s +nginx-ingress-controller-5bb9b58c44-ck2q7 1/1 Running 0 118s +nginx-ingress-controller-5bb9b58c44-r8wwd 1/1 Running 0 117s +nginx-ingress-k8s-backend-5547dddffd-fqsfm 1/1 Running 0 117s +perses-operator-9f9694dcd-wvl5z 1/1 Running 0 119s +plutono-776964667b-225r7 2/2 Running 0 117s +prometheus-aggregate-0 2/2 Running 0 113s +prometheus-cache-0 2/2 Running 0 113s +prometheus-operator-8447dc86f9-6mb25 1/1 Running 0 119s +prometheus-seed-0 2/2 Running 0 112s +vali-0 2/2 Running 0 118s +vpa-admission-controller-76b4c99684-4m6pb 1/1 Running 0 115s +vpa-admission-controller-76b4c99684-qf8c6 1/1 Running 0 115s +vpa-recommender-5b668455db-fctrs 1/1 Running 0 116s +vpa-recommender-5b668455db-sdpv6 1/1 Running 0 116s +vpa-updater-7dd7dccc6d-bcgv8 1/1 Running 0 116s +vpa-updater-7dd7dccc6d-jdxrg 1/1 Running 0 116s +``` + +### Verify Seed +This checks whether the “seed” cluster (the infrastructure cluster managed by Gardener) is healthy and ready. + +``` console +./hack/usage/wait-for.sh seed local GardenletReady SeedSystemComponentsHealthy ExtensionsReady +kubectl get seeds +``` + +You should see an output similar to: + +```output +⏳ Checking last operation state and conditions for seed/local with a timeout of 600 seconds... +✅ Last operation state is 'Succeeded' and all conditions passed for seed/local. +gcpuser@lpprojectsusearm64:~/gardener> kubectl get seeds +NAME STATUS LAST OPERATION PROVIDER REGION AGE VERSION K8S VERSION +local Ready Reconcile Succeeded (100%) local local 2m48s v1.122.0 v1.32.5 +``` + +### Create Shoot Cluster +A Shoot cluster is a user Kubernetes cluster managed by Gardener. This step creates a sample Shoot running locally. 
+
+``` console
+kubectl apply -f example/provider-local/shoot.yaml
+kubectl -n garden-local get shoots
+```
+
+You should see an output similar to:
+
+```output
+shoot.core.gardener.cloud/local created
+> kubectl -n garden-local get shoots
+NAME CLOUDPROFILE PROVIDER REGION K8S VERSION HIBERNATION LAST OPERATION STATUS AGE
+local local local local 1.33.0 Awake Create Succeeded (100%) healthy 3h45m
+```
+
+### Add shoot DNS
+These DNS entries allow your system to resolve the Shoot cluster’s API endpoint correctly.
+
+``` console
+cat <<EOF | sudo tee -a /etc/hosts
+172.18.255.1 api.local.local.external.local.gardener.cloud
+172.18.255.1 api.local.local.internal.local.gardener.cloud
+EOF
+```
+
+### Get the Shoot kubeconfig
+Request an admin kubeconfig for the Shoot cluster and use it to list the Shoot's nodes.
+
+``` console
+kubectl create \
+    -f <(printf '{"spec":{"expirationSeconds":600}}') \
+    --raw /apis/core.gardener.cloud/v1beta1/namespaces/garden-local/shoots/local/adminkubeconfig | \
+    yq '.status.kubeconfig' | \
+    base64 -d > admin-kubeconf.yaml
+KUBECONFIG=admin-kubeconf.yaml kubectl get nodes
+```
+
+{{% notice Note %}}
+If you get the following result from the "kubectl get nodes" command above:
+```output
+No resources found
+```
+Please wait a bit and retry; your nodes are still being created!
+{{% /notice %}}
+
+
+You should see an output similar to:
+
+```output
+NAME STATUS ROLES AGE VERSION
+machine-shoot--local--local-local-68499-nhvjl Ready worker 12m v1.33.0
+```
+
+You now have **Gardener Local running on SUSE Arm64** with Go 1.24, Helm, kubectl, yq, Kustomize, Kind, and a working Shoot cluster.
diff --git a/content/learning-paths/servers-and-cloud-computing/gardener-gcp/instance.md b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/instance.md
new file mode 100644
index 0000000000..0aa449deae
--- /dev/null
+++ b/content/learning-paths/servers-and-cloud-computing/gardener-gcp/instance.md
@@ -0,0 +1,44 @@
+---
+title: Create a Google Axion C4A Arm virtual machine on GCP
+weight: 3
+
+### FIXED, DO NOT MODIFY
+layout: learningpathall
+---
+
+## Overview
+
+In this section, you will learn how to provision a Google Axion C4A Arm virtual machine on Google Cloud Platform (GCP) using the `c4a-standard-4` (4 vCPUs, 16 GB memory) machine type in the Google Cloud Console.
+
+{{% notice Note %}}
+For support on GCP setup, see the Learning Path [Getting started with Google Cloud Platform](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/google/).
+{{% /notice %}}
+
+## Provision a Google Axion C4A Arm VM in Google Cloud Console
+
+To create a virtual machine based on the C4A instance type:
+- Navigate to the [Google Cloud Console](https://console.cloud.google.com/).
+- Go to **Compute Engine > VM Instances** and select **Create Instance**.
+- Under **Machine configuration**:
+  - Populate fields such as **Instance name**, **Region**, and **Zone**.
+  - Set **Series** to `C4A`.
+  - Select `c4a-standard-4` for machine type.
+
+  ![Create a Google Axion C4A Arm virtual machine in the Google Cloud Console with c4a-standard-4 selected alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console")
+
+
+- Under **OS and Storage**, select **Change**, then choose an Arm64-based OS image. For this Learning Path, use **SUSE Linux Enterprise Server**.
+- If you use **SUSE Linux Enterprise Server**, select **Pay As You Go** for the license type.
+- Edit the disk size by changing the **Size (GB)** field from 10 to 50, increasing the disk size of the VM to 50 GB.
+- When you have configured these options, click **Select**.
+- Under **Networking**, enable **Allow HTTP traffic** as well as **Allow HTTPS traffic**.
+- Click **Create** to launch the instance.
+- Once created, you should see an **SSH** option to the right in your list of VM instances.
Select it to launch an SSH session into your VM instance:
+
+![Invoke a SSH session via your browser alt-text#center](images/gcp-ssh.png "Invoke a SSH session into your running VM instance")
+
+- A browser window opens, and you should now see a shell into your VM instance:
+
+![Terminal Shell in your VM instance alt-text#center](images/gcp-shell.png "Terminal shell in your VM instance")
+
+Next, let's install Gardener!
\ No newline at end of file
diff --git a/content/learning-paths/servers-and-cloud-computing/memory_consistency/litmus_syntax.md b/content/learning-paths/servers-and-cloud-computing/memory_consistency/litmus_syntax.md
index 0bb56d6f3f..77adbe5941 100755
--- a/content/learning-paths/servers-and-cloud-computing/memory_consistency/litmus_syntax.md
+++ b/content/learning-paths/servers-and-cloud-computing/memory_consistency/litmus_syntax.md
@@ -68,7 +68,7 @@ Now inspect this litmus file to gain a better understanding of the assembly code
- "On `P1`, is it possible to observe register `W0` (the flag) set to 1 **AND** register `W2` (the payload) set to 0?"
- Wait...but the condition uses register names `X0` and `X2`, not `W0` and `W2`. See the note below for more.
- In this condition check syntax, `/\` is a logical **AND**, while `\/` is a logical **OR**.
-#### Note on `X` and `W` Registers:
+#### Note on X and W Registers:
- Notice you are using `X` registers for storing addresses and for doing the condition check, but `W` registers for everything else.
- Addresses need to be stored as 64-bit values, hence the need to use `X` registers for the addresses because they are 64-bit. `W` registers are 32-bit. In fact, register `Wn` is the lower 32-bits of register `Xn`.
- Writing the litmus tests this way is simpler than using all `X` registers. If all `X` registers are used, the data type of each register needs to be declared on additional lines. For this reason, most tests are written as shown above. The way this is done may be changed in the future to reduce potential confusion around the mixed use of `W` and `X` registers, but all of this is functionally correct.
@@ -77,7 +77,7 @@ Before you run this test with `herd7` and `litmus7`, you can hypothesize on what
Further, if you interleave these instructions in all possible permutations, you can figure out all of the possible valid outcomes of registers `X0` (flag) and `X2` (payload) on `P1`. For the example test above, the possible valid outcomes of `(X0,X2)` (or `(flag,data)`) are `(0,0)`, `(0,1)`, & `(1,1)`. Some permutations that result in these valid outcomes are shown below. These are not all the possible instruction permutations for this test. Listing them all would make this section needlessly long.
-#### A Permutation That Results in `(0,0)`:
+#### A Permutation That Results in (0,0):

```output
(P1) LDR W0, [X1] # P1 reads flag, gets 0
@@ -89,7 +89,7 @@ Further, if you interleave these instructions in all possible permutations, you
```
In this permutation of the test execution, `P1` runs to completion before `P0` even starts its execution. For this reason, `P1` observes the initial values of 0 for both the flag and payload.
-#### A Permutation That Results in `(0,1)`:
+#### A Permutation That Results in (0,1):
 
```output
(P1) LDR W0, [X1] # P1 reads flag, gets 0
@@ -101,7 +101,7 @@ In this permutation of the test execution, `P1` runs to completion before `P0` e
 
```
In this permutation of the test execution, `P1` reads the initial value of the flag (the first line) because this instruction is executed before `P0` writes the flag (the last line). However `P1` reads the payload value of 1 because it executes after `P0` writes the payload to 1 (third and fourth lines).
 
-#### A Permutation that Results in `(1,1)`:
+#### A Permutation That Results in (1,1):
 
```output
(P0) MOV W0, #1
@@ -142,7 +142,7 @@ The Arm memory model tends to be considered a Relaxed Consistency model, which m
 
In a Relaxed Consistency model, ordinary memory accesses like `STR` and `LDR` do not need to follow program order. This relaxation in the ordering rules expands the list of instruction permutations in the litmus test above. It is these additional instruction permutations allowed by the Relaxed Consistency model that yield at least one permutation that results in `(1,0)`. Below is one such example of a permutation. For this permutation, the `LDR` instructions in `P1` are reordered.
 
-#### One Possible Permutation Resulting in `(1,0)`:
+#### One Possible Permutation Resulting in (1,0):
 
```output
(P1) LDR W2, [X3] # P1 reads payload, gets 0
diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/_index.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/_index.md
index c509ef77f2..dc51715408 100644
--- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/_index.md
+++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/_index.md
@@ -1,26 +1,21 @@
 ---
-title: Deploy Puppet on Google Cloud C4A (Arm-based Axion VMs)
-
-draft: true
-cascade:
-  draft: true
+title: Deploy Puppet on Google Cloud C4A
 
 minutes_to_complete: 30
 
-who_is_this_for: This is an introductory topic for software developers deploying and optimizing Puppet workloads on Arm Linux environments, specifically using Google Cloud C4A virtual machines powered by Axion processors.
+who_is_this_for: This is an introductory topic for developers deploying and optimizing Puppet workloads on Arm Linux environments, specifically using Google Cloud C4A virtual machines powered by Axion processors.
learning_objectives: - - Provision an Arm-based SUSE SLES virtual machine on Google Cloud (C4A with Axion processors) - - Install Puppet on a SUSE Arm64 (C4A) instance + - Provision an Arm-based SUSE SLES (SUSE Linux Enterprise Server) virtual machine (VM) on Google Cloud C4A with Axion processors + - Install Puppet on a SUSE Arm64 C4A instance - Verify Puppet by applying a test manifest and confirming successful resource creation on Arm64 - Benchmark Puppet by measuring catalog compile time, apply speed, and resource usage on Arm64 +author: Pareena Verma prerequisites: - A [Google Cloud Platform (GCP)](https://cloud.google.com/free) account with billing enabled - Basic familiarity with [Puppet](https://www.puppet.com/) -author: Pareena Verma - ##### Tags skilllevels: Introductory subjects: Performance and Architecture diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/background.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/background.md index a83fa26d68..878ec7f707 100644 --- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/background.md +++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/background.md @@ -1,27 +1,30 @@ --- -title: Getting started with Puppet on Google Axion C4A (Arm Neoverse-V2) +title: Get started with Arm-based Google Axion and Puppet weight: 2 layout: "learningpathall" --- -## Google Axion C4A Arm instances in Google Cloud +## Automate and optimize cloud deployments on Arm + +Modern cloud workloads demand scalable, efficient, and automated infrastructure management. By combining Arm-based Google Axion C4A instances with Puppet, you can take advantage of high-performance, energy-efficient virtual machines and powerful configuration management tools. This section introduces the key technologies you'll use to automate and optimize your cloud deployments on Arm in Google Cloud. + +## Explore Google Axion C4A Arm instances on Google Cloud Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse-V2 cores. Designed for high-performance and energy-efficient computing, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications. The C4A series provides a cost-effective alternative to x86 virtual machines while leveraging the scalability and performance benefits of the Arm architecture in Google Cloud. -To learn more about Google Axion, refer to the [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu) blog. - -## Puppet +To learn more about Google Axion, see the Google blog [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu). -[Puppet](https://puppet.com/) is an **open-source configuration management and automation tool** designed to help system administrators and DevOps teams **manage infrastructure as code**. Developed by [Puppet Labs](https://puppet.com/company/), it automates the provisioning, configuration, and management of servers and services across large-scale environments. +## Explore Puppet +[Puppet](https://puppet.com/) is an open-source configuration management and automation tool designed to help system administrators and DevOps teams manage infrastructure as code. 
Developed by [Puppet Labs](https://puppet.com/company/), it automates the provisioning, configuration, and management of servers and services across large-scale environments.
 
-Puppet uses a **declarative language** to define system configurations, ensuring that every machine’s state matches the desired setup described in its manifests. It supports both **agent-based** and **agentless** architectures, making it flexible for diverse deployment needs.
+Puppet uses a declarative language to define system configurations, ensuring that every machine’s state matches the desired setup described in its manifests. It supports both agent-based and agentless architectures, making it flexible for diverse deployment needs.
 
-Known for its **scalability**, **reliability**, and **idempotent behavior**, Puppet continuously enforces configurations, reducing manual effort and configuration drift. It integrates well with major platforms like **Linux**, **Windows**, **macOS**, and cloud providers such as **AWS**, **Azure**, and **GCP**.
+Known for its scalability, reliability, and idempotent behavior, Puppet continuously enforces configurations, reducing manual effort and configuration drift. It integrates well with major platforms like Linux, Windows, macOS, and cloud providers such as AWS, Azure, and GCP.
 
-Common use cases include **automating server configuration**, **applying security policies**, **software installation**, and **infrastructure auditing**. Puppet is widely used in enterprises for managing **hybrid and multi-cloud environments** efficiently.
+Common use cases include automating server configuration, applying security policies, installing software, and auditing infrastructure. Puppet is widely used in enterprises for managing hybrid and multi-cloud environments efficiently.
 
 To learn more, visit the [official Puppet website](https://puppet.com/).
diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/baseline.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/baseline.md
index 8d2b43a6a3..c2fcc9ec61 100644
--- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/baseline.md
+++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/baseline.md
@@ -1,16 +1,15 @@
 ---
-title: Puppet Baseline Testing on Google Axion C4A Arm Virtual Machine
+title: Perform Puppet baseline testing
 weight: 5
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 
-## Puppet baseline testing on GCP SUSE VMs
+## Overview
+You can perform baseline testing of Puppet on a GCP SUSE Arm64 VM to make sure your installation works as expected. In this Learning Path, you'll verify Puppet and Facter versions, run basic Puppet commands, apply a simple manifest, and confirm that system facts are collected correctly. These steps help you validate your setup before moving on to advanced Puppet tasks.
 
-You can perform baseline testing of Puppet on a GCP SUSE Arm64 VM to verify that the installation works correctly. You will check Puppet and Facter versions, run basic Puppet commands, apply a simple manifest, and confirm that system facts are collected accurately.
-
-### Verify the Puppet installation
+## Verify the Puppet installation
 
 Verify that Puppet and Facter are correctly installed and respond to version checks.
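The check commands themselves fall outside the hunk below; as a minimal sketch, they are typically just the version queries shown here (assuming `puppet` and `facter` are already on your PATH):

```console
# Print the installed Puppet and Facter versions; a version string
# confirms each binary is installed and executable.
puppet --version
facter --version
```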
@@ -49,7 +48,7 @@ The output confirms the Ruby version and architecture:
 ruby 3.1.4p223 (2023-03-30 revision 957bb7cb81) [aarch64-linux]
```
 
-### Run a simple Puppet command
+## Run a simple Puppet command
 
Check that Puppet responds to commands by running `puppet help`. If the help menu appears, Puppet is working correctly.
@@ -96,9 +95,9 @@ See 'puppet help ' for help on a specific subcommand.
 
Puppet v8.10.0
```
 
-### Test a Simple Puppet Manifest
+## Test a simple Puppet manifest
 
-Create a basic Puppet script to make sure Puppet can apply configurations. If it successfully creates the test file, your Puppet agent functions as expected.
+Create a basic Puppet script to make sure Puppet can apply configurations. If the script runs and creates the specified test file, it confirms that your Puppet agent is functioning correctly:
 
```bash
cd ~
@@ -124,7 +123,7 @@ Notice: /Stage[main]/Main/File[/tmp/puppet_test.txt]/ensure: defined content as
 Notice: Applied catalog in 0.01 seconds
```
 
-Open the file created by Puppet to confirm the content matches your script. This step validates that Puppet executed your manifest correctly.
+Open the file created by Puppet to confirm the content matches your script. This step validates that Puppet executed your manifest correctly:
 
```console
cat /tmp/puppet_test.txt
@@ -135,7 +134,7 @@ Output:
 Hello from Puppet on SUSE ARM64!
```
 
-### Check Facter integration
+## Check Facter integration
 
 Run `facter` commands to verify that it collects accurate system details, such as the OS and CPU type. This ensures Puppet can gather the facts it needs for automation decisions.
@@ -207,4 +206,13 @@ The output is similar to the following:
 }
```
 
-With these checks complete, proceed to the Puppet benchmarking section to run workload-focused tests on the GCP SUSE VMs.
+## What you've accomplished and what's next
+
+You've completed the essential baseline checks for Puppet on your GCP SUSE Arm64 VM. At this point, you've:
+
+- Verified that Puppet, Facter, and Ruby are installed and working
+- Confirmed Puppet responds to commands and applies manifests
+- Validated that Facter collects accurate system facts
+
+This progress means your environment is ready for more advanced testing. Next, you'll move on to Puppet benchmarking, where you'll run workload-focused tests to measure performance on your GCP SUSE VM.
+
diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/benchmarking.md
index 134d987dad..c06156d3d8 100644
--- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/benchmarking.md
+++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/benchmarking.md
@@ -1,5 +1,5 @@
 ---
-title: Puppet Benchmarking
+title: Benchmark Puppet
 weight: 6
 
### FIXED, DO NOT MODIFY
@@ -7,13 +7,12 @@ layout: learningpathall
 ---
 
-## Puppet Benchmark on GCP SUSE Arm64 VM
+## Benchmark Puppet on a GCP SUSE Arm64 VM
 
-This guide explains how to perform a **Puppet standalone benchmark** on a **Google Cloud Platform (GCP) SUSE Linux Arm64 VM**.
-It measures Puppet’s local execution performance without requiring a Puppet Master.
+This section walks you through how to perform a Puppet standalone benchmark on a Google Cloud Platform (GCP) SUSE Linux Arm64 VM. It measures Puppet’s local execution performance without requiring a Puppet Master.
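Standalone here means that `puppet apply` compiles and enforces a manifest entirely on the local node, with no server round-trip. As a quick illustration (a sketch using a placeholder manifest name; `--noop` is Puppet's standard dry-run flag):

```console
# Compile the catalog locally and report what would change,
# without modifying the system.
puppet apply --noop example.pp
```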
-### Prerequisites +## Prerequisites Ensure that Puppet is installed and functioning correctly: ```console @@ -24,7 +23,7 @@ Output: 8.10.0 ``` -### Create a Benchmark Manifest +## Create a benchmark manifest Create a directory and a simple manifest file: ```console @@ -41,11 +40,16 @@ notify { 'Benchmark Test': } ``` -- **notify** is a built-in Puppet resource type that displays a message during catalog application (like a print or log message). -- **'Benchmark Test'** is the title of the resource — a unique identifier for this notify action. -- **message => 'Running Puppet standalone benchmark.'** specifies the text message Puppet will print when applying the manifest. +## Explore the code -### Run the Benchmark Command +Here is a breakdown of the key elements in the `benchmark.pp` manifest to help you understand how Puppet processes and displays information during the benchmark: + +- `notify` is a built-in Puppet resource type that displays a message during catalog application (like a print or log message). +- `Benchmark Test` is the title of the resource. It's a unique identifier for this notify action. +- `message => 'Running Puppet standalone benchmark.'` specifies the text message Puppet prints when applying the manifest. + + +## Run the benchmark command This step runs Puppet in standalone mode using the `apply` command to execute the benchmark manifest locally while measuring execution time and performance statistics. ```console @@ -67,30 +71,40 @@ user 0m0.676s sys 0m0.367s ``` -### Benchmark Metrics Explanation - -- **Compiled catalog** → Puppet compiled your manifest into an execution plan. -- **Applied catalog** → Puppet executed the plan on your system. -- **real** → Total elapsed wall time (includes CPU + I/O). -- **user** → CPU time spent in user-space. -- **sys** → CPU time spent in system calls. - -### Benchmark results -The above results were executed on a `c4a-standard-4` (4 vCPU, 16 GB memory) Axiom Arm64 VM in GCP running SuSE: - -| **Metric / Log** | **Output** | -|-------------------|------------| -| Compiled catalog | 0.01 seconds | -| Environment | production | -| Applied catalog | 0.01 seconds | -| real | 0m1.054s | -| user | 0m0.676s | -| sys | 0m0.367s | - -### Puppet benchmarking summary - -- **Catalog compilation:** Completed in just **0.01 seconds**, showing excellent processing speed on **Arm64**. -- **Environment:** Executed smoothly under the **production** environment. -- **Configuration version:** Recorded as **1763407825**, confirming successful version tracking. -- **Catalog application:** Finished in **0.01 seconds**, demonstrating very low execution latency. -- **Real time:** Total runtime of **1.054 seconds**, reflecting efficient end. +## Interpret the benchmark metrics +Here is a breakdown of the key benchmark metrics you will see in the output: + +- `Compiled catalog`: Puppet parsed your manifest and generated a catalog, which is an execution plan describing the desired system state. This metric shows how quickly Puppet can process and prepare your configuration for application. Fast compilation times indicate efficient manifest design and good platform performance. +- `Applied catalog`: Puppet applied the compiled catalog to your VM, making the necessary changes to reach the desired state. This value reflects how quickly Puppet can enforce configuration changes on your Arm64 system. Low application times suggest minimal system overhead and effective resource management. 
+- `real`: This is the total elapsed wall-clock time from start to finish of the `puppet apply` command. It includes all time spent running the process, waiting for I/O, and any other delays. Lower real times mean faster end-to-end execution, which is important for automation and scaling.
+- `user`: This measures the amount of CPU time spent executing user-space code (Puppet and Ruby processes) during the benchmark. High user time relative to real time can indicate CPU-bound workloads, while lower values suggest efficient code execution.
+- `sys`: This is the CPU time spent in system (kernel) calls, such as file operations or network access. Lower sys times are typical for lightweight manifests, while higher values may indicate more intensive system interactions or I/O operations.
+
+## Benchmark results
+
+The following table summarizes the benchmark metrics collected from running Puppet on a `c4a-standard-4` (4 vCPU, 16 GB memory) Axion Arm64 VM in Google Cloud Platform (GCP) with SUSE Linux. These results provide a baseline for evaluating Puppet’s performance on Arm64 infrastructure. Use this data to compare against other VM types or architectures, and to identify areas for further optimization.
+
+| Metric / Log | Output |
+|--------------------|--------------|
+| Compiled catalog | 0.01 seconds |
+| Environment | production |
+| Applied catalog | 0.01 seconds |
+| real | 0m1.054s |
+| user | 0m0.676s |
+| sys | 0m0.367s |
+
+These metrics reflect efficient catalog compilation and application times, as well as low system overhead, demonstrating the strong performance of Puppet on Arm64-based GCP VMs.
+
+## Review Puppet benchmarking results
+
+Confirm that your benchmark output matches the expected metrics for catalog compilation, application, and system resource usage. If your results differ significantly, investigate VM resource allocation, manifest complexity, or system load. Use these metrics to validate Puppet performance on Arm64 and identify opportunities for further optimization.
+
+These benchmark results demonstrate that catalog compilation completed in only 0.01 seconds, highlighting the processing speed of the Arm64 platform. The benchmark ran smoothly in the production environment, and the configuration version was successfully recorded as 1763407825. Catalog application also finished in 0.01 seconds, indicating very low execution latency. The total runtime was 1.054 seconds, which reflects efficient overall performance for Puppet on an Arm64 SUSE VM in Google Cloud Platform.
+
+This benchmarking method is useful for validating Puppet performance after migration to Arm64, or when optimizing infrastructure for cost and speed. For more advanced benchmarking, consider automating multiple runs, collecting metrics over time, and comparing results with x86-based VMs to quantify the benefits of Arm64 on GCP; a minimal automation sketch follows at the end of this section.
+
+## Summary and next steps
+
+You’ve successfully benchmarked Puppet on an Arm64-based SUSE VM in Google Cloud Platform. You created and applied a simple manifest, measured key performance metrics, and interpreted the results to validate Puppet’s efficiency on Arm infrastructure. These steps help ensure your configuration management setup is optimized for speed and reliability on modern cloud platforms.
+
+Well done - completing this benchmark gives you a solid foundation for further automation and optimization with Puppet on Arm.
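Following on from the automation suggestion above, here is a minimal sketch of one way to repeat the run and collect wall-clock times. It assumes the manifest is saved as `benchmark.pp` in the current directory (adjust the path to match your setup):

```bash
#!/usr/bin/env bash
# Repeat the standalone benchmark 10 times, logging the shell's timing
# report (and any Puppet stderr output) to bench-times.log.
for i in $(seq 1 10); do
  { time puppet apply benchmark.pp > /dev/null ; } 2>> bench-times.log
done

# Show the elapsed (real) time recorded for each run.
grep real bench-times.log
```

Averaging several runs like this smooths out one-off variation from caching and background load, which matters when comparing against other VM types or architectures.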
\ No newline at end of file
diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/installation.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/installation.md
index 5e5d65cfcc..2909d98352 100644
--- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/installation.md
+++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/installation.md
@@ -1,30 +1,28 @@
 ---
-title: Install Puppet
+title: Install Puppet on a GCP VM
 weight: 4
 
 ### FIXED, DO NOT MODIFY
 layout: learningpathall
 ---
 
-## Install Puppet on GCP VM
-This guide walks you through installing Puppet on a **Google Cloud Platform (GCP) SUSE Linux Arm64 VM**, including all dependencies, Ruby setup, and environment preparation.
+## Set up your environment and install Puppet
+This section walks you through installing Puppet on a Google Cloud Platform (GCP) SUSE Linux Arm64 VM. You'll set up all required dependencies, build Ruby from source, and prepare the environment for Puppet automation.
 
-### Install build dependencies and Ruby from source
-Installs all required tools and builds Ruby 3.1.4 from source to ensure compatibility with Puppet.
+## Install dependencies and Ruby
+To get started, you'll install the required development tools and libraries, then build Ruby 3.1.4 from source. This approach prepares your environment for Puppet and helps prevent compatibility problems.
+
+To install the necessary packages for Ruby, use this command:
 
-First we install the prerequisites for ruby:
```console
sudo zypper install git curl gcc make patch libyaml-devel libffi-devel libopenssl-devel readline-devel zlib-devel gdbm-devel bzip2 bzip2-devel
```
 
-NOTE:
-```note
-Due to changing version dependencies, you may receive a message in the "zypper"
-command above that ncurses-devel is not the correct version. If so, please select the
-option that permits downgrading of the installed ncurses-devel package to the required
-version (normally "Solution 1"), followed by confirmation with "y".
-```
-Then, we will install ruby itself:
+
+{{% notice Note %}}If you see a version conflict for `ncurses-devel` during the `zypper` install, choose the option that allows downgrading `ncurses-devel` to the required version (usually "Solution 1"). Confirm the downgrade by entering "y" when prompted. This step can be confusing at first, but it's a common requirement for building Ruby from source on SUSE Linux.{{% /notice %}}
+
+Next, install Ruby:
+
```console
cd ~
sudo wget https://cache.ruby-lang.org/pub/ruby/3.1/ruby-3.1.4.tar.gz
@@ -34,45 +32,57 @@ sudo ./configure
 sudo make && sudo make install
```
 
-### Verify Ruby
-Checks that Ruby is correctly installed and available in your system path.
+## Verify Ruby
+Check that Ruby is correctly installed and available in your system PATH:
 
```console
ruby -v
which ruby
```
+The expected output is:
```output
ruby 3.1.4p223 (2023-03-30 revision 957bb7cb81) [aarch64-linux]
/usr/local/bin/ruby
```
 
-### Install Puppet dependencies
-Installs essential Puppet libraries (`semantic_puppet, facter, hiera`) needed for automation tasks.
+## Install Puppet dependencies
+Install the core Puppet libraries to enable automation and configuration management on your Arm-based GCP VM.
 
-- **semantic_puppet** – Provides tools for handling Puppet-specific versioning, modules, and dependency constraints.
-- **facter** – Collects system information (facts) such as OS, IP, and hardware details for Puppet to use in configuration decisions.
-- **hiera** – Key-value lookup tool that manages configuration data outside of Puppet manifests for flexible data separation.
+First, download and extract the Puppet source code:
 
```console
cd ~
sudo wget https://github.com/puppetlabs/puppet/archive/refs/tags/8.10.0.tar.gz
sudo tar -xvf 8.10.0.tar.gz
cd ~/puppet-8.10.0
+```
+
+Next, install the required Ruby gems for Puppet:
+
+```console
 sudo /usr/local/bin/gem install semantic_puppet -v "~> 1.0"
 sudo gem install facter -v "~> 4.0"
 sudo gem install hiera
```
 
-{{% notice Note %}}
-Puppet 8.8.1 version expands official support for Arm and AArch64, with new agent compatibility for AlmaLinux 9 (AARCH64), Rocky Linux 9 (AARCH64), and Ubuntu 24.04 (ARM). The release ensures compatibility with Ruby 3.3 and resolves multiple agent and catalog-related issues. Security is enhanced with an OpenSSL 3.0.14 upgrade, addressing CVE-2024-4603 and CVE-2024-2511 vulnerabilities.
-You can view [this release note](https://help.puppet.com/osp/current/Content/PuppetCore/PuppetReleaseNotes/release_notes_puppet_x-8-8-1.htm)
+- `semantic_puppet` manages Puppet-specific versioning and module dependencies
+- `facter` collects system information, such as operating system, IP address, and hardware details, for Puppet to use
+- `hiera` separates configuration data from Puppet manifests, making your automation setup more flexible
+
+These libraries are required for Puppet to run correctly and manage your Arm-based SUSE Linux VM on GCP effectively.
+
+{{% notice Note %}}Puppet version 8.8.1 introduces expanded support for Arm and AArch64 platforms. This release adds agent compatibility for AlmaLinux 9 (AARCH64), Rocky Linux 9 (AARCH64), and Ubuntu 24.04 (ARM). It works with Ruby 3.3 and fixes several agent and catalog issues. Security is improved with OpenSSL 3.0.14, addressing recent vulnerabilities (CVE-2024-4603 and CVE-2024-2511).
 
-The [Arm Ecosystem Dashboard](https://developer.arm.com/ecosystem-dashboard/) recommends Puppet version 8.8.1, the minimum recommended on the Arm platforms.
-{{% /notice %}}
+For more information, see the [official Puppet release notes](https://help.puppet.com/osp/current/Content/PuppetCore/PuppetReleaseNotes/release_notes_puppet_x-8-8-1.htm).
 
-### Build and install the Puppet gem
-The **Puppet gem** provides the core Puppet framework, including its CLI, manifest parser, and resource management engine.
+The [Arm Ecosystem Dashboard](https://developer.arm.com/ecosystem-dashboard/) recommends Puppet 8.8.1 as the minimum version for Arm platforms.
+{{% /notice %}}
+
+## Build and install the Puppet gem
+The Puppet gem provides the core Puppet framework, including its CLI, manifest parser, and resource management engine.
 
 Build and install the Puppet 8.10.0 package from source into your Ruby environment.
 
@@ -81,7 +91,7 @@ sudo gem build puppet.gemspec
 sudo /usr/local/bin/gem install puppet-8.10.0.gem
```
 
-### Verification
+## Verify Puppet installation
 Confirm Puppet is successfully installed and ready to use on the system.
```console
diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/instance.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/instance.md
index a8a819d241..d31b1fccd8 100644
--- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/instance.md
+++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/instance.md
@@ -11,33 +11,41 @@ layout: learningpathall
 
 In this section, you will learn how to provision a Google Axion C4A Arm virtual machine on Google Cloud Platform (GCP) using the `c4a-standard-4` (4 vCPUs, 16 GB memory) machine type in the Google Cloud Console.
 
 {{% notice Note %}}
-For support on GCP setup, see the Learning Path [Getting started with Google Cloud Platform](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/google/).
+If you need help setting up GCP, see the Learning Path [Getting started with Google Cloud Platform](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/google/).
 {{% /notice %}}
 
-## Provision a Google Axion C4A Arm VM in Google Cloud Console
+## Provision a Google Axion C4A Arm VM
 
-To create a virtual machine based on the C4A instance type:
-- Navigate to the [Google Cloud Console](https://console.cloud.google.com/).
-- Go to **Compute Engine > VM Instances** and select **Create Instance**.
-- Under **Machine configuration**:
-  - Populate fields such as **Instance name**, **Region**, and **Zone**.
-  - Set **Series** to `C4A`.
-  - Select `c4a-standard-4` for machine type.
+To create a virtual machine based on the C4A instance type, start by navigating to the [Google Cloud Console](https://console.cloud.google.com/), and follow these steps:
+- In the Google Cloud Console, go to **Compute Engine > VM Instances** and select **Create Instance**.
+- Under **Machine configuration**, enter the following details:
+  - **Instance name**: Choose a unique name for your VM.
+  - **Region** and **Zone**: Select the location closest to your users or workloads.
+  - **Series**: Set to `C4A` to use Arm-based Axion processors.
+  - **Machine type**: Select `c4a-standard-4` (4 vCPUs, 16 GB memory).
 
-  ![Create a Google Axion C4A Arm virtual machine in the Google Cloud Console with c4a-standard-4 selected alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console")
+  ![Creating a Google Axion C4A Arm virtual machine in Google Cloud Console with c4a-standard-4 selected. The screenshot shows the VM creation form with the C4A series and c4a-standard-4 machine type highlighted. alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console")
 
-- Under **OS and Storage**, select **Change**, then choose an Arm64-based OS image. For this Learning Path, use **SUSE Linux Enterprise Server**.
-- If using use **SUSE Linux Enterprise Server**. Select "Pay As You Go" for the license type.
-- Once appropriately selected, please Click **Select**.
-- Under **Networking**, enable **Allow HTTP traffic**.
-- Click **Create** to launch the instance.
-- Once created, you should see a "SSH" option to the right in your list of VM instances. Click on this to launch a SSH shell into your VM instance:
+- Under **OS and storage**, select **Change**, then choose an Arm64-based operating system image. For this Learning Path, select **SUSE Linux Enterprise Server**.
+- For **SUSE Linux Enterprise Server**, select **Pay As You Go** as the license type.
+- After selecting the image and license, select **Select** to confirm your choice.
+- Under **Networking**, enable **Allow HTTP traffic** to permit web access.
+- Select **Create** to launch your VM instance.
+- When the instance is ready, you'll see an **SSH** option next to your VM in the list. Select **SSH** to open a shell session in your browser.
 
-![Invoke a SSH session via your browser alt-text#center](images/gcp-ssh.png "Invoke a SSH session into your running VM instance")
+![Google Cloud Console VM instances list with the SSH button highlighted next to a running VM instance alt-text#center](images/gcp-ssh.png "Invoke a SSH session into your running VM instance")
 
-- A window from your browser should come up and you should now see a shell into your VM instance:
+When you select **SSH**, a new browser window opens with a shell prompt for your VM instance. You now have direct command-line access to your Arm-based VM, ready to run commands and manage your environment.
 
-![Terminal Shell in your VM instance alt-text#center](images/gcp-shell.png "Terminal shell in your VM instance")
+![Browser-based SSH session showing a terminal shell prompt inside the Google Axion C4A Arm VM instance alt-text#center](images/gcp-shell.png "Terminal shell in your VM instance")
 
-Next, let's install puppet! \ No newline at end of file
+
+## What you've accomplished and what's next
+
+You have successfully provisioned a Google Axion C4A Arm virtual machine on Google Cloud Platform using the Console. You selected the Arm64-based SUSE Linux Enterprise Server image, configured networking, and launched your VM. You also connected to your instance using the built-in SSH feature. You now have a running Arm VM on GCP and access to its shell environment.
+
+Next, you'll install Puppet on your new instance to automate configuration and management tasks.
\ No newline at end of file
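Before you do, one optional sanity check from the new shell confirms the VM really is Arm64. This is not part of the original steps, just a common quick verification; on an Axion-based C4A instance it prints `aarch64`:

```console
# Print the machine hardware architecture.
uname -m
```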